Springer Proceedings in Complexity
Marcin Czupryna Bogumił Kamiński Editors
Advances in Social Simulation Proceedings of the 16th Social Simulation Conference, 20–24 September 2021
Springer Proceedings in Complexity
Springer Proceedings in Complexity publishes proceedings from scholarly meetings on all topics relating to the interdisciplinary studies of complex systems science. Springer welcomes book ideas from authors. The series is indexed in Scopus. Proposals must include the following:
– name, place and date of the scientific meeting
– a link to the committees (local organization, international advisors etc.)
– scientific description of the meeting
– list of invited/plenary speakers
– an estimate of the planned proceedings book parameters (number of pages/articles, requested number of bulk copies, submission deadline)

Submit your proposals to: [email protected]
More information about this series at https://link.springer.com/bookseries/11637
Marcin Czupryna · Bogumił Kamiński Editors
Advances in Social Simulation Proceedings of the 16th Social Simulation Conference, 20–24 September 2021
Editors

Marcin Czupryna
Financial Markets, Cracow University of Economics, Kraków, Poland

Bogumił Kamiński
SGH Warsaw School of Economics, Warsaw, Poland
ISSN 2213-8684 ISSN 2213-8692 (electronic)
Springer Proceedings in Complexity
ISBN 978-3-030-92842-1 ISBN 978-3-030-92843-8 (eBook)
https://doi.org/10.1007/978-3-030-92843-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Social Simulation is a cross-disciplinary research field that aims to solve research problems in the social sciences using computational methods. The use of simulation allows researchers to adequately represent the complex mechanisms guiding the behaviour of social actors and to explain emergent phenomena at the level of whole societies.

This book constitutes the proceedings of SSC 2021, the 16th annual Social Simulation Conference. The conference is one of the essential activities of the European Social Simulation Association (ESSA), aimed at promoting social simulation and computational social science. The annual series of Social Simulation Conferences serves as a unique venue for exchanging ideas and discussing cutting-edge research in the field, both theoretical and applied.

This year, SSC 2021 attracted 113 submissions from North and South America, Asia and Europe. The conference was organised into one general track and the following special tracks: Artificial Sociality; Complex networks and complexity finance; Computational approaches to management and organisation theory; Heterogeneous and diverse economic agents; High-performance computing challenges in social simulation; Norms and institutions in the social environment; Simulating the dynamics of multilingualism and language policy issues; Social identity approach modelling; Social simulation and games; Social simulation for local actors empowerment; Using qualitative data to inform behavioural rules; and Value-driven behaviour in agent-based models, as well as an ESSA@work session. Of all submissions to the SSC 2021 Conference, 34 papers were selected for publication in the proceedings. All the papers underwent a double-blind review process. The proceedings contain the revised versions of the submissions.

This book consists of five parts covering virtually all areas of cutting-edge social simulation research. The first part, entitled Sociality, norms and institutions, consists of the first seven articles, presenting topics related to social aspects such as social diffusion, social norms and mobility. The second part, entitled Heterogeneous and diverse economic agents, consists of six articles that further enrich economic models with a higher level of heterogeneity and diversity among agents. The third part, entitled Social simulation and games, consists of four articles presenting the
topics related to gamification and social simulations. The fourth part, entitled Social simulations—theory and applications, consists of fifteen texts and covers a wide range of topics related to theoretical, fundamental problems of social simulations. It also provides valuable examples of applications to solving diverse problems. The last, fifth, part, entitled Using qualitative data to inform behavioural rules, consists of two articles on agent-based simulations as a possible vehicle for bridging the gap between qualitative evidence (e.g. texts gained from transcribing oral data or observations of people) and quantitative evidence.

We want to express our thanks to the members of the Programme Committee and the organisers of the special tracks, workshops and ESSA@work. Their support in reviewing the papers included in the proceedings and in organising the conference was invaluable. We would like to thank the members of the programme committee by name once again for their contribution: Diana Adamatti, Petra Ahrweiler, Fred Amblard, Marek Antosiewicz, Luis Antunes, Jennifer Badham, Federico Bianchi, Mike Bithell, Riccardo Boero, Melania Borit, Alessandro Caiani, Dino Carpentras, Emile Chappin, Kevin Chapuis, Edmund Chattoe-Brown, Marco Civico, Natalie Davis, Guillaume Deffuant, Frank Dignum, Paola D’Orazio, Bruce Edmonds, Corinna Elsenbroich, Andreas Flache, Christopher Frantz, Cesar García-Díaz, Amineh Ghorbani, Francesca Giardini, Nigel Gilbert, Nick Gotts, Laszlo Gulyas, Rainer Hegselmann, Gert Jan Hofstede, Sascha Holzhauer, Wander Jager, Peter Johnson, Andreas Koch, Friedrich Krebs, Francesco Lamperti, Stephan Leitner, Silvia Leoni, Michael Mäs, Ruth Meyer, Selcan Mutgan, Kavin Narasimhan, Martin Neumann, Paweł Oleksy, Jonathan Ozik, Bartosz Pankratz, Nicolas Payette, Gary Polhill, Lilit Popoyan, Jessica Reale, Michael Roos, Juliette Rouchier, Jordi Sabater-Mir, Geeske Scholz, Tobias Schroeder, Davide Secchi, Roman Seidl, Leron Shults, Małgorzata Snarska, Flaminio Squazzoni, Timo Szczepanska, Przemysław Szufel, Klaus G. Troitzsch, Harko Verhagen, Friederike Wall and Nanda Wijermans.

We hope that you will find the proceedings interesting and inspiring. The scope of the 34 papers included in the proceedings is comprehensive. This collection shows that social simulation as a research field, while already mature and well established, still offers abundant opportunities for exploring new scientific avenues. The special theme of the SSC 2021 Conference was “Social Simulation geared towards Post-Pandemic times”. The conference focused on questions raised by the current pandemic and on future challenges related to economic recovery, such as localisation, globalisation, inequality, sustainable growth and social changes induced by progressive digitalisation, data availability and artificial intelligence. We strongly believe that the included papers show that ideas born in research laboratories are used and useful to tackle real-life challenges and to work towards the common good of societies.

Kraków, Poland
Warsaw, Poland
Marcin Czupryna
Bogumił Kamiński
Contents
Sociality, Norms and Institutions

Efficient Redistribution of Scarce Resources Favours Hierarchies . . . . . . . . . . 3
Rob M. A. Nelissen, Ivet Andès Muñoz, David Cristobal Muñoz, Mark R. Kramer, and Gert Jan Hofstede

Evaluation of COVID-19 Infection Prevention Measures Compatible with Local Economy . . . . . . . . . . 15
Hideyuki Nagai and Setsuya Kurahashi

DIPP: Diffusion of Privacy Preferences in Online Social Networks . . . . . . . . . . 29
Albert Mwanjesa, Onuralp Ulusoy, and Pınar Yolum

Explaining and Resolving Norm-Behavior Inconsistencies—A Theoretical Agent-Based Model . . . . . . . . . . 41
Marlene C. L. Batzke and Andreas Ernst

Towards More Realism in Pedestrian Behaviour Models: First Steps and Considerations in Formalising Social Identity . . . . . . . . . . 53
Nanda Wijermans and Anne Templeton

Developing a Stakeholder-Centric Simulation Tool to Support Integrated Mobility Planning . . . . . . . . . . 65
Diego Dametto, Gabriela Michelini, Leonard Higi, Tobias Schröder, Daniel Klaperski, Roy Popiolek, Anne Tauch, and Antje Michel

Support Local Empowerment Using Various Modeling Approaches and Model Purposes: A Practical and Theoretical Point of View . . . . . . . . . . 79
Kevin Chapuis, Marie-Paule Bonnet, Neriane da Hora, Jôine Cariele Evangelista-Vale, Nina Bancel, Gustavo Melo, and Christophe Le Page
Heterogeneous and Diverse Economic Agents

Exploring Coevolutionary Dynamics Between Infinitely Diverse Heterogenous Adaptive Automated Trading Agents . . . . . . . . . . 93
Nik Alexandrov, Dave Cliff, and Charlie Figuero

Pay-for-Performance and Emerging Search Behavior: When Exploration Serves to Reduce Alterations . . . . . . . . . . 105
Friederike Wall

Effects of Limited and Heterogeneous Memory in Hidden-Action Situations . . . . . . . . . . 119
Patrick Reinwald, Stephan Leitner, and Friederike Wall

Autonomous Group Formation of Heterogeneous Agents in Complex Task Environments . . . . . . . . . . 131
Darío Blanco-Fernández, Stephan Leitner, and Alexandra Rausch

Exploring Regional Agglomeration Dynamics in Face of Climate-Driven Hazards: Insights from an Agent-Based Computational Economic Model . . . . . . . . . . 145
Alessandro Taberna, Tatiana Filatova, Andrea Roventini, and Francesco Lamperti

Dynamics of Wealth Inequality in Simple Artificial Societies . . . . . . . . . . 161
John C. Stevenson

Social Simulation and Games

Cruising Drivers’ Response to Changes in Parking Prices in a Serious Game . . . . . . . . . . 175
Sharon Geva and Eran Ben-Elia

Quantum Leaper: A Methodology Journey From a Model in NetLogo to a Game in Unity . . . . . . . . . . 191
Timo Szczepanska, Andreas Angourakis, Shawn Graham, and Melania Borit

How Perceived Complexity Impacts on Comfort Zones in Social Decision Contexts—Combining Gamification and Simulation for Assessment . . . . . . . . . . 203
Frederick Herget, Benedikt Kleppmann, Petra Ahrweiler, Jan Gruca, and Martin Neumann

A Hybrid Agent-Based Model to Simulate and Re-Think Post-COVID-19 Use Processes in Educational Facilities . . . . . . . . . . 217
Davide Simeone, Silvia Mastrolembo Ventura, Sara Comai, and Angelo L. C. Ciribini
Social Simulations—Theory and Applications

Generation Gaps: An Agent-Based Model of Opinion Shifts Among Cohorts . . . . . . . . . . 233
Ivan Puga-Gonzalez and F. LeRon Shults

Comparison of Viral Information Spreading Strategies in Social Media . . . . . . . . . . 247
Sri Sailesh Meegada and Subu Kandaswamy

An Evidence-Driven Model of Voting and Party Competition . . . . . . . . . . 261
Ruth Meyer, Marco Fölsch, Martin Dolezal, and Reinhard Heinisch

Shrinking Housing’s Size: Using Agent-Based Modelling to Explore Measures for a Reduction of Floor Area Per Capita . . . . . . . . . . 275
Anna Pagani, Francesco Ballestrazzi, and Claudia R. Binder

The Problem with Bullying: Lessons Learned from Modelling Marginalization with Diverse Stakeholders . . . . . . . . . . 289
Themis Dimitra Xanthopoulou, Andreas Prinz, and F. LeRon Shults

On the Impact of Misvaluation on Bilateral Trading . . . . . . . . . . 301
Sacha Bourgeois-Gironde and Marcin Czupryna

Consumer Participation in Demand Response Programs: Development of a Consumat-Based Toy Model . . . . . . . . . . 315
Judith Schwarzer and Dominik Engel

Using MBTI Agents to Simulate Human Behavior in a Work Context . . . . . . . . . . 329
Luiz Fernando Braz and Jaime Simão Sichman

Exposure to Non-exhaust Emission in Central Seoul Using an Agent-based Framework . . . . . . . . . . 343
Hyesop Shin and Mike Bithell

Theoretical Sampling and Qualitative Empirical Model Validation . . . . . . . . . . 355
Georg P. Mueller

The Large-Scale, Systematic and Iterated Comparison of Agent-Based Policy Models . . . . . . . . . . 367
Mike Bithell, Edmund Chattoe-Brown, and Bruce Edmonds

Simulating Delay in Seeking Treatment for Stroke Due to COVID-19 Concerns with a Hybrid Agent-Based and Equation-Based Model . . . . . . . . . . 379
Elizabeth Hunter, Bryony L. McGarry, and John D. Kelleher
Modelling Energy Security: The Case of Dutch Urban Energy Communities . . . . . . . . . . 393
Javanshir Fouladvand, Deline Verkerk, Igor Nikolic, and Amineh Ghorbani

Towards Efficient Context-Sensitive Deliberation . . . . . . . . . . 409
Maarten Jensen, Harko Verhagen, Loïs Vanhée, and Frank Dignum

Better Representing the Diffusion of Innovation Through the Theory of Planned Behavior and Formal Argumentation . . . . . . . . . . 423
Loic Sadou, Stéphane Couture, Rallou Thomopoulos, and Patrick Taillandier

Using Qualitative Data to Inform Behavioural Rules

Documenting Data Use in a Model of Pandemic “Emotional Contagion” Using the Rigour and Transparency Reporting Standard (RAT-RS) . . . . . . . . . . 439
Patrycja Antosz, Ivan Puga-Gonzalez, F. LeRon Shults, Justin E. Lane, and Roger Normann

A Methodology to Develop Agent-Based Models for Policy Design in Socio-Technical Systems Based on Qualitative Inquiry . . . . . . . . . . 453
Vittorio Nespeca, Tina Comes, and Frances Brazier

Author Index . . . . . . . . . . 469
Contributors
Petra Ahrweiler Johannes Gutenberg University Mainz, Mainz, Germany Nik Alexandrov Department of Computer Science, University of Bristol, Bristol, UK Andreas Angourakis McDonald Institute for Archaeological Research, University of Cambridge, Cambridge, UK Patrycja Antosz Center for Modeling Social Systems, NORCE, Universitetsveien 19, Kristiansand, Norway Francesco Ballestrazzi École Polytechnique Fédérale de Lausanne EPFL, Lausanne, Switzerland Nina Bancel UMR Sens, CIRAD, Montpellier, France Marlene C. L. Batzke Center for Environmental Systems Research (CESR), University of Kassel, Kassel, Germany Eran Ben-Elia Ben-Gurion University of the Negev, Be’er Sheva, Israel Claudia R. Binder École Polytechnique Fédérale de Lausanne EPFL, Lausanne, Switzerland Mike Bithell Department of Geography, University of Cambridge, Cambridge, UK Darío Blanco-Fernández University of Klagenfurt, Klagenfurt, Austria Marie-Paule Bonnet UMR 228 ESPACE-DEV, IRD, University of Montpellier, Montpellier, France Melania Borit Norwegian College of Fishery Science, UiT The Arctic University of Norway, Tromsø, Norway Sacha Bourgeois-Gironde Université Paris II, Paris, France Luiz Fernando Braz Laboratório de Técnicas Inteligentes (LTI), Escola Politécnica (EP), Universidade de São Paulo (USP), São Paulo, Brazil xi
Frances Brazier Delft University of Technology, Delft, BX, The Netherlands Kevin Chapuis UMR 228 ESPACE-DEV, IRD, University of Montpellier, Montpellier, France Edmund Chattoe-Brown School of Media, Communication and Sociology, University of Leicester, Leicester, England Angelo L. C. Ciribini Department of Civil, Architectural, Environmental Engineering and Mathematics, University of Brescia, Brescia, Italy Dave Cliff Department of Computer Science, University of Bristol, Bristol, UK Sara Comai Department of Civil, Architectural, Environmental Engineering and Mathematics, University of Brescia, Brescia, Italy Tina Comes Delft University of Technology, Delft, BX, The Netherlands Stéphane Couture INRAE, University of Toulouse, MIAT, Castanet Tolosan, France Marcin Czupryna Cracow University of Economics, Kraków, Poland Diego Dametto Fachhochschule Potsdam, Institut Für Angewandte Forschung Urbane Zukunft, Potsdam, Germany Neriane da Hora Centro de Desenvolvimento Sustentável-CDS, Universidade de Brasília, UnB, Brasília, Distrito Federal, Brazil Frank Dignum Department of Computing Science, Umeå University, Umeå, Sweden Martin Dolezal Department of Political Science, Paris Lodron University of Salzburg, Salzburg, Austria Bruce Edmonds Centre for Policy Modelling, Manchester Metropolitan University, Manchester, England Dominik Engel Centre for Secure Energy Informatics, Salzburg University of Applied Sciences, Puch, Salzburg, Austria Andreas Ernst Center for Environmental Systems Research (CESR), University of Kassel, Kassel, Germany Jôine Cariele Evangelista-Vale Centro de Desenvolvimento Sustentável-CDS, Universidade de Brasília, UnB, Brasília, Distrito Federal, Brazil Charlie Figuero Department of Computer Science, University of Bristol, Bristol, UK Tatiana Filatova Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
Javanshir Fouladvand Technology, Policy and Management Faculty, Delft University of Technology (TU Delft), Delft, The Netherlands Marco Fölsch Department of Political Science, Paris Lodron University of Salzburg, Salzburg, Austria Sharon Geva Ben-Gurion University of the Negev, Be’er Sheva, Israel Amineh Ghorbani Technology, Policy and Management Faculty, Delft University of Technology (TU Delft), Delft, The Netherlands Shawn Graham Department of History, Carleton University, Ottawa, Canada Jan Gruca Johannes Gutenberg University Mainz, Mainz, Germany Reinhard Heinisch Department of Political Science, Paris Lodron University of Salzburg, Salzburg, Austria Frederick Herget Johannes Gutenberg University Mainz, Mainz, Germany Leonard Higi Fachhochschule Potsdam, Institut Für Angewandte Forschung Urbane Zukunft, Potsdam, Germany Gert Jan Hofstede Information Technology Group, Social Sciences, Wageningen University, Wageningen, The Netherlands; North-West University, Potchefstroom, South Africa Elizabeth Hunter PRECISE4Q Predictive Modelling in Stroke, Technological University Dublin, Dublin, Ireland Maarten Jensen Department of Computing Science, Umeå University, Umeå, Sweden Subu Kandaswamy Indian Institute of Information Technology Sri City, Sri City, India John D. Kelleher PRECISE4Q Predictive Modelling in Stroke, Technological University Dublin, Dublin, Ireland; ADAPT Research Centre, Technological University Dublin, Dublin, Ireland Daniel Klaperski Fachhochschule Potsdam, Institut Für Angewandte Forschung Urbane Zukunft, Potsdam, Germany Benedikt Kleppmann Johannes Gutenberg University Mainz, Mainz, Germany Mark R. Kramer Information Technology Group, Social Sciences, Wageningen University, Wageningen, The Netherlands Setsuya Kurahashi Graduate School of System Management, University of Tsukuba, Tokyo, Japan Francesco Lamperti Institute of Economics, Scuola Superiore Sant’ Anna, Pisa, Italy
Justin E. Lane ALAN Analytics S.R.O, Bratislava, Slovakia Stephan Leitner Department of Management Control and Strategic Management, University of Klagenfurt, Klagenfurt, Austria Christophe Le Page UMR Sens, CIRAD, Montpellier, France Bryony L. McGarry PRECISE4Q Predictive Modelling in Stroke, Technological University Dublin, Dublin, Ireland; School of Psychological Science, University of Bristol, Bristol, UK Sri Sailesh Meegada Indian Institute of Information Technology Sri City, Sri City, India Gustavo Melo Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil Ruth Meyer Centre for Policy Modelling, Manchester Metropolitan University, Manchester, UK Antje Michel Fachhochschule Potsdam, Institut Für Angewandte Forschung Urbane Zukunft, Potsdam, Germany Gabriela Michelini Fachhochschule Potsdam, Institut Für Angewandte Forschung Urbane Zukunft, Potsdam, Germany Georg P. Mueller University of Fribourg, Fribourg, Switzerland David Cristobal Muñoz Information Technology Group, Social Sciences, Wageningen University, Wageningen, The Netherlands
Ivet Andès Muñoz Information Technology Group, Social Sciences, Wageningen University, Wageningen, The Netherlands Albert Mwanjesa Universiteit Utrecht, Utrecht, The Netherlands, Princetonplein 5, Utrecht, CC, The Netherlands Hideyuki Nagai Kyoto Arts and Crafts University, Kyoto, Japan Rob M. A. Nelissen Department of Social Psychology, Tilburg University, Tilburg, The Netherlands Vittorio Nespeca Delft University of Technology, Delft, BX, The Netherlands Martin Neumann Johannes Gutenberg University Mainz, Mainz, Germany Igor Nikolic Technology, Policy and Management Faculty, Delft University of Technology (TU Delft), Delft, The Netherlands Roger Normann Center for Modeling Social Systems, NORCE, Universitetsveien 19, Kristiansand, Norway Anna Pagani École Polytechnique Fédérale de Lausanne EPFL, Lausanne, Switzerland
Roy Popiolek Fachhochschule Potsdam, Institut Für Angewandte Forschung Urbane Zukunft, Potsdam, Germany Andreas Prinz University of Agder, Grimstad, Norway Ivan Puga-Gonzalez University of Agder, Kristiansand, Norway; Center for Modeling Social Systems at NORCE, Kristiansand, Norway Alexandra Rausch University of Klagenfurt, Klagenfurt, Austria Patrick Reinwald Department of Management Control and Strategic Management, University of Klagenfurt, Klagenfurt, Austria Andrea Roventini Institute of Economics, Scuola Superiore Sant’ Anna, Pisa, Italy Loic Sadou INRAE, University of Toulouse, MIAT, Castanet Tolosan, France Tobias Schröder Fachhochschule Potsdam, Institut Für Angewandte Forschung Urbane Zukunft, Potsdam, Germany Judith Schwarzer Centre for Secure Energy Informatics, Salzburg University of Applied Sciences, Puch, Salzburg, Austria Hyesop Shin School of Geographical and Earth Sciences, University of Glasgow, Glasgow, UK F. LeRon Shults University of Agder, Kristiansand, Norway; Center for Modeling Social Systems at NORCE, Kristiansand, Norway Jaime Simão Sichman Laboratório de Técnicas Inteligentes (LTI), Escola Politécnica (EP), Universidade de São Paulo (USP), São Paulo, Brazil Davide Simeone Agency – Agents in digital AEC, Rome, Italy John C. Stevenson Independent, Long Beach, NY, USA Timo Szczepanska Norwegian College of Fishery Science, UiT The Arctic University of Norway, Tromsø, Norway Alessandro Taberna Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands Patrick Taillandier IRD, Sorbonne University, UMI UMMISCO, Bondy, France; Thuyloi University, JEAI WARM, Hanoi, Vietnam Anne Tauch Fachhochschule Potsdam, Institut Für Angewandte Forschung Urbane Zukunft, Potsdam, Germany Anne Templeton University of Edinburgh, Edinburgh, UK Rallou Thomopoulos University of Montpellier, INRAE, Institut Agro, IATE, Montpellier, France Onuralp Ulusoy Universiteit Utrecht, Utrecht, The Netherlands, Princetonplein 5, Utrecht, CC, The Netherlands
Loïs Vanhée Department of Computing Science, Umeå University, Umeå, Sweden Silvia Mastrolembo Ventura Department of Civil, Architectural, Environmental Engineering and Mathematics, University of Brescia, Brescia, Italy Harko Verhagen Department of Computer and Systems Sciences, Stockholm University, Kista, Sweden Deline Verkerk Technology, Policy and Management Faculty, Delft University of Technology (TU Delft), Delft, The Netherlands; Institute of Environmental Sciences (CML), Leiden University, Leiden, The Netherlands Friederike Wall Department of Management Control and Strategic Management, University of Klagenfurt, Klagenfurt, Austria Nanda Wijermans Stockholm University, Stockholm, Sweden Themis Dimitra Xanthopoulou University of Agder, Grimstad, Norway Pınar Yolum Universiteit Utrecht, Utrecht, The Netherlands, Princetonplein 5, Utrecht, CC, The Netherlands
Sociality, Norms and Institutions
Efficient Redistribution of Scarce Resources Favours Hierarchies

Rob M. A. Nelissen, Ivet Andès Muñoz, David Cristobal Muñoz, Mark R. Kramer, and Gert Jan Hofstede
Abstract Common views identify resource abundance as the cause of the emergence of hierarchy in societies. We investigated whether hierarchy may also thrive as a mechanism for redistributing scarce and variable resources, mimicking conditions of ancestral, hunter-gatherer societies. To that end, we built an agent-based model in which we compared the relative success of a comprehensive range of redistribution strategies, derived from relational models theory (Fiske in Psychol Rev 99:689–723, 1992), and explored how well populations of agents that adopt different rules for sharing resources thrive under different levels of resource availability, reflecting scarcer versus more abundant environments. Our results show that under most levels of resource availability, a population of agents that redistribute pooled resources according to individual differences in rank among the agents (i.e., reflecting an “Authority Ranking” model) was more sustainable than populations that adopted equal- or need-based sharing rules, as well as than populations of agents that did not share resources at all. Our results suggest that the dominant manifestation of hierarchical organization in society does not require surplus and may derive from its effectiveness in dealing with scarce resources at the group level.

Keywords Resource availability · Sociality · Relational models theory · Redistribution · Hierarchy · Agent-based models · Multi-agent systems
R. M. A. Nelissen (B)
Department of Social Psychology, Tilburg University, Tilburg, The Netherlands
e-mail: [email protected]

I. A. Muñoz · D. C. Muñoz · M. R. Kramer · G. J. Hofstede
Information Technology Group, Social Sciences, Wageningen University, Wageningen, The Netherlands

G. J. Hofstede
North-West University, Potchefstroom, South Africa

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Czupryna and B. Kamiński (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_1
1 Introduction

According to Relational Models Theory [1, 2], people use one of four distinct ways to coordinate their activities in all domains of social life. These ‘social scripts’ are known as Communal Sharing, Authority Ranking, Equality Matching, and Market Pricing. In Communal Sharing (CS) relationships, people treat others as equivalent and undifferentiated with respect to the social domain in question (e.g. family members at dinner). Authority Ranking (AR) relations are characterized by people having asymmetric positions in a linear hierarchy in which subordinates defer, while superiors take precedence but also take pastoral responsibility for subordinates (e.g., military hierarchies). AR relationships derive from perceptions of legitimate asymmetries and do not necessarily involve coercive power. In Equality Matching (EM), principles of equity (e.g. turn taking, reciprocity, and equal share distributions) govern social interactions, as is the case for instance in collaborative enterprises such as inheritance, professional activities, and academic group assignments. Finally, in Market Pricing relationships, money is usually the medium that governs social interactions. Because this mode of distribution is predated by the others and did not emerge until the advent of currency, this type of relation is not explored further in the present study.

Much empirical support attests to the ubiquity of these relational models [2]. Still, one of the key unanswered questions about the theory concerns the factors that cause cultural, group- and institutional-level variations in the adoption of these models. Predictable associations do exist between the prevalence of certain relational models and cultural [3, 4] and organizational [5, 6] differences. However, such cross-societal variation may result from historical processes, and individuals often do not have much leeway for choice once a substantial proportion of the population in a given context adopts a particular model. As a result, we are largely unaware of conditions that favour the selection and proliferation of relational models.
1.1 Relational Models as Means of Redistribution

It seems plausible to assume that relational models are a form of adaptive social cognition that evolved because they solved distribution problems through coordinated social interaction, yet attempts at a more concrete specification of the nature of these problems are sparse. We propose that because the evolution of human sociality closely intertwines with our ancestral tendencies for food sharing [7, 8], the functions of the different relational models may well derive from the benefits of redistributing food and other resources.

Redistribution of resources can be beneficial as a form of insurance against the risk of variation in success of procuring (through hunting, finding, producing, harvesting, etc.) food [9, 10]. Pooling and sharing food is a way to enable sufficient consumption
for all individuals in a group even if some were unable to acquire a meal for themselves. Still, redistribution may follow different patterns according to the relational model that is adopted by the individuals in a population. The optimal way of redistribution probably depends on many factors, such as the size and the composition of the population, as well as on the nature of the type of food (or other resource) that is shared, and the larger social context (e.g., the possibility to trade or compete for resources with other groups). Yet, the core idea is that redistribution as a mechanism of insurance is ultimately a means of dealing with scarcity. Therefore, we started by investigating the influence of resource availability to determine conditions under which a group that adopts a certain pattern of redistribution is more successful (in terms of group size and survival) than a group that adopts a different pattern or than a number of individuals that do not share at all. In sum, the central research question is: How does resource availability (abundance versus scarcity) influence the success of different rules for redistributing resources?
1.2 Previous Research

Experimental research supports the idea that people show a tendency to inhibit personal consumption to share resources that are scarce [11–13] or unpredictably available [14], indirectly indicating a causal influence of resource availability on resource sharing tendencies. Experimental research, however, cannot reveal whether redistribution is actually also a more viable long-term strategy compared to selfish appropriation. A recent agent-based model (ABM) study showed that norm-driven sharing can in fact be a robust strategy when harvesting a common resource, provided that the resource is sufficiently scarce and any deviations from the norm are punished [15]. Another study, also reporting the results of an ABM, revealed that under ecological conditions mimicking those of early hunter-gatherer societies, resource sharing is also robust to free riding without punishment if agents that share, and are therefore vulnerable to free riders, have the opportunity to switch groups [16].

Although these earlier modelling studies reveal that food sharing can be a viable strategy and is beneficial particularly if resources are sufficiently scarce, they have only looked at redistribution strategies by which every individual gets an equal share (i.e., an EM strategy in our model), and have not investigated the relative success of different redistribution strategies that correspond with different relational models. Conceptualizing relational models as scripts for the redistribution of shared resources in a population, we investigated how well populations of agents that adopt different rules for sharing resources thrive (in terms of group size) under different levels of resource availability, reflecting scarcer versus more abundant environments.
2 Methods

2.1 Model Design

An agent-based model was built with the explicit purpose of investigating the success of different redistribution strategies, related to each of the three relational models under investigation (CS, AR, and EM), as a function of varying levels of resource availability in the environment. The model distinguishes four types of agents: three corresponding to the three relational models and a fourth populated by agents that do not share food (IA, “individual agents”; labelled NS, for non-sharing, in the rules below). Upon model initialization, agents of these four types are uniformly randomly distributed in the environment and given a random age and a unique resource need drawn from a random normal distribution around the average need. Resource availability is varied according to a slider which determines the number of resources generated upon initialization as well as the number regenerated each year (where one year is equal to 3000 time steps). These are also distributed randomly in the environment.

In each round of the model, agents move forward in the direction they are facing. If they come across a resource, they harvest it, which removes the resource from the environment. After this, agents scan their environment for nearby resources and, if they identify a resource within their radius of awareness, they turn to face it. If there are no resources within the radius of awareness, a random angle is added to their direction. Subsequently, agents pool their resources: agents belonging to a relational model place any harvested resource in the resource pool of their group, whereas individual agents immediately consume any resources that were harvested. The pooled resources are then redistributed according to the distribution rules derived from Relational Models theory (see below).

At the end of the year, agents die if they cannot consume sufficient resources to match their resource need. Agents also die if their age exceeds 80 years. If agents consumed resources beyond their need in that year, the excess resources determine their chance of reproduction. The chance of reproduction is taken to vary between 1 and 3%, which is consistent with global birth rates [17]. Agents can only reproduce once per annual cycle. If they do, offspring inherit resource need and approximate ranking from their parents.

As stated, the distribution of resources, and thus the consumption of resources by individuals of each group, depends upon the redistribution rules of that group, such that:
EM agents redistribute all pooled resources equally. In case the amount of resources is insufficient to be distributed equally among all agents, a random draw selects the desired number of agents from the group who are then each allocated one of these resources. AR agents redistribute pooled resources in accordance with the rank value they are initially assigned, which is randomly determined from a value set with a gamma distribution. First, the highest ranked agent is allocated resources until its needs have been fulfilled. Next, the second-highest-ranking agent consumes resources from the pool until her needs have been fulfilled, and so on until the pool is empty. If any, excess resources left in the pool after covering needs, are distributed one at a time, in order of ranking. NS (non-sharing) agents do not redistribute.
2.2 Simulations, Model Parameters, and Variables

Agent-specific variables include age, resource need and awareness. Age is initialized as a random number uniformly drawn from zero to the maximum lifespan of the agents. The maximum lifespan of the agents is set to 80 to approximate real human lives. The time-steps-per-year parameter is set to 3000, as this gives the agents time to collect roughly one resource per agent per year and the opportunity to fulfil their needs. A lifespan therefore consists of 240,000 time steps, so running a multi-generational trial requires on the order of hundreds of thousands of time steps. The resource need of each individual agent is set according to a random normal distribution with a mean of 1 and a standard deviation of 0.5. The awareness of all agents is set to 3, which allows agents to head towards resources if they are very close, but otherwise move randomly.

Baseline parameter settings were chosen to achieve a situation in which changing the number of resources within a reasonable range shows significant differences in the success of distribution strategies. While we acknowledge the opportunistic nature of this choice of parameter settings, we feel it is justified by the aim of our study. That is, if we want to see under what conditions different relational models thrive, we first need to find a parameter space in which they thrive at all; otherwise the model is useless in delivering on its purpose. For example, if agents’ resource need were too high, populations would have no chance of surviving, so changing the abundance of resources would have no impact.

All simulations started with 80 agents (20 of each type) and compared the relative success of each population in terms of population size, for a total duration of 240,000 time steps, representing 80 years in our models. Population size for each agent type was compared across different levels of resource availability (50, 100, 150, 200, 250, 300, 350) while keeping all other model parameters at constant levels (time steps per year = 3000, resource need = 1, standard deviation of the need = 0.5, awareness level = 3).
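For illustration, agent initialisation with these settings can be sketched as follows. Again, this is Python pseudocode rather than the NetLogo source, and the shape and scale of the gamma distribution used for AR ranks are assumptions, since only the distribution family is reported above.

```python
import random

MAX_AGE = 80          # maximum lifespan in years
MEAN_NEED = 1.0       # mean annual resource need
SD_NEED = 0.5         # standard deviation of the need
AWARENESS = 3         # radius within which resources are spotted

def make_agent(agent_type):
    agent = {
        "type": agent_type,                        # "CS", "EM", "AR" or "NS"
        "age": random.uniform(0, MAX_AGE),         # random age at initialisation
        "need": random.gauss(MEAN_NEED, SD_NEED),  # individual resource need
        "awareness": AWARENESS,
        "consumed": 0,
    }
    if agent_type == "AR":
        # Ranks are drawn from a gamma distribution; these parameters are
        # illustrative placeholders, not the values used in the model.
        agent["rank"] = random.gammavariate(2.0, 1.0)
    return agent

# 80 agents at start-up: 20 of each of the four types.
population = [make_agent(t) for t in ("CS", "EM", "AR", "NS") for _ in range(20)]
```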
Each model was run 10 times at each level of resource availability through BehaviorSpace in NetLogo [18], stopping each run after 24,000 time steps, which equalled 8 years in the model. The resulting data frame was processed using Python packages (pandas, matplotlib, PandasGUI) to obtain the statistics of each run, using the average number of agents in each population as the indicator of success.
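A minimal sketch of this post-processing step is given below. The file name, the name of the varied parameter and the reporter column names are illustrative assumptions about the BehaviorSpace table output, not the actual experiment file.

```python
import pandas as pd
import matplotlib.pyplot as plt

# BehaviorSpace "table" output: a few metadata lines, then one row per recorded
# step with the varied parameter, the run number and one count per agent type.
df = pd.read_csv("experiment-table.csv", skiprows=6)  # skip metadata header lines (adjust to the file at hand)

pop_cols = ["count-cs-agents", "count-em-agents", "count-ar-agents", "count-ns-agents"]

# Average population size within each run, then average over the 10 replicates
# per resource-availability level.
per_run = df.groupby(["resources", "[run number]"])[pop_cols].mean()
per_level = per_run.groupby("resources").mean()

# Express each population as a proportion of all surviving agents (cf. Fig. 2).
proportions = per_level.div(per_level.sum(axis=1), axis=0)
proportions.plot(kind="bar", stacked=True)
plt.xlabel("Resources regenerated per year")
plt.ylabel("Average proportion of agents")
plt.tight_layout()
plt.show()
```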
3 Results

The simulation trials show that AR outperformed the other redistribution models in terms of population growth (Fig. 1). The robustness of this effect was investigated by comparing the average proportion of agents of different types at different levels of resource availability (Fig. 2). These results again show that the populations of AR agents are the most successful, particularly at low resource availability, although the pattern is slightly less pronounced than in the examples in Fig. 1 due to the shorter runs (8 years rather than 80) used to produce the aggregated results.
Fig. 1 Examples of the output from single model runs at different levels of resource availability (from left to right and top to bottom: 50, 100, 150, 200, 250, 300, 350, 400), plotting population fluctuations for the different agents over one generation (80 years, 240,000 time steps)
Fig. 2 Average proportions of agents of different types at different levels of resource availability after 8 years (24,000 time steps) of model runs
To further validate the model, a sensitivity analysis was carried out, taking into account two other parameters (awareness and average need). This resulted in a hierarchy of effects with the number of resources available being the most impactful parameter for the model.
4 Discussion

4.1 Present Findings

Conceptualizing relational models as scripts for the redistribution of shared resources in a population, we addressed the question of how to explain differences in the prevalence of relational models by investigating how well populations of agents that adopt different rules for sharing resources thrive under different levels of resource availability, reflecting scarcer versus more abundant environments. Our results suggest that under most levels of resource availability, a population of agents that redistribute pooled resources according to differences in the rank of individual agents (i.e., reflecting an AR model) outperformed populations that adopted different redistribution rules. In fact, neither an equal-division rule (reflecting an EM model) nor a need-dependent redistribution rule (reflecting a CS model) was sustainable, except at very high levels of resource availability. Agents that did not engage in any form of redistribution also prevailed only at high levels of resource availability. Yet even when resources were abundant, the population of AR agents was often the most successful.

Offering a tentative answer as to why AR is such an effective means of redistribution, we suggest that a rank-based redistribution mechanism is a way to guarantee
that at least some of the agents get to consume, which is particularly important if resources are scarce. By prioritizing consumption for agents of a higher rank, populations of AR agents thus prevent starvation of the entire population. It is interesting to note that this mechanism was efficient even though rank in our model was arbitrarily determined. That is, rank did not reflect any individual qualities that could benefit the group—such as more efficient harvesting skills in the case of our model—although that usually is the case in natural populations [19]. If we were to model rank such that it would become dependent upon individual differences in harvesting skills, we expect that AR would become an even more successful means of redistribution.

Agents that adhere to an EM rule for redistribution do not prioritize the allocation of resources. Therefore, they wither under scarce resource conditions because the few resources that can be harvested are diluted too much, with no agents able to meet their resource needs. An interesting side effect is that a form of natural selection occurs within the EM model in which those with the lowest need survive; this makes it slightly more sustainable at scarce resource availability than CS. Just like AR, CS is also a means of prioritizing agents, namely those that have the highest need. This, however, makes communal sharing a wasteful means of redistributing resources at the population level, because resources go to those agents that already have more difficulty surviving because of their high need. It will therefore limit the reproductive potential of the population as a whole. This is a self-defeating strategy, especially if resources are scarce, and may only become sustainable if resources are very abundant. Finally, by not sharing resources at all, individualists cannot benefit from spreading the risk of individual variation in harvesting success, which is the key benefit of redistribution. As a result, the population is decimated as time proceeds because harvesting success is essentially a stochastic process.

While the outcomes were robust for low levels of resource availability, outcomes under abundant conditions (approximately at levels of resource availability >200) were more stochastic, and different populations of agents emerged as the most successful in different runs of the model. This is likely dependent on initial differences in the growth rates of the different populations. Since all agents harvest from the same pool, a faster initial population growth rate relative to that of the other populations means that more resources can be harvested in subsequent rounds, ensuring further population growth. So, differences in harvesting success in the initial rounds of the model, which are of course highly dependent upon the random allocation of agents and resources in the world, will create a positive feedback loop randomly favouring one population over the others. Still, this only happened if resources were so abundant that all models of redistribution were sufficiently effective to sustain a population of agents.
4.2 Model Validation: Intricacies in the Relations Between Resources and Redistribution Rules

The success of AR as revealed by our model may at first sight seem to be contradicted by ethnographic observations showing that hunter-gatherer societies are often very egalitarian and show patterns of food sharing that are more in line with CS or EM rules for redistributing resources [9, 20]. We therefore want to stress that the results from our model do not mean that other modes of redistribution cannot be sustainable under different conditions. The composition of the population of individuals among whom food is shared appears important in determining the success of CS norms for redistribution, which are common among small, kin-based groups, in which short-term (e.g., due to injuries and other physical conditions such as pregnancy) and intergenerational fluctuations in resource need and productivity are prevalent [8]. EM norms for redistribution are common if food production is difficult, hard to monopolize and yields diminishing marginal returns to consumption, as is the case for large game hunting [9, 21]. In such instances, sharing excess gains is not only relatively cheap; it is also a means of payment for joint effort and thus ensures mutual aid in days to come. Obviously, this pays off only if there is irregular excess that requires a joint effort, so when resources are scarce but also of very high yield. Model refinements could introduce such factors by having individual needs fluctuate across different rounds of the model and/or by varying not only the resource availability but also the energy provided by the resource.

It should further be noted that we operationalized success strictly in terms of population size. Another way to operationalize the success of different modes of redistribution is through longevity. Indeed, all populations tended to partly collapse and go through cycles of growth. (Incidentally, AR agents were again growing faster after these instances of population collapse.) This is because populations grow to the point of overexploitation, where the resources cannot sustain the population anymore, at which point there is a huge drop. Demographic statistics suggest that natural populations avert this through reduced birth rates or migration when resources become scarcer [22]. Such an adjustment of birth rate was not possible for the agents in our model.

Interestingly, resource sharing may present yet another way to prevent resource depletion. Generally, with any form of redistribution, the better skilled (or luckier) individuals support the weaker (or unluckier) ones, which may either attenuate the evolution of efficient harvesting skills or, alternatively, crowd out the motivation to harvest among the better skilled individuals. It has been proposed that this can be an underlying adaptive function of redistribution and that sharing rules in hunter-gatherer societies are an evolutionary response to the dynamics of their physical environment [23]. Sharing rules in this way are interpreted as an implicit tax on the harvest of renewable resources, the proceeds of which are redistributed equally among all members of the community. The implicit tax lowers the marginal return to resource harvesting, which potentially reduces effort and efficiency and increases the resource stock. In that way, resource sharing could potentially be adaptive at the group level as
a mechanism not only to avoid the risk of variation in harvesting success but also to prevent commons dilemmas. Further refinements of the model that would allow for the selection of harvesting skills or the attenuation of harvesting motivation would be required to test this assumption.
5 Conclusion

A hierarchical system clearly outperformed equal- and need-based systems as the most successful means of redistributing resources across most levels of resource availability in our model. This result may seem surprising, as equality appears to be the core value governing the ethics of many post-industrialized societies [24]. Indeed, psychological studies attest to the widespread individual preference for equal societies [25]. In that sense, the predominance of hierarchy over other means of redistribution can be perceived as an emergent property of societal organization: even though it does not coincide with the preferences of the majority, its dominant manifestation in society derives from its effectiveness in efficiently dealing with scarce resources at the group level.
References

1. Fiske, A.P.: The four elementary forms of sociality: framework for a unified theory of social relations. Psychol. Rev. 99(4), 689–723 (1992)
2. Haslam, N. (ed.): Relational models theory: a contemporary overview. Lawrence Erlbaum, Mahwah NJ (2004)
3. Hofstede, G. J., Frantz, C., Scholz, G., Schröder, T.: Artificial sociality manifesto. Unpublished manuscript (2021)
4. Vodosek, M.: The relationship between relational models and individualism and collectivism: evidence from culturally diverse work groups. Int. J. Psychol. 44(2), 120–128 (2009)
5. Boer, N.I., Berends, H., Van Baalen, P.: Relational models for knowledge sharing behavior. Eur. Manag. J. 29(2), 85–97 (2011)
6. Wellman, N.: Authority or community? A relational models theory of group-level leadership emergence. Acad. Manag. Rev. 42(4), 596–617 (2017)
7. Fried, M.H.: The evolution of political society. Random House, New York (1967)
8. Kaplan, H., Hill, K., Lancaster, J., Hurtado, A.M.: A theory of human life history evolution: diet, intelligence, and longevity. Evolut. Anthropol.: Issues, News Rev. 9(4), 156–185 (2000)
9. Kaplan, H., Gurven, M.: The natural history of human food sharing and cooperation: a review and a new multi-individual approach to the negotiation of norms. In: Gintis, H., Bowles, S., Boyd, R., Fehr, E. (Eds.). Moral Sentiments and Material Interests: The Foundations of Cooperation in Economic Life. MIT Press, Cambridge (MA) (2005)
10. Winterhalder, B.: Social foraging and the behavioral ecology of intragroup resource transfers. Evolut. Anthropol.: Issues, News Rev. 5(2), 46–57 (1996)
11. Effron, D.A., Miller, D.T.: Diffusion of entitlement: an inhibitory effect of scarcity on consumption. J. Exp. Soc. Psychol. 47, 378–383 (2011)
12. Van Dijk, E., Wilke, H.: Coordination rules in asymmetric social dilemmas. J. Exp. Soc. Psychol. 31, 1–27 (1995)
13. Osés-Eraso, N., Vildarich-Grau, M.: Appropriation and concern for resource scarcity in the commons: an experimental study. Ecol. Econ. 63, 435–445 (2007)
14. Kaplan, H.S., Schniter, E., Smith, V.L., Wilson, B.J.: Risk and the evolution of human exchange. Proceed. Royal Soc. B: Biol. Sci. 279(1740), 2930–2935 (2012)
15. Schlüter, M., Tavoni, A., Levin, S.: Robustness of norm-driven cooperation in the commons. Proceed. Royal Acad. Sci., B. 283, 2015243 (2016)
16. Lewis, H.M., Vinicius, L., Strods, J., Mace, R., Migliano, A.B.: High mobility explains demand sharing and enforced cooperation in egalitarian hunter-gatherers. Nat. Commun. 5(1), 1–8 (2014)
17. The World Bank: Fertility rate, total (births per woman) (2019). https://data.worldbank.org/indicator/SP.DYN.TFRT.IN. Accessed 30 April 2021
18. Wilensky, U.: NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL (1999)
19. Von Rueden, C., Gurven, M., Kaplan, H.: Why do men seek status? Fitness payoffs to dominance and prestige. Proceed. Royal Soc. B: Biol. Sci. 278(1715), 2223–2232 (2011)
20. Gurven, M.: To give and to give not: the behavioral ecology of human food transfers. Behav. Brain Sci. 27(4), 543 (2004)
21. Hawkes, K., Hill, K., O’Connell, J.F.: Why hunters gather: optimal foraging and the Ache of eastern Paraguay. Am. Ethnol. 9(2), 379–398 (1982)
22. Sear, R., Lawson, D. W., Kaplan, H., Shenk, M. K.: Understanding variation in human fertility: what can we learn from evolutionary demography? Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences 371(1692), 20150144 (2016)
23. Kägi, W.: The tragedy of the commons revisited: Sharing as a means to avoid environmental ruin. IWOE Discussion Paper No. 91 (2001)
24. Inglehart, R.F.: Changing values among western publics from 1970 to 2006. West Eur. Polit. 31(1–2), 130–146 (2008)
25. Norton, M.I., Ariely, D.: Building a better America—One wealth quintile at a time. Perspect. Psychol. Sci. 6(1), 9–12 (2011)
Evaluation of COVID-19 Infection Prevention Measures Compatible with Local Economy

Hideyuki Nagai and Setsuya Kurahashi
Abstract This study simulates the spread of the 2019 novel coronavirus disease (COVID-19) in a tourism location with an agent-based model and evaluates the effects of multiple infection prevention measures. In this model, a continuous influx of tourists brings about the spread of infection among regional residents going about their daily lives there. The results of the experiments showed that measures to reduce human-to-human contact have certain effects, but these effects are limited. On the other hand, we confirmed that regular PCR testing for tourism business employees combined with active epidemiological investigation is effective.

Keywords COVID-19 · Agent-based simulation · Tourism · Policy science
H. Nagai (B)
Kyoto Arts and Crafts University, Kyoto, Japan
e-mail: [email protected]

S. Kurahashi
Graduate School of System Management, University of Tsukuba, Tokyo, Japan
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Czupryna and B. Kamiński (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_2

1 Introduction

On October 1, 2020, in view of the governmental assessment that the situation of COVID-19 infections in Japan was under control, the “Go To Travel” campaign was executed in full, with the aim of stimulating tourism-area economies by promoting consumption [6]. The tourism promotion, established under the directive to “make Japan better through travel”, was not suspended until the arrival of the worst wave of infections to date, the third, became clear at the end of December. Elsewhere, preliminary calculations were announced stating that GDP for the full year of 2020 had decreased by 4.8% because of COVID-19 and that economic losses were expected to reach 30 trillion yen [15]. The impact on the tourism industry, in particular, is far-reaching, including not only travel agencies and accommodation businesses but also land, sea, and air transport; the restaurant industry; and consumer goods businesses, which are crucial to the economies of many regions. For example, a
For example, a preliminary calculation put the economic losses in Okinawa due to the reduction in tourists at 186.7 billion yen for the February-May period [17]. Total domestic travel and tourism expenditure in 2019, including inbound tourism, was 27.9 trillion yen [7], so there were serious concerns that the "evaporation" of the tourism demand supporting regional economies would critically disrupt them. Therefore, this study models the spread of COVID-19 infection in tourism locations and compares the effects of hypothetical prevention measures in order to find feasible, effective, and astute measures, with consideration for the effects both on those directly involved in the target regions and on others. This study does not weigh the relative merits of a bottom-up approach in terms of personal behaviour, such as avoiding the three Cs (closed spaces, crowded places, and close-contact settings) and voluntarily restricting movement, against a top-down approach in terms of strong government restrictions, such as the declaration of a state of emergency. Rather, this study examines infection prevention measures that enable the continuous growth of the entire region.
1.1 COVID-19 Agent-Based Simulations
Even as COVID-19 infections have had negative impacts around the world, we have acquired greater expertise. Accordingly, several researchers are modeling social systems that include non-linear interaction in order to create simulations for policymaking support, aimed at predicting future infection expansion, which is almost impossible by intuition alone. Agent-based models excel at expressing the effects of specific intervention measures through micro-level behavioural changes among individual citizens, as well as at the operability needed for exploring intervention scenarios. They have therefore been used for existing infectious diseases such as smallpox [2, 16], measles [5, 13], Zika fever [8], Ebola hemorrhagic fever [8, 9], and rubella [10]. Most of the proposed COVID-19 simulations are based on macro-scale mathematical models, but there are also some interesting agent-based models. Ferguson et al. report that non-medical interventions such as wider social distancing, home isolation, and home quarantine throughout the U.K. and the U.S. may mitigate the spread of the infection to some degree, but as long as there is no prevention system such as a vaccine or antiviral drug, pressure on medical resources is unavoidable, and large numbers of fatalities are likely [3]. Based on this report, the U.K. government shifted immediately from its initial mitigation strategy to strict intervention measures to ensure social distancing. Silva et al. [18] simulated not only the epidemiological dynamics but also the economic effects of various intervention scenarios for ensuring social distancing and demonstrated that where a lockdown is unfeasible because of the scale of its economic impact, a combination of face masks and partial isolation is more realistic. Aleta et al. [1] constructed an agent-based model based on census research and movement data in the greater Boston area and demonstrated that by means of testing, contact tracing, and home quarantine after a
period of strict social distancing, it was possible to resume economic activity while protecting the health system [1].
1.2 Summary of Related Studies and Positioning of This Study
These studies demonstrate that agent-based models are viable, but they do not sufficiently verify the effects of non-medical interventions with attention to the heterogeneity of residents' daily lives and their close relation to these interventions when a vaccine or antiviral drug is not available. Further, there are no spatial or temporal estimates that account for regional characteristics, such as tourism locations that receive intermittent influxes of people from other regions, with regard to countermeasures and their effects. Therefore, this study explores feasible, effective non-medical infection prevention measures using a COVID-19 agent-based model that assumes specific tourism locations.
2 COVID-19 Infection Model for Tourism Locations
As an expansion of existing infectious disease studies in which validity evaluations have been conducted for the transition of infection, namely an Ebola hemorrhagic fever model [9] and a rubella model [10], a COVID-19 model [11] was constructed for a tourism location in Nagano Prefecture. The model uses restored population data created with the household composition restoration method [4], a method of restoring population data so that it conforms to various published statistics (e.g., national census, demographics, business/industry statistics, etc.); the restoration is optimized using simulated annealing (SA), with the error found when collating the restored data against those statistics as the objective function. This restored population data includes the longitude and latitude of the location of each household and its members' gender, age, employment status, type of industry, scale of business, etc. Figure 1 shows the population distribution of the target town based on this data. With attention to place names, terrain, road connections, school district divisions, etc., the town is divided into nine zones as shown in the figure. The central area is traversed by a local railroad, with residential areas distributed next to the line along with scattered holiday home areas. The town has three local railway stations; the areas facing these stations are, from the left, the West Station, Central Station, and East Station areas. Public facilities such as government buildings, schools, and hospitals are concentrated around the Central and East Station areas. The East Station area, where the local railroad meets the main line railroad, has several promenades, shopping districts, lodging facilities, shopping malls, etc., which draw in many tourists.
Fig. 1 Population distribution of the town
The total population based on the restored population data is approximately 17,000 people, but for ease of calculation, it was scaled down to approximately one-fifth of that in the model. However, the ratios of household composition, number of households, population per zone, etc. were set according to the actual population composition.
2.1 Behaviour of Citizens
Resident agents in the model who commute to work or school were set based on the restored population data, municipal public information regarding public facilities, tourism guides, etc. 69% of young people, that is, 19% of the total population (520 agents), attended childcare facilities or school.
Table 1 Employment locations of workers

Employment locations | Ratio (%)
Hospitality industry (local shop × 6, tourism spot × 3, shopping mall, hotel × 3, night spot) | 25
Education (childcare facility × 4, elementary school × 3, junior high school, senior high school × 2 (inside and outside town)) | 2
Hospital | 3
Other employment locations (inside town) | 35
Other employment locations (outside town) | 35
There were four childcare facilities, three elementary schools, and one junior high school in the town, as well as two senior high schools either inside or outside the town. 70% of the remaining young people, 80% of adults, and 30% of the elderly, that is, 52% of the total population, were workers. This corresponds to the 51.9% total employment ratio in the restored population data. Table 1 shows the employment locations of workers. The area is a popular tourist location, and the ratio of employees at wholesale or retail businesses, accommodation or food service businesses, and daily life-related services and entertainment businesses was 46%, almost half of the total. Reflecting this, the ratio of workers in actual customer-facing roles was set at 25%, just over half of that share. These employees in the hospitality industry are considered to have contact with tourists, the main topic of this study. In the hospitality industry, local shops and tourism spots are located near the West, Central, and East stations in the model area, with their employees set as residents of nearby areas. A shopping mall, accommodation facilities, and night spots were located near East Station, with their employees set as living across the town. Figure 2 shows the household and facility distribution in the model space. Some of the resident agents go shopping at local stores adjacent to their residential zones after work or school. After shopping, all resident agents other than hospital inpatients go home. Figure 3 shows the flowchart of this series of behaviour for resident agents in the model. The simulation model defines the completion of this series of behaviour by all resident agents as one day. Resident agents repeat this series of behaviour throughout the run of the simulation.
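As a rough illustration of this daily cycle (Fig. 3), the following Python sketch mirrors the commute-shop-return sequence; the class and attribute names (Resident, commute_target, local_shop) and the shopping probability value are our own illustrative assumptions, not taken from the authors' implementation.

```python
import random

class Resident:
    """Minimal sketch of a resident agent's daily routine (assumed names and values)."""

    def __init__(self, home, commute_target=None, local_shop=None, shop_probability=0.3):
        self.home = home                      # home zone of the agent
        self.commute_target = commute_target  # workplace or school, None for non-commuters
        self.local_shop = local_shop          # shop adjacent to the residential zone
        self.shop_probability = shop_probability  # assumed value; not specified in the paper
        self.hospitalized = False
        self.location = home

    def daily_step(self):
        """One simulated day: commute, possibly shop, then return home."""
        if self.hospitalized:
            return                             # inpatients stay at the hospital
        if self.commute_target is not None:
            self.location = self.commute_target
        if self.local_shop is not None and random.random() < self.shop_probability:
            self.location = self.local_shop    # some agents shop after work or school
        self.location = self.home              # all non-hospitalized agents go home
```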
Fig. 2 Household and facility distribution
Fig. 3 Flowchart of daily life of resident agents
2.2 Progress of Infection and Symptoms
In each round of the simulations, infection was modeled as localized interaction among resident agents. Resident agents are activated sequentially in random order, and they move explicitly in space on the model according to the flowchart of Fig. 3. In the steps surrounded by the double line in Fig. 3, when an agent comes into contact with other infectious resident agents on the model plane, contact occurs probabilistically with the contact ratio cr, and infection then occurs according to the transmission ratio tr. The infection ratio ir, the probability of the occurrence of infection, was defined as follows:

ir = cr × tr    (1)
Based on reports of detailed analyses of the spread of COVID-19 infection [20, 22], the following progression of symptoms was defined. The incubation period is 5 days following infection, but a person can infect others from the third day onward, even during this period. On the sixth day, when the incubation period has ended, symptoms such as fever, coughing, and diarrhea occur in most infected people. After the onset of fever, the basic scenario includes a 50% probability of home isolation after visiting a doctor. This probability of visiting a doctor is set to 50% because the number of symptomatic infected people is drastically lower than the actual number of infected people including asymptomatic carriers. The remaining 50% of infected people are either essentially asymptomatic or have only minor symptoms, so they continue to go to work or school while self-medicating with febrifuges, etc. After symptoms have continued for 4 days or more, infected people see a doctor and undergo a PCR test, with the results confirmed the following day, leading to hospitalization if the results are positive. Further, 20 days after infection, 20% of infected people become seriously ill and are hospitalized even without having seen a doctor in advance. Also, among those hospitalized with serious symptoms, fatalities by 41 days after infection comprise 0.06% of young people, 0.21% of adults, and 1.79% of the elderly. The mildly ill recover by 27 days after infection and the surviving seriously ill by 49 days after infection, both achieving temporary immunity.
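To make this timeline concrete, the following Python sketch encodes the disease course as we read it from the description above; Eq. (1) is drawn as a Bernoulli event, and the class and constant names are our own assumptions rather than the authors' code.

```python
import random

# Day counts taken from the text; names are illustrative assumptions.
INCUBATION_DAYS = 5        # symptoms appear on day 6
INFECTIOUS_FROM_DAY = 3    # can infect others from the third day
SEVERE_ONSET_DAY = 20      # 20% become seriously ill on day 20
MILD_RECOVERY_DAY = 27
SEVERE_RECOVERY_DAY = 49

def infection_occurs(cr: float, tr: float) -> bool:
    """Eq. (1): infection ratio ir = cr * tr, evaluated as a random draw."""
    return random.random() < cr * tr

class DiseaseCourse:
    """Per-agent progression of infection and symptoms (sketch)."""

    def __init__(self):
        self.day = 0            # days since infection
        self.severe = False
        self.recovered = False

    def step_day(self):
        self.day += 1
        if self.day == SEVERE_ONSET_DAY and random.random() < 0.20:
            self.severe = True  # hospitalized even without a prior doctor visit
        if (not self.severe and self.day >= MILD_RECOVERY_DAY) or \
           (self.severe and self.day >= SEVERE_RECOVERY_DAY):
            self.recovered = True  # temporary immunity

    @property
    def infectious(self) -> bool:
        return self.day >= INFECTIOUS_FROM_DAY and not self.recovered

    @property
    def symptomatic(self) -> bool:
        return self.day > INCUBATION_DAYS and not self.recovered
```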
3 Estimating Effects of Infection Prevention Measures
For this model, simulation scenarios for infection prevention measures across the entire region were set, including for the hospitality industries that may be used by infected tourists. In Scenario B0, tourists are not accepted by the hospitality industry at all, and there is no influx of infections, but one resident is infected at the initial point. In Scenario B1, tourists are accepted by the hospitality industry overall, and one infected person per week enters. In Scenario S1, contact with tourists by hospitality industry workers is reduced to 50%; in Scenario S2 nightlife districts close down voluntarily;
in Scenario S3 contact with tourists by hospitality industry workers is reduced to 25%; and in Scenario S4 contact with tourists by hospitality industry workers is reduced to 25%, nightlife districts close voluntarily, and those testing positive are isolated at a treatment accommodation facility. In scenarios S5-S10, contact between hospitality industry workers and tourists is reduced to 25%, workers undergo regular PCR testing, and those testing positive are isolated at a treatment accommodation facility. In scenarios S11-S14, contact between hospitality industry workers and tourists is reduced to 25%, forward tracking (once or twice) of persons in close contact with PCR tests is implemented, and those testing positive are isolated at a treatment accommodation facility. Here, the first forward tracking of persons in close contact, aimed at preventing further infection expansion, consists of tracking tests for persons who were in close contact with a confirmed positive case after that case's infection, while the second forward tracking consists of further tracking tests for persons in post-infection close contact with positive cases identified in the first forward tracking. The tracking ratio was defined as the probability that the public health sector finds persons in close contact and the infection source. In scenarios S15 and S16, contact between hospitality industry workers and tourists is reduced to 25%, forward tracking and backtracking of persons in close contact with PCR tests are implemented, and those testing positive are isolated at a treatment accommodation facility. Here, backtracking of persons in close contact, aimed at identifying the source of infection, consists of tracking tests for persons who were in close contact with a confirmed positive case before that case's infection. Table 2 shows the detailed settings.
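The tracing procedures can be sketched as follows in Python; the data structures (contacts_after, infected_by) and the simplification that every traced contact, rather than only those who subsequently test positive, seeds the second forward stage are our own assumptions for illustration.

```python
import random

def trace_contacts(positive, contacts_after, infected_by,
                   forward_stages=2, backtrack=False, tracking_ratio=0.8):
    """Select agents for tracking tests around a confirmed positive case.

    contacts_after[a]: agents in close contact with a after a's infection
    infected_by[a]   : the agent that infected a (the infection source), or None
    Each contact (or source) is found with probability tracking_ratio.
    """
    found = set()
    frontier = [positive]
    for _ in range(forward_stages):           # forward tracking, once or twice
        next_frontier = []
        for case in frontier:
            for contact in contacts_after.get(case, []):
                if random.random() < tracking_ratio:
                    found.add(contact)
                    next_frontier.append(contact)
        frontier = next_frontier
    if backtrack and infected_by.get(positive) is not None:
        if random.random() < tracking_ratio:   # backtracking toward the infection source
            found.add(infected_by[positive])
    return found
```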
4 Experiment Results
Each simulation scenario was run 100 times. The infection prevention measures were evaluated by the average of the peak number of people hospitalized with serious symptoms, which represents the most serious impact on medical resources. The results are shown in Fig. 4. The effects of the infection prevention measures endorsed as the new normal and new travel etiquette (scenarios S1-S4) were, in comparison with the peak number of patients hospitalized with serious symptoms when tourism is cancelled (Scenario B0), 212% if tourists are accepted without countermeasures (Scenario B1); 183% with voluntary closures of nightlife districts (Scenario S2); 167% with thorough reduction in contact among workers (Scenario S3); and 159% with a combination of voluntary closures of nightlife districts, thorough reduction in contact, and isolation of infected people (Scenario S4). Next, in addition to thorough reduction in contact and the isolation of infected people, the effects of composite prevention measures also including regular prioritized virus tests for workers in contact with tourists (scenarios S5-S10) were evaluated in comparison with Scenario B0.
Table 2 Simulation scenarios for infection prevention measures

Scenario | Tourists | Infection influx | Hotels (%) | Night spots (%) | Tourism spots (%) | Mall (%) | Isolation | Testing/tracking system
B0 | Not accepted | One resident | – | – | – | – | – | –
B1 | Accepted | One per week | 100 | 100 | 100 | 100 | – | –
S1 | Accepted | One per week | 50 | 50 | 50 | 50 | – | –
S2 | Accepted | One per week | 100 | Close | 100 | 100 | – | –
S3 | Accepted | One per week | 25 | 25 | 25 | 25 | – | –
S4 | Accepted | One per week | 25 | Close | 25 | 25 | Yes | –
S5 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Every 2 weeks, 50%
S6 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Every 2 weeks, 75%
S7 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Every 2 weeks, 100%
S8 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Every 5 days, 50%
S9 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Every 5 days, 75%
S10 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Every 5 days, 100%
S11 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Forward (once), 50%
S12 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Forward (once), 80%
S13 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Forward (twice), 50%
S14 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Forward (twice), 80%
S15 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Forward (twice) and back, 50%
S16 | Accepted | One per week | 25 | 25 | 25 | 25 | Yes | Forward (twice) and back, 80%
The results were 149% with a 50% test ratio every 2 weeks (Scenario S5), 138% with a 75% test ratio every 2 weeks (Scenario S6), and 128% with a 100% test ratio every 2 weeks (Scenario S7). For tests conducted every 5 days, the result was 128% with a 50% test ratio (Scenario S8), 103% with a 75% test ratio (Scenario S9), and 99% with a 100% test ratio (Scenario S10). In addition to thorough reduction in contact and the isolation of infected people, the effects of composite prevention measures also including the implementation of tracking tests for persons in close contact (scenarios S11-S16) were, in comparison with Scenario B0, 133% with a forward one-time tracking ratio of 50% (Scenario S11), 117% with a forward two-time tracking ratio of 50% (Scenario S13), 106% with a forward two-time tracking ratio of 80% (Scenario S14), and 59% with a forward two-time and back one-time tracking ratio of 80% (Scenario S16).
Fig. 4 Comparison of the effectiveness of preventive measures
5 Discussion
The experiment results demonstrate that there are limited effects even when changes are made to tourist lifestyles or to hospitality business service methods in regions such as tourism locations that are intermittently visited by infected people. The discovery of infected people by means of substantial PCR testing at the regional level is expected to have an effect, but uniform testing of all regional residents is not feasible, so the effects of regular PCR tests were evaluated for workers in contact with tourists in commercial stores, tourism spots, accommodation facilities, and entertainment districts. As a result, with a test ratio of 75% every 5 days, it was found that the effect on infection control was almost the same as that of prohibiting tourism. In Japan, the maximum testing capacity per day for PCR tests as of May 2021 is approximately 0.16% of the population (203,477 tests [14] out of 126.5 million people), which is far below the testing standard mentioned above. For example, Daegu City (approximately 2.4 million people), South Korea, established a testing system for up to approximately 7,000 tests (0.3% of the population) per day [19] in its epidemic before the pandemic and used it to bring the situation under control. Since then, the city has maintained this system across the whole area so that it can respond immediately to a new epidemic [12]. However, even this leaves a large gap from the testing standard mentioned above. For prospective surveys of persons in post-infection close contact with those testing positive for infection and the implementation of tracking tests, almost the same effects are forecast with a two-stage 70% tracking ratio. Also, the number of PCR tests required for tracking surveys can be kept to around 1/10-1/100 of that required for regular PCR testing of all workers in the hospitality industry. Furthermore, for retrospective surveys not only of persons in post-infection close contact with those testing positive for infection but also of the areas around the people who are the source of infection
and the implementation of tracking tests, the peak number of hospitalized patients can be kept under 60% even in comparison with not accepting tourists. However, such surveys require the construction of systems that enable large-scale information collection and processing across large areas and over a long period of time. Therefore, the capacity is limited when relying only on the efforts of public health center workers, for example, and if infection expansion continues and the labor required for the survey expands likewise, the survey system and even the medical system are at risk of collapse. Therefore, the construction of comprehensive survey systems that utilize IT, including tracking persons in close contact by means of mobile phones, etc., is expected to have a major effect on the prevention of increased infection. To this end, in addition to designing a top-down system, the key is to increase the number of users of contact tracing apps, etc., so as to contribute to a bottom-up system. The experiment results indicate that the major factors include, in particular, the delay between positive confirmation and app registration, the delay between app notification and PCR testing and the proportion of notified persons who are then tested, as well as compliance with home isolation while waiting for PCR test results. As targets for infection prevention, from now on, the app usage ratio should be 80% both in the region and among visitors, and the delays between positive confirmation, app registration, and notifying persons in close contact should each be one day or less. In Japan, there are major barriers to access to tests for residents [21], so sufficient testing systems have not been constructed for the implementation of any of the measures described above. Regardless of whether or not the region is a characteristic tourism location, for residents who want to be tested after becoming aware that they may have been infected or that they may infect others, testing should be implemented without delay, also allowing for subclinical cases, so that infected people can be identified. Only through the construction of a system with bottom-up aspects of this kind will it become possible to accurately grasp the infection situation, the highest priority for public health. This pandemic is still full of uncertainties regarding, for example, the development and distribution schedule for a vaccine, the mutation of the virus, and the pathology of after-effects. Therefore, it is desirable to invest pertinent resources promptly to minimize the damage to citizens' health and to the economy to the extent possible.
6 Conclusion
With the objective of evaluating COVID-19 infection prevention measures in tourism locations, this study constructed simulation models in imitation of a specific tourism location. Further, as part of public health policy to prevent and control infection, analyses of tourist contact reduction measures, regular PCR testing for tourism business employees, and prospective tracking tests for people in close contact with those testing positive for infection were conducted. As a result of the simulation experiments, while measures to reduce contact have certain effects, it was found that these effects are limited in the case of a continuous influx of
infected people to tourism locations. While major effects can be expected from regular PCR testing of tourism business employees, this requires large-scale PCR testing, which is currently a major barrier; and while there are also major effects from prospective tracking tests for people in close contact with those testing positive for infection, there is a limit to methods that rely only on human labor. While the introduction of contact tracing apps is an effective countermeasure for this, there is a need for further improvement in the registration delay time and in the systems for prompt PCR testing after notification. The study included a number of assumptions based on the limited information available during the early period of infection expansion, when infection in tourism locations had not been analysed in detail. Therefore, updating the model to reflect the following is important for overcoming the limitations of this study in the future.
• Behaviour of agents based on psychological or sociological theories
• Comprehensive review of the literature based on up-to-date data on actual phenomena and their analysis
• Calibration of the model with actual data
• Exploring the methodological aspects of the investigations, including accuracy, difficulty, and cost
References 1. Aleta, A., Martín-Corral, D., y Piontti, A.P., Ajelli, M., Litvinova, M., Chinazzi, M., Dean, N.E., Halloran, M.E., Longini Jr, I.M., Merler, S., et al.: Modelling the impact of testing, contact tracing and household quarantine on second waves of COVID-19. Nat. Hum. Behav. 4(9), 964–971 (2020) 2. Epstein, J.M.: Generative Social Science: Studies in Agent-based Computational Modeling, vol. 13. Princeton University Press, Princeton (2006) 3. Ferguson, N., Laydon, D., Nedjati Gilani, G., Imai, N., Ainslie, K., Baguelin, M., Bhatia, S., Boonyasiri, A., Cucunuba Perez, Z., Cuomo-Dannenburg, G., et al.: Report 9: impact of nonpharmaceutical interventions (NPIs) to reduce COVID19 mortality and healthcare demand. Imperial College London (2020). https://doi.org/10.25561/77482 4. Harada, T., Murata, T.: Reconstructing prefecture-level large-scale household composition using parallel computing. Trans. Soc. Instrum. Control. Eng. 54(4), 421–429 (2018) 5. Hunter, E., Mac Namee, B., Kelleher, J.: An open-data-driven agent-based model to simulate infectious disease outbreaks. PloS One 13(12), e0208775 (2018) 6. Japan Tourism Agency: Change in the treatment of Tokyo in Go To Travel campaign (updated on September 18, 2020) (2020). https://biz.goto.jata-net.or.jp/info/2020091601.html 7. Japan Tourism Agency Strategy: Travel/tourism consumption trend survey of 2019 annual (2020). https://www.mlit.go.jp/kankocho/siryou/toukei/content/001342441.pdf 8. Kurahashi, S.: A health policy simulation model of Ebola haemorrhagic fever and Zika fever. In: Agent and Multi-Agent Systems: Technology and Applications, pp. 319–329. Springer, Berlin (2016) 9. Kurahashi, S.: Agent-based health policy gaming and simulation for Ebola haemorrhagic fever. Stud. Simul. Gaming 26(2), 52–63 (2017) 10. Kurahashi, S.: An agent-based infectious disease model of rubella outbreaks. In: Agents and Multi-agent Systems: Technologies and Applications 2019, pp. 237–247. Springer, Berlin (2020)
11. Kurahashi, S.: Estimating effectiveness of preventing measures for 2019 novel coronavirus diseases (COVID-19). Trans. Jpn. Soc. Artif. Intell. 35(3), D–K28_1 (2020) 12. yeon Lee, J., Hong, S.W., Hyun, M., Park, J.S., Lee, J.H., Suh, Y.S., Kim, D.H., Han, S.W., Cho, C.H., ah Kim, H.: Epidemiological and clinical characteristics of coronavirus disease 2019 in Daegu, South Korea. Int. J. Infect. Dis. 98, 462–466 (2020) 13. Liu, F., Enanoria, W.T., Zipprich, J., Blumberg, S., Harriman, K., Ackley, S.F., Wheaton, W.D., Allpress, J.L., Porco, T.C.: The role of vaccination coverage, individual behaviors, and the public health response in the control of measles epidemics: An agent-based simulation for California. BMC Public Health 15(1), 447 (2015) 14. Ministry of Health, Labour and Welfare: Novel coronavirus disease - situation in Japan (2021). https://www.mhlw.go.jp/stf/covid-19/kokunainohasseijoukyou.html 15. Nihon Keizai Shimbun: Real annual rate of GDP increase of 12.7% for the fourth quarter of fiscal 2020, decrease of 4.8% for the full year of 2020. Nihon Keizai Shimbun February 15, 2021 (2021) 16. Ohkusa, Y.: An evaluation of counter measures for smallpox outbreak using an individual based model and taking into consideration the limitation of human resources of public health workers. J. Health Care Med. Community 16(3), 275–284 (2006) 17. Okinawa Times: Economic loss of 186.7 billion yen in Okinawa. Okinawa Times Plus April 4, 2020 (2020) 18. Silva, P.C., Batista, P.V., Lima, H.S., Alves, M.A., Guimarães, F.G., Silva, R.C.: Covid-abs: an agent-based model of COVID-19 epidemic to simulate health and economic effects of social distancing interventions. Chaos Solitons Fractals 139, 110088 (2020) 19. Sorihashi, K.: Kobe City shares wisdom with Daegu City, South Korea about COVID-19. Mainichi Shimbun [Kobe version] August 12, 2020 (2020) 20. The Novel Coronavirus Pneumonia Emergency Response Epidemiology Team: The epidemiological characteristics of an outbreak of 2019 novel coronavirus diseases (COVID-19) - China, 2020. China CDC Weekly 42(2), 145–151 (2020). https://doi.org/10.3760/cma.j.issn.02546450.2020.02.003 21. Tokyo Metropolitan Government Bureau of Social Welfare and Public Health: Information about novel coronavirus disease (2020). https://www.fukushihoken.metro.tokyo.lg.jp/smph/tthc/kansensho/singatakorona.html 22. WHO-China Joint Mission Team: Report of the WHO-China Joint Mission on Coronavirus Disease 2019 (COVID-19). World Health Organization (WHO) (2020). https://www.who.int/docs/default-source/coronaviruse/who-china-joint-mission-on-covid-19-final-report.pdf
DIPP: Diffusion of Privacy Preferences in Online Social Networks Albert Mwanjesa, Onuralp Ulusoy, and Pınar Yolum
Abstract Ensuring the privacy of users is a key component in various computer systems such as online social networks, where users often share content, possibly intended only for a certain audience but not others. The content users choose to share may affect others and may even conflict with their privacy preferences. More interestingly, individuals' sharing behaviour can change over time, which indicates that privacy preferences of individuals affect others and spread throughout the network. We study the spreading of privacy preferences in online social networks with information diffusion models. Using multi-agent simulations, we show the dynamics of the spread and study the factors (e.g., trust) that influence the spread. Keywords Privacy · Diffusion · Multi-agent systems
1 Introduction
With the rise of social media and the prevalence of smartphones, humans are more connected online than they have ever been before. In general, this means people can easily communicate with each other no matter their location, physical social circles, and so on. This communication often takes place on public platforms, such as online social networks (OSNs), in the form of sharing content such as pictures, videos, and so on. The content shared by users can reveal their private information, either explicitly or implicitly. The problem of privacy preservation in OSNs can be viewed from various angles due to its interdisciplinary properties. Dupree et al. [4] investigate privacy personas in online social networks. Using a survey, data are gathered on user behaviour towards privacy and security.
A. Mwanjesa (B) · O. Ulusoy · P. Yolum Universiteit Utrecht, Princetonplein 5, 3584 CC Utrecht, The Netherlands e-mail: [email protected]
P. Yolum e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Czupryna and B. Kamiński (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_3
A cluster analysis shows that individuals can be categorized into one of the following categories: Fundamentalists, Lazy Experts, Technicians, Amateurs and the Marginally Concerned, in terms of their knowledge of privacy and motivation in sharing content. Interestingly, a source of influence in privacy decision-making is an agent's trust towards their neighbours. A qualitative study by Lampinen et al. [8] has found that privacy management in content sharing decisions is mostly based on trust. Users expect each other to understand the way they want to represent themselves in OSNs. To ensure this, users are said to utilize mental/behavioural strategies as well as preventive, corrective and collaborative strategies. Researchers have investigated the problem of privacy preservation in OSNs using approaches based on multi-agent systems. Kokciyan and Yolum [7] use commitments between agents and norms [3] in a system to prevent and detect privacy violations. Ulusoy and Yolum [12] have shown that online social networks can leverage social norms as they emerge from personal norms. Understanding the evolution and spread of privacy norms is important because they can serve as an important tool to understand and avoid privacy violations. Privacy can also be violated by other users when content is shared that reveals private information about someone without their consent. Sharing content that is owned by multiple users can lead to leaking of private information. Social norms can be used to guide privacy management, but personal privacy preferences are unclear to other users. It is evident that one's level of openness is not the same for all, and violations can occur. It is, thus, useful to understand how privacy preferences spread through a social network. To model the spread of privacy preferences, inspiration is taken from information diffusion research. Information diffusion research tries to understand how information spreads in, for example, OSNs [5]. The diffusion of information has often been modelled using epidemic models. Epidemic models can be seen as state machines governed by probabilities of transitioning from state to state. They contain three common states: (S)usceptible, (I)nfected, (R)ecovered. There are different compositions given these states, from SI, SIS and SIR to many more variations. Originally, epidemic models were used to model the spread of infectious diseases, hence the name. But they were adopted for information diffusion, as it was theorized that information can be infectious [9]. These models are flexible, as illustrated in work by Cannarella et al. [1], who used an epidemic model to predict adoption and abandonment of a social media platform. Here, we argue that privacy preferences can be derived from the privacy decisions users make. The context of content shared by users describes their privacy preference, as elaborated later. We study the following research questions.
• RQ1: How does a privacy preference of posting content that fits in a given context spread?
• RQ2: Who are the most influential figures in the spreading process of the privacy preferences?
• RQ3: What is the influence of trust modelling in the spread of privacy preferences?
In order to answer these research questions, we aim to contribute to the literature by (i) investigating whether epidemic models are accurate models to model the spread
of privacy preferences, (ii) analysing the influence of different factors in a social network on this diffusion process, (iii) simulating the diffusion of privacy decisions, and (iv) providing a method for users to protect themselves against opposed privacy preferences, namely using trust modelling. The rest of this paper is organized as follows. Section 2 develops our model for privacy preference diffusion. Section 3 explains details for realizing the model as an agent-based system. Section 4 presents our experiments and results. Finally, Sect. 5 discusses the work with pointers to future work.
2 Modelling Privacy Preference Diffusion
We model an online social network (OSN) as a multi-agent system, where each agent represents a user that shares content in a given context. In real life, when a user observes a piece of content on an OSN, she perceives different properties of the content that lead her to react, for instance by appreciating it in the form of a like, by disliking it, or at times by ignoring the content. Imagine a self-portrait photo of a user at the pet zoo with a llama in the morning. The description of the scene of the photo captures its context. This context defines the location (i.e., zoo), the people in the picture (i.e., the user), what is happening in the picture (i.e., posing with a llama) and the time of day (i.e., the morning). In this work, the context of the content shared by a user on an OSN is assumed to reveal the privacy preferences of that user on said OSN. This follows from the assumption that a user only shares content that they want to be seen by their friends on an OSN. Context, in this study, is described using three locations and four times of day. The four times of day are morning, afternoon, evening and night. The three locations are at work, the beach and the mall. The contexts are represented as 2-tuples with 12 possible combinations, for example: <night, work>.
Definition 1 (Privacy preference) A context description that can be ascribed to the content shared by an agent, with the assumption that agents only share content they do not find to be private.
Privacy violations in OSNs occur when content reveals information about a user that the user would want to keep private. Opposed privacy preferences can be deemed to describe content that a user believes should be private. In this project, the focus is on the unopposed privacy preferences. However, to capture privacy violations, the spread of opposing privacy preferences is modelled in two different ways. It is assumed that opposing privacy preferences either spread the same way as their unopposed counterparts or are static. Privacy violations can occur when a user shares content that is co-owned by another user and, in doing so, reveals private information of said user. Using privacy violations, we can model trust between agents on an OSN. Using the model, we will investigate if the use of trust values can help agents protect themselves from opposed privacy preferences.
These are the underpinnings of this research: privacy preferences are infectious via the medium of the content sharing habits of friends. Users perceive friends sharing certain content and are deemed more likely to adopt the same habits. In line with information diffusion models, we believe that privacy preferences are, thus, infectious. This section outlines the global theories on how these infections come about and how they could be influenced.
Definition 2 (Co-owned content) Content is co-owned when it represents more than one user. Representation could be by appearance in a picture, the sound of a voice in a video or audio recording, explicit tagging of users, and more.
Definition 3 (Privacy violation) A privacy violation in an OSN is an instance in which content is shared without the explicit consent of a user. Furthermore, this content is in a context for which the user opposes sharing content.
The next sections present a model that simulates the spread of privacy preferences and the varying factors that are believed to affect this dynamic in OSNs. Opposed privacy preferences, co-ownership, privacy violations and trust are believed to impact the diffusion of privacy preferences in reality.
2.1 SIR Model for Privacy Preference Diffusion
The SIR model is an epidemic model that is governed by an infection rate and a recovery rate. An agent in the model can be in one of three states: (S)usceptible, (I)nfected and (R)ecovered. An agent starts in the susceptible state, meaning it can be infected by an infectious entity. When the infectious entity infects an agent, the agent moves to the infected state. If and when an agent recovers from this infectious entity, the agent is in the recovered state. Being in the recovered state is synonymous with being immune to the infectious entity, as there is no state transition out of the recovered state. The state dynamics of the SIR model are governed by an infection rate i and recovery rate r, as visualized in Fig. 1a. We use the SIR model for privacy preference diffusion in OSNs, where the semantics of infection are applied to privacy. Thus, the (S)usceptible state means that agents can imitate certain behaviour if they witness it from their neighbours, e.g., observe content that depicts certain privacy preferences. The (I)nfected state denotes that the agent has observed another agent to whom it is connected and has imitated its behaviour by sharing content with the said privacy preference. This will be reflected by an infection rate for that privacy preference. The (R)esistant state denotes that the agent has recovered from an infection of a privacy preference and, because it is now immune to the said privacy preference, will not share content aligned with said privacy preference. An agent can recover from an infection when content they have shared has negative consequences for them, or when sharing certain types of content becomes less fashionable. An example of the first case is that content shared by an agent can lead to a privacy violation of a friend, having a negative impact on their relationship.
Fig. 1 State transitions in the DIPP model: (a) state transitions in the SIR model with infection rate i and recovery rate r; (b) state transitions with infection rate i × tr and recovery rate r, where tr represents the trust in the neighbour trying to infect the agent; (c) state transitions with infection rate i × co and recovery rate r in the case of an infection attempt via co-ownership
For the second infection recovery case, one infamous example is the challenge trends on social media, where sharing a video of doing a specific task becomes viral for a time period, but in time it goes out of fashion and people recover from this infection. Thus, the states determine the likelihood that an agent shares content described by a certain context. These state transitions are governed, mainly, by the infection rate i and recovery rate r and can be seen in Fig. 1a. From these models, the key measurements are population proportions for each state, as is common in diffusion modelling. The epidemic model starts with multiple infectious entities. To make the model more realistic in terms of modelling the spread of privacy preferences, concepts such as opposing infectious entities, trust and co-ownership of content are added. Other models might have worked in this study, such as the DeGroot model [2], but considering the flexibility, dynamic trust values, and foundation in previous research, we opt for the SIR model.
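A minimal sketch of these transitions in Python is given below; the function names and the multiplicative combination of the trust and co-ownership factors are our own illustrative assumptions, not the authors' exact implementation (Fig. 1b and 1c show them as separate cases).

```python
import random

S, I, R = "susceptible", "infected", "recovered"

def try_infect(state, i, tr=1.0, co=1.0):
    """Attempt to infect a susceptible agent observing a neighbour's post.

    i  : base infection rate of the privacy preference
    tr : trust in the posting neighbour (Fig. 1b), in [0, 1]
    co : co-ownership factor (Fig. 1c), drawn between 1.1 and 1.5 for co-owned content
    """
    if state == S and random.random() < i * tr * co:
        return I
    return state

def try_recover(state, r):
    """Attempt to recover with recovery rate r; the recovered state is absorbing."""
    if state == I and random.random() < r:
        return R
    return state
```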
2.2 Trust Research in privacy has shown that privacy management in content sharing of OSNs is largely dependent on trust among users [8]. Inspired by that, the DIPP model implements one-dimensional trust modelling. One-dimensional trust refers to an agent’s belief in another agent to complete a task with one measurement of success [10]. The task delegated in the DIPP model is for a neighbour to respect an agent’s
privacy. Trust in the DIPP model is influenced by interactions between agents, of which there are two: perceiving another agent's shared content and being a co-owner of the content shared by another agent. To model trust, the beta distribution is used [6]. This probability distribution is governed by two variables α, β that can be used to capture interaction outcomes [10]. We use α to capture the cases where a privacy violation has not taken place and β for cases where there is a privacy violation. This leads to two values influenced by the two interactions, but based on the same delegated task. We differentiate between the two types of privacy violations as follows. A Level 1 privacy violation occurs when an agent perceives another agent sharing content that portrays a privacy preference that they oppose. A Level 2 privacy violation occurs when an agent shares content without a co-owner's consent. The trust values tr1, tr2 and severity levels s1, s2 represent the trust value for a type of privacy violation and the perceived severity of said violations, respectively. Finally, the two values for the two types of violations are integrated into one with weights an agent uses to express what they see as more severe: tr = s1 × tr1 + s2 × tr2, with s1 + s2 = 1. The trust value directly influences the infection rate, as can be seen in Fig. 1b. In this project, the severity levels are static over all simulations.
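The following sketch shows one way such beta-distribution-based trust could be maintained in Python; using the expected value α/(α+β) as the trust score and the default severity weights are our own assumptions about how the description above might be realized.

```python
class BetaTrust:
    """Trust for one type of privacy violation, modelled with a beta distribution."""

    def __init__(self):
        self.alpha = 1.0   # interactions without a privacy violation
        self.beta = 1.0    # interactions with a privacy violation

    def update(self, violated: bool):
        if violated:
            self.beta += 1.0
        else:
            self.alpha += 1.0

    @property
    def value(self) -> float:
        # Expected value of the beta distribution, used as the trust score in [0, 1].
        return self.alpha / (self.alpha + self.beta)

def combined_trust(tr1: float, tr2: float, s1: float = 0.5, s2: float = 0.5) -> float:
    """tr = s1 * tr1 + s2 * tr2 with s1 + s2 = 1 (severity weights are static in the paper)."""
    assert abs(s1 + s2 - 1.0) < 1e-9
    return s1 * tr1 + s2 * tr2
```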
2.3 Co-ownership
Users should be more likely to be infected with a privacy preference when they are a co-owner of the content being shared. To this end, 1.5 > co > 1.1 is a random number that represents how impressionable an agent is with regard to sharing content they have doubts over. As with h, co is a random number to capture the fact that not all agents in a social network can be persuaded to the same degree, and this resilience is also not static over time. The state dynamics given content co-ownership are visualized in Fig. 1c.
3 Realization of the DIPP Model
In the DIPP model, an agent represents a user on an online social network who shares content from which other agents can derive said agent's privacy preferences. To this end, each agent must have the ability to share content, and all their friends should be able to perceive the content. A schematic view of an agent in the DIPP model can be seen in Fig. 2a. We have implemented the DIPP model on top of the Mesa library for Python 3 (https://mesa.readthedocs.io/en/master/overview.html). The time steps are discrete.
Fig. 2 Schematic view of the components in an agent and the DIPP model: (a) workings of an agent in the SIR model for privacy preference diffusion; (b) workings of the DIPP model
Agent initialization: Each agent is assigned privacy preferences on the pro and anti sides. For each privacy context, the agent is randomly assigned either the state susceptible or infected, with a probability of 0.5. The resistant state is ignored here as it is a final state and would thus limit the dynamics in the experiments.
Definition 4 (Pro side) The epidemic of privacy preferences that are supported by agents. These are the privacy preferences that underlie the infected agents' content sharing habits.
Definition 5 (Anti side) The epidemic of privacy preferences that are opposed by agents. Infected agents on this side believe that the content described by the privacy preference they are infected with should not be shared.
For the epidemic dynamics to work, infection rates and recovery rates also have to be assigned for each of the privacy preferences. These can be single numeric values that hold for either the pro side or the anti side. However, the rates can also be defined as a mapping between a context and a numeric value, allowing for precise custom assignment of rates for any experimental setting.
Agent Actions: Every time an agent takes a step in this model, they do at least three things: share content if possible on the pro side, share content if possible on the anti side, and try to recover from an infection. An agent shares content by randomly choosing one of the contexts they are infected with on the pro side. Once content is shared, neighbours can perceive and become infected by the context of the content. The infection rate is dependent on opposing privacy preferences, trust and co-ownership. The final part of a step is the attempt to recover from an infection. In this part, a context tuple is chosen randomly from all the context tuples for which the agent is infected. The context tuple's recovery rate is the probability that the agent will recover in this step.
Simulations: The DIPP model simulates the spread of privacy preferences on an OSN. A schematic view of the DIPP model can be seen in Fig. 2b. For a researcher,
the model is the interface to interact with to run any simulations. It creates agents for each node on the social network provided to it. For each context tuple, the model collects the number of agents in each of the three states. It also collects data on the number of state changes per step. Finally, it keeps track of the infection chains for each context tuple. The model also provides an interface to customize the initial outbreak of privacy preferences, as well as the ability to specify the state of a specific agent regarding a specific context tuple. It is possible to run a simulation for any number of steps. However, there is also an option to run a simulation until the state dynamics have reached a stable condition. Stability is reached when a predefined number of steps has passed and there have been fewer state changes than a predefined fraction of the population.
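For illustration, a run-until-stable loop following this description could look as below; the method and attribute names (model.step() returning the number of state changes, model.agents) are hypothetical, and only the look-back/fraction logic reflects the text (the concrete values 20 and 0.05 appear in Sect. 4.2).

```python
def run_until_stable(model, look_back=20, fraction=0.05, max_steps=10_000):
    """Advance the model until the state dynamics are stable.

    Stability: at least look_back steps have passed and the total number of
    state changes over those steps is below fraction of the population.
    """
    threshold = fraction * len(model.agents)   # assumed attribute holding all agents
    recent_changes = []                        # state changes recorded per step
    for _ in range(max_steps):
        changes = model.step()                 # assumed to return the number of state changes
        recent_changes.append(changes)
        if len(recent_changes) > look_back:
            recent_changes.pop(0)
        if len(recent_changes) == look_back and sum(recent_changes) < threshold:
            break
    return model
```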
4 Evaluation We use simulations to evaluate if our proposed DIPP model can indeed model the spread of privacy preferences through privacy decisions. Given the data from simulations with different settings, changes in the state dynamics and agent influence on the diffusion processes can be evaluated.
4.1 Measurements
In information diffusion modelling, influence has always been important. Sometimes influential users in a network are sought after for different reasons: it might be because they could be used to promote information on the network; in other cases, they might be deemed bad influences. The influence rating of an agent is the count of each descendant, proportional to the length of the shortest path between the agent and the descendant in the infection chain. One of the goals of this project is to explore whether trust values can be used by agents to protect themselves against opposed privacy preferences. To this end, the epidemic endings are measured. An epidemic ending of a privacy preference is the step at which no agent on the network is infected with said privacy preference. Similarly, an important goal is to explore the effect of trust in slowing down infections. To this end, the epidemic peaks are measured. An epidemic peak of a privacy preference is the maximum number of agents infected with that privacy preference over the period of the epidemic.
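The influence rating can be computed from the recorded infection chains roughly as follows; weighting each descendant by the reciprocal of its shortest-path distance is our reading of the sentence above, and the data structure name is an assumption.

```python
from collections import deque

def influence_rating(agent, infection_children):
    """Influence of `agent` in an infection chain.

    infection_children[a] lists the agents directly infected by a. Each
    descendant contributes 1/d, where d is its distance from `agent` in the
    chain (reciprocal weighting is our interpretation of the description).
    """
    rating = 0.0
    queue = deque((child, 1) for child in infection_children.get(agent, []))
    seen = set()
    while queue:
        node, dist = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        rating += 1.0 / dist
        for child in infection_children.get(node, []):
            queue.append((child, dist + 1))
    return rating
```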
4.2 Experiments
For the purposes of answering the research questions of this project, simulations are run with the model described in the previous section using a real online social network data set, the Copenhagen Networks Study interaction data set. This data set represents a multi-layer temporal network collected at the Technical University of Denmark from freshmen students at that university [11]. Here, we make use of the Facebook friendship network, which is an undirected graph with edges representing friendships that last during the whole experiment in which the data were collected. The degree distribution of this network shows some similarity with the power law distribution, which is often regarded as representative of social networks. Some settings are constant across all simulations. These settings are explained first. After this, the specific simulations are put forth along with the expectations for these experiments. Basic settings. All the simulations use flat infection rates and recovery rates. This means that all the privacy preferences have the same rates on both the pro and opposing sides. Three rates are chosen, {0.25, 0.50, 0.75}. These three values are varied over the four variables: pro-infection rate, pro-recovery rate, anti-infection rate and anti-recovery rate. This leads to a total of, at most, 3^4 = 81 combinations of rates to run. In the case of disregarding the opposing preference dynamic, there is no need to include the anti-infection and anti-recovery rates; consequently, there are 3^2 = 9 combinations of rates to run. Every simulation is executed 100 times to account for the stochastic nature of the model's dynamics. 100 repetitions is deemed to be the right compromise between correctness of the results and the run time of the experiments. This means that one experimental setting amounts to 900 or 8100 simulation runs. A simulation is run until a stable condition is reached. Stability is defined using a look-back of 20 steps and a population fraction of 0.05. This means that a simulation ends when in the last 20 steps there have been fewer state changes than 0.05 × 800 = 40. These basic settings are used to generate baseline data. The baseline experiment is expected to show that:
• 1a a privacy preference epidemic will last longer when the infection rate of said privacy preference is higher than its recovery rate,
• 1b this setting will show a positive correlation between the degree of an agent on the network and its influence on the spread of privacy preference.
These assertions need to hold for the model to, at least, be a valid epidemic model. Thus, they are the foundations of the DIPP model. Furthermore, a node's degree has been shown previously to correlate with the influence of that node in information diffusion [9]. That feature is expected to stay the same in the context of privacy preference diffusion as modelled here. In Fig. 3a, c and d, we can see aggregated state dynamics from the baseline simulation for the pro-side epidemic of the privacy preference <Night, Work>. When all rates are equal, there is a steep increase in recovered agents and an almost linear decrease in the number of susceptible and infected agents.
Fig. 3 State dynamics of baseline simulations of the DIPP model ((S)usceptible, (I)nfected, (R)ecovered): (a) pro-infection rate 0.25, pro-recovery rate 0.25, anti-infection rate 0.25, anti-recovery rate 0.25; (b) pro-infection rate 0.25, pro-recovery rate 0.25, anti-infection rate 0.25, anti-recovery rate 0.25; (c) pro-infection rate 0.75, pro-recovery rate 0.25, anti-infection rate 0.25, anti-recovery rate 0.75; (d) pro-infection rate 0.25, pro-recovery rate 0.75, anti-infection rate 0.75, anti-recovery rate 0.25
When the anti-side epidemic is weak (Fig. 3c), the number of infected agents increases steeply after the start, while the number of susceptible agents decreases steeply. It should be noted that when the anti-side epidemic is weak there comes a point in the simulation at which there are no more susceptible or infected agents. This is in contrast to when the pro-side epidemic is weak (Fig. 3d): the roles in decreasing and increasing are reversed, but it is also noticeable that the final number of susceptible agents is higher and never goes to zero. Simulation results indicate that a privacy preference epidemic will last longer when the infection rate of said privacy preference is higher than its recovery rate, as expected with hypothesis 1a. This result supports using information diffusion models to model the spread of privacy preferences, as an answer to RQ1. Furthermore, a positive correlation is found between the degree of an agent on the network and its influence on the spread of privacy preference, consistent with hypothesis 1b. This result also shows that the most influential agents are the agents with the highest degrees in the network, which answers RQ2. Rare privacy preference with high degree influencers. Experiments in this setting are performed to investigate the ability of the few most influential users to spread rare privacy preferences.
Rarity is defined as 0.2% of the network being infected with the privacy preference. In other words, 0.2% of the agents will be infected for the context tuples above. Furthermore, the top 1% of the agents, with regard to degree, are assigned the rare privacy preference. With this setting, it is hypothesized that:
• 2a the influence of the top 1% of agents on the spread of the rare privacy preference will be significantly higher than in the baseline simulation setting.
After running the simulations, these were the results. The influence ratings of the best-connected agents rise when they are tasked with spreading a rare privacy preference in 76 out of 81 parameter settings when opposing privacy preference dynamics are equal to their unopposed counterparts. Hypothesis 2a holds in all parameter settings when opposing privacy preferences are assumed to be static. This result further supports the idea that the agents with the highest degree in the OSN are the most influential in the spread of privacy preferences, as an answer to RQ2.
Trust settings. The following trust settings are simulated: (i) Static trust investigates how privacy preferences spread when agents trust each other to some degree and this level of trust remains static throughout an experiment. (ii) Dynamic trust investigates how privacy preferences spread when agents trust each other to some degree and this level of trust changes over time. (iii) Dynamic trust with two levels of privacy violations investigates how privacy preferences spread when agents model trust based on two types of privacy violations. Before running these simulations, simulations are run in which all trust values are 0 throughout; these are used as sanity checks to ensure that without trust, no new infections can appear. For all three of these settings, the following claims are expected to hold:
• Hypothesis 4a: the epidemics will be shorter than in the baseline simulations, as the inclusion of trust negatively influences the infection rate.
• Hypothesis 4b: agents' influence on the spread of privacy preferences will be diminished compared to the baseline simulations, since how much an agent is trusted by their neighbours now becomes a factor as well.
• Hypothesis 4c: epidemic peaks will be lower than in the baseline simulations.
An example of the state dynamics when trust is introduced can be seen in Fig. 3b. It is noticeable that the decrease in the number of infected agents is quicker than in the baseline simulation where all rates are equal (Fig. 3a). This is because privacy violations are now registered and they affect the general infection rate of the <Night, Work> privacy preference. Simulation results show that when opposing privacy preference dynamics are equal to their unopposed counterparts, the introduction of trust modelling does not reduce the overall impact of privacy preference epidemics in all settings with regard to epidemics ending sooner and peaking lower. However, the influence of agents is reduced when trust modelling is introduced in the DIPP model. When opposing privacy preferences are assumed to be static, the introduction of trust modelling does reduce the impact of privacy preference epidemics. This result suggests that the introduction of trust modelling reduces the impact of privacy preference epidemics due to privacy violations, which answers RQ3.
5 Conclusion

Simulations using the DIPP model show that it provides a solid foundation for modelling the spread of privacy preferences. Well-connected agents are the most influential in the spread of privacy preferences in the DIPP model. Trust modelling allows agents to protect themselves against unwanted privacy preferences. In the current model, an agent shares content and tries to recover from an infection at each step in the simulation, which may not be realistic. In the state dynamics graphs, the number of recovered agents increases from step 0 in a very similar fashion across different settings. A study of Facebook user sharing habits2 suggests that the assumed uniformity in OSN use across all agents is not realistic. In further iterations of the DIPP model, the probability that agents take certain actions could be made configurable. Such extensions to the DIPP model can easily be made due to the modularity of our implementation. This study did not examine the influence of network structure on the spread of privacy preferences. It is clear that, although the network used is realistic, network structure can influence the results of our experiments. This is a point of attention for further research on this topic.
2 https://www.frac.tl/work/marketing-research/facebook-user-sharing-habits-study/.
Explaining and Resolving Norm-Behavior Inconsistencies—A Theoretical Agent-Based Model Marlene C. L. Batzke and Andreas Ernst
Abstract It is often assumed that once people have internalized a norm, they behave accordingly. However, empirical research has repeatedly shown inconsistencies between personal norms and behavior. In order to provide a better understanding of these inconsistencies and thus of norm-based dynamics, we developed an agent-based model that includes a norm internalization process as well as a theory on how personal norms translate into behavior. The internalization process is embedded in a psychological theory of decision-making, containing different types of norms and other motivational factors. That allows investigating the behavioral consequences of internalized norms, explaining norm-behavior inconsistencies and exploring possibilities for their resolution. The agent-based DINO model was implemented within the context of a social dilemma game. The model illustrates how personal norms become behaviorally effective, while agents are also able to develop conflicting personal norms and to behave contrary to their internalized norms. Reasons for norm-behavior inconsistencies are analyzed and different norm-based interventions are tested regarding their efficacy to resolve norm-behavior inconsistencies. The DINO model shows the crucial role of not just adopting a normative belief but also rejecting conflicting ones. Keywords Norms · Internalization · Conflict · Learning · Social Dilemma
1 Theoretical Background

1.1 Norm-Behavior Inconsistencies

The power of norms has long been well known in psychology (e.g. [5, 35]). In the past decades, norm-based interventions have received great attention (e.g. [12]). Contrary to policies in the form of mandates and bans or economic incentives, information-based norm interventions often take the form of nudges, preserving people's liberty in
decision-making [36]. Norm-based interventions have been shown to promote prosocial behavior [24] and pro-environmental behavior [1] and to reduce risky behavior [2], to name just a few examples. Whereas they have sometimes proven effective in changing behavior, at other times they have not (for a review, see [26]). It is assumed that the key mechanism in achieving behavioral change is internalization of the respective norm [34]. This builds on the assumption that apart from the subjective representations of socially shared norms, people also develop their individual personal norms, a process referred to as norm internalization [22, 23].

In social simulation, an internalized norm is often considered to translate perfectly into behavior. For instance, Epstein [15] assumed that internalization is blind conformism with a norm, without thinking about it. In line with that assumption, Andrighetto and colleagues [4] introduced a cognitive model of norm internalization: the EMIL-I-A architecture. Once an EMIL-I-A agent has internalized a norm (and the social norm is still salient), the agent will stop the normative deliberation and comply with it. This conceptualizes an internalized norm as an automatism that disregards other normative and non-normative influences on decision-making. However, empirical research has shown that one can explain actual behavior based on people's personal norms only to a limited extent (e.g. [7, 20]). This discrepancy is also mirrored in the attitude-behavior gap (e.g. [10]). Scholars are still trying to figure out when and why behavioral effectiveness of personal norms can be expected.

In the present work, we aim to provide a better understanding of the mechanisms underlying norm-based interventions, focusing on the discrepancies between personal norms and behavior, which we call norm-behavior inconsistencies. Whereas one might argue that behavioral compliance is more important than norm-behavior consistency, research shows that long-term behavioral change results from internal change [19, 30]. Although this seems psychologically plausible, there is little knowledge about what exactly this internal change needs to look like. Social simulation represents a suitable complementary approach to this question, making it possible to directly test the effects of internal states on behavioral outcomes. The addition of simulation methods to empirical ones seems especially worthwhile for a concept such as norm internalization that is challenging to pin down methodologically, for example due to confounding with behavioral measures, people's lack of introspection, their proneness to socially desirable responding, or other biases.

In order to investigate norm-behavior inconsistencies, we developed an agent-based model that includes a norm internalization process (i.e. the process of developing personal norms) as well as a theory on how personal norms translate into behavior that allows for empirically observed inconsistencies to arise. The internalization process is embedded in a psychological theory of decision-making, containing different types of norms as well as other motivational factors. That allows us to:
1. explain norm-behavior inconsistencies and
2. explore possibilities for their resolution.
1.2 A Norm-Based Theory of Decision-Making

The underlying norm-based theory of decision-making is based on an extension of the theory of reasoned action [18]. We consider six direct behavioral motivators. First, there are an individual's inner strivings, which we call its goals. Based on Deutsch's [14] social value orientations, we assume three different goals that agents may strive for: the individualistic, cooperative and competitive goal. Second, there are the outer, social influences, namely social norms. Generally, we define a norm as a behavioral rule for a specific situation and differentiate between its subject (social vs. personal) and quality (descriptive vs. injunctive). Social norms are norms shared by several individuals, whereas personal norms are norms of an individual. Hence, we consider two types of social norms: social descriptive norms ("what others usually do") and social injunctive norms ("what others consider (in)appropriate") [11]. Third, we include habits in our theory of decision-making, which we conceptualize as personal descriptive norms ("what I usually do"). They have been shown to explain additional variance in behavioral decisions [38]. On top of these six direct behavioral motivators, we consider norm internalization as the process of learning personal injunctive norms ("what I consider (in)appropriate"), influencing decision-making indirectly.

In line with the theory of reasoned action and other expectancy-value models (e.g. [3, 6]), each motivational factor is characterized by a situation variable, representing the subjective perception of the situation, and a personality variable, representing interindividual differences in the importance attached to that factor. We assume that personal injunctive norms arise in the interaction of situation and person, representing an individual learning effect that depends on personality and past experiences.

How does norm internalization proceed? We conceptualize norm internalization as a motivated reasoning process after a decision has been made [8, 16, 33]. To make the normative judgement, one looks for reasons that support the decision. Based on this evaluation, the personal injunctive norm that relates to the last chosen action can be accepted (representing a belief of appropriateness) or rejected (representing a belief of inappropriateness) [9].

How do personal injunctive norms influence decision-making? Norm internalization influences decision-making on a higher level, emphasizing or inhibiting the importance of the other motivational factors. Hence, personal injunctive norms influence decision-making indirectly. This relates to the idea of integrating a norm into the personality and thereby transforming the individual [31, 37].

Personality is represented by the entirety of personality variables. In order to reduce the number of possible personalities while still showing the range of personalities that the model may portray, we developed seven psychologically plausible personality types. They range from strong cooperators to strong defectors [27, 28]. Intermediate types (i.e. conditional cooperators) show greater susceptibility to social influences and are internally more conflicted, having more contradictory motivations [17].
2 Method

The agent-based DINO model (Dynamics of Internalization and Dissemination of Norms) was implemented within the context of an iterated 3-person social dilemma game, plausibly describing the core of a conflict common to many situations [13]. Due to the complexity of our theoretical assumptions, a simple game-theoretical environment was chosen for a first implementation and testing of the theory. Since three agents I = {1, 2, 3} already represent a small group, the number of agents can easily be increased in future model extensions while the same logic applies. Each time step, an agent i ∈ I chooses one of two behavioral actions a_i ∈ {0, 1}: cooperation (a_i = 1), representing the prosocial choice, or defection (a_i = 0), representing the egoistic choice. When individual i cooperates, each individual receives a benefit b; however, i must incur a cost c. The payoff of i (P_i) is a function of its own action and the actions of the others. The payoff for individual 1 is given by:

P_1 = 1 + (a_1 + a_2 + a_3) b − a_1 c, with 3b > c > b > 0    (1)

This features the typical characteristics of a social dilemma game. We used a payoff matrix with b = 1 and c = 2. Model execution terminates after 300 time steps, a period after which parameters have stabilized in most cases.
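As a quick check of Eq. (1), the following sketch computes the payoffs of all three agents; the function name and signature are ours, but the formula and the reported parameter values b = 1 and c = 2 are taken from the text.

```python
def payoffs(actions, b=1, c=2):
    """Payoffs for the three agents in the social dilemma game of Eq. (1).

    actions: tuple of three 0/1 choices (1 = cooperate, 0 = defect).
    The constraint 3b > c > b > 0 holds for the reported values b = 1, c = 2.
    """
    total = sum(actions)
    return [1 + total * b - a_i * c for a_i in actions]

# Example: two cooperators and one defector.
print(payoffs((1, 1, 0)))  # [1, 1, 3] -- the defector free-rides on the cooperators
```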
2.1 Agents' Decision-Making

The decision-making architecture was inspired by the agent-based KIS model [29]. Agents use a weighted multi-attribute utility matrix for decision-making, with the six direct motivational factors (individualistic goal (IND), cooperative goal (COOP), competitive goal (COMP), social descriptive norm (SDN), personal descriptive norm (PDN) and social injunctive norm (SIN)) in the columns and the action options (cooperation and defection) in the rows. The action strengths of the motivational factors of an agent i are specific to a behavioral action (C for cooperation and D for defection) and are situationally adapted over time t. Agent heterogeneity is represented in agents' personality, i.e. the weights w of the motivational factors. The intention for each action is the weighted sum of the respective action strengths, described through the following function, exemplified for the intention to cooperate:

intention_C,i,t = w_IND,i,t · IND_C,i,t + w_COOP,i,t · COOP_C,i,t + w_COMP,i,t · COMP_C,i,t + w_SDN,i,t · SDN_C,i,t + w_PDN,i,t · PDN_C,i,t + w_SIN,i,t · SIN_C,i,t    (2)

Agents perform the action with the higher intention.
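A minimal sketch of this weighted multi-attribute choice is given below. The dictionary-based data structures and function names are our own illustration of Eq. (2), not the DINO source code.

```python
FACTORS = ["IND", "COOP", "COMP", "SDN", "PDN", "SIN"]

def intention(weights, action_strengths, action):
    """Weighted sum of action strengths for one action ('C' or 'D'), as in Eq. (2).

    weights: dict factor -> personality weight w of this agent at time t
    action_strengths: dict (factor, action) -> current action strength
    """
    return sum(weights[f] * action_strengths[(f, action)] for f in FACTORS)

def choose_action(weights, action_strengths):
    # Agents perform the action with the higher intention.
    return max(("C", "D"), key=lambda a: intention(weights, action_strengths, a))
```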
2.2 Norm Internalization and Its Effects on Decision-Making

As psychological research suggests that all the motivational factors in decision-making also influence norm internalization, agents evaluate the intention of the last chosen action, which combines all of the factors. Based on the maximum intention that could be achieved given their personality, agents calculate the resulting relative intention and compare it to an individual threshold. Based on this evaluation, an agent accepts or rejects the personal injunctive norm that relates to the last chosen action in a stepwise process, according to the following adaptation rule:

if relative intention_i,t > individual threshold, then PIN_i,t + PIN-step-size; otherwise PIN_i,t − PIN-step-size
Personal injunctive norms influence decision-making indirectly, emphasizing or inhibiting the importance of the other motivational factors. Agents check for each motivational factor whether it supports the same action as a personal injunctive norm (by comparing the two action strengths of that motivational factor). Through multiplying the weights by the matching personal injunctive norm, personal injunctive norms reinforce norm-consistent factors or inhibit norm-inconsistent factors. Agents' weights are adapted according to the following rule, exemplified for the cooperative goal:

if COOP_C,i,t > COOP_D,i,t, then w_COOP,i,t · PIN_C,i,t; otherwise w_COOP,i,t · PIN_D,i,t
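The two adaptation rules above can be read as the following sketch. It only illustrates the stepwise acceptance/rejection and the multiplicative weight update; bounds on the PIN values, normalisation of weights and the exact update order are not specified in the excerpt and are therefore left out.

```python
FACTORS = ["IND", "COOP", "COMP", "SDN", "PDN", "SIN"]

def internalize(pin, last_action, relative_intention, threshold, step_size):
    """Stepwise acceptance or rejection of the personal injunctive norm (PIN)
    attached to the last chosen action ('C' or 'D')."""
    if relative_intention > threshold:
        pin[last_action] += step_size   # accept: belief of appropriateness grows
    else:
        pin[last_action] -= step_size   # reject: belief of inappropriateness grows
    return pin

def adapt_weights(weights, action_strengths, pin):
    """Multiply each factor's weight by the PIN of the action that factor supports,
    reinforcing norm-consistent factors and inhibiting norm-inconsistent ones."""
    for f in FACTORS:
        supported = "C" if action_strengths[(f, "C")] > action_strengths[(f, "D")] else "D"
        weights[f] = weights[f] * pin[supported]
    return weights
```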
3 Results

We define norm-behavior inconsistencies as the discrepancy between the stronger personal injunctive norm (there are two, one for cooperation, C-PIN, and one for defection, D-PIN) and the behavior at a specific point in time. Results are analyzed across 84 different player combinations, since the seven developed agent types can form 84 different unordered groups of three (the number of multisets of size three drawn from seven types).
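The count of 84 constellations can be checked by enumerating all unordered groups of three drawn, with repetition, from seven types (the type labels below are placeholders):

```python
from itertools import combinations_with_replacement

agent_types = ["type_1", "type_2", "type_3", "type_4", "type_5", "type_6", "type_7"]
groups = list(combinations_with_replacement(agent_types, 3))
print(len(groups))  # 84
```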
3.1 Explaining Norm-Behavior Inconsistencies

In the DINO model, personal injunctive norms are strongly associated with the respective norm-consistent behavior (r_D-PIN = 0.49 and r_C-PIN = 0.93). However, internalized norms do not necessarily result in the according behavior. Whereas the model shows that an accepted personal injunctive norm strengthens the intention to show the according behavior, the behavioral intention conforming to the internalized
norm might still be weaker than the other behavioral intention. In these cases, the personal injunctive norm is not behaviorally effective and inconsistencies between the internalized norm and the chosen behavior arise. These discrepancies trace back either to weak motivational factors in favor of the internalized norm or to strong motivators inconsistent with the internalized norm.

What are possible causes for the emergence of inconsistencies? Inconsistencies show a medium-sized correlation with the defectiveness of the group, which was determined from agents' personalities (r = 0.29). Hence, the more defective a group constellation, the less agents behave according to their internalized norm. Furthermore, as it seems plausible to assume that the relation of the two personal injunctive norms affects inconsistencies, we correlated the discrepancy between the two personal injunctive norms with norm-behavior inconsistencies (r = 0.15). However, this effect is most likely overridden by the effect of time, since the discrepancy between personal injunctive norms increases with time (r = 0.61) and inconsistencies decrease with time (r = −0.43). Therefore, we correlated the discrepancy between personal injunctive norms and inconsistencies only for the first 100 time steps of the game. This shows that the two are highly negatively associated (r = −0.72), indicating that a stronger discrepancy between the two personal injunctive norms is associated with fewer norm-behavior inconsistencies.
3.2 Resolving Norm-Behavior Inconsistencies

We tested different norm-based interventions regarding their effects on agents' behaviors as well as their effects on norm-behavior inconsistencies. We operationalized norm-based interventions through manipulating one type of norm in all agents. Since one type of norm is represented through two action strengths, we manipulated both action strengths to the same degree in opposite directions (i.e. strengthening one action strength and weakening the other). We varied the strength of the manipulation and the number of repetitions (ranging from one-time interventions to interventions every few time steps). Interventions begin after 50 time steps, allowing agents to fully internalize a norm beforehand. The first 50 time steps were excluded from analyses. We tested the effects of social descriptive norm, social injunctive norm and personal injunctive norm interventions (see Fig. 1); a sketch of the intervention operator follows at the end of this section.

Regarding the behavioral effects, both social norm interventions prove only slightly effective in lastingly changing the different agents' behaviors in different group constellations. The small effects in promoting cooperation or defection are achieved when social norms are manipulated strongly in the respective direction and the intervention is repeated frequently. Concerning the effects on norm-behavior inconsistencies, both social norm interventions prove ineffective in reducing inconsistencies. Interventions promoting defective behavior rather intensify them. Hence, the behavioral change motivated through social norm interventions may even increase norm-behavior inconsistencies. Interventions targeting agents' personal injunctive norms prove highly effective in promoting a specific behavior in
Fig. 1 Effects of social descriptive norm, social injunctive norm and personal injunctive norm interventions on average cooperation (figures on the left) and norm-behavior inconsistencies (figures on the right). Interventions vary along their direction and strength on the x-axis, ranging from strong interventions promoting cooperation (C) to strong interventions promoting defection (D). “None” represents the baseline where no intervention was applied. Interventions also differ in their frequency (y-axis), ranging from one-time interventions (“once”) to interventions repeated every 5 rounds. Average cooperation and norm-behavior inconsistencies are measured across agents, across 250 time steps and across 84 different agent group combinations
Fig. 2 Effects of personal injunctive norm interventions on average cooperation (left) and norm-behavior inconsistencies (right). Interventions addressing personal injunctive norms to cooperate (x-axis) and personal injunctive norms to defect (y-axis) are applied independently from each other. “−” and “+” indicate interventions promoting either the rejection or acceptance of a personal injunctive norm. “0” represents no intervention. Average cooperation and norm-behavior inconsistencies are measured across agents, across 250 time steps and across 84 different agent group combinations. Interventions are repeated every 5 rounds
different agent types and group constellations lastingly. Moreover, personal injunctive norm interventions promoting cooperation reduce agents' norm-behavior inconsistencies. The model suggests that rejecting the personal injunctive norm to defect and accepting the personal injunctive norm to cooperate leads to behavior that is highly consistent with the internalized norms, namely cooperation. These results, however, leave unresolved which of the two is the driving force behind resolving inconsistencies: the rejection of the personal injunctive norm to defect or the acceptance of the personal injunctive norm to cooperate. Therefore, we conducted another experiment intervening on both personal injunctive norms independently from each other (see Fig. 2). The model shows that especially the rejection of the personal injunctive norm to defect leads to consistency between agents' internalized norms and their behavior.
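The intervention operator referred to above can be sketched as follows: one norm type is shifted by the same amount in opposite directions for its two action strengths, and the intervention is optionally repeated on a fixed schedule. Function names, data structures and the scheduling condition are illustrative assumptions, not the DINO code.

```python
def intervene(action_strengths, norm_type, strength):
    """Shift the two action strengths of one norm type in opposite directions.

    A positive strength promotes cooperation, a negative strength promotes defection.
    norm_type could be, e.g., 'SDN', 'SIN' or a personal injunctive norm.
    """
    action_strengths[(norm_type, "C")] += strength
    action_strengths[(norm_type, "D")] -= strength
    return action_strengths

def intervention_due(t, start=50, every=5):
    """Repetition schedule: interventions start after 50 time steps and, in the
    densest setting reported, are repeated every 5 rounds."""
    return t >= start and (t - start) % every == 0
```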
4 Discussion

The present paper aimed to provide a better understanding of norm-behavior inconsistencies. Whereas they are well known in empirical research [7], they have so far not been explored through social simulation. Experimental studies suggest that norm-based interventions are partly effective in promoting behavioral change [26], yet mostly internal changes within people account for lasting behavioral changes [30]. Social simulation approaches can help gain a better understanding of the relation between internal changes and behavioral changes and thus of the mechanisms of norm-based interventions. The process of norm internalization is considered such a process of internal change, describing the adoption of normative beliefs into
the self [23], transforming the individual [37]. The DINO model represents a first simulation-based approach towards understanding inconclusive empirical relations between internalized norms and behaviors. It illustrates how personal injunctive norms develop and become behaviorally effective. It allows investigating the conditions of norm-behavior consistency and testing norm-based interventions regarding their ability to foster consistency.
4.1 Explaining and Resolving Norm-Behavior Inconsistencies

In line with empirical research, DINO agents are, under certain circumstances, able to behave against their internalized norms. Especially non-cooperative groups increase norm-inconsistent behavior. Hence, cooperation of rather cooperative agents cannot be upheld in highly defective environments. Inconsistencies decrease across time, which suggests that cooperative agents assimilate to their environments over time and adapt their personal injunctive norms. This traces back either to agents' complete rejection of both personal injunctive norms or to their acceptance of the environment-matching personal injunctive norm. Either way, the DINO model suggests that norm internalization is highly dependent on the social environment (cf. [21]) and that defective environments spur like-minded behavior even contrary to internalized norms. Internal conditions that reduce inconsistencies are strongly differing personal injunctive norms. Thus, the simulation results lead to the assumption that reducing conflicts between personal injunctive norms could foster norm-behavior consistency, and hence that their relative strength is of importance [25].

Testing different norm-based interventions, the DINO model illustrates the effects of social norm interventions. In line with research, social norm interventions are generally slightly successful in promoting a specific behavior. However, the behavioral change is not necessarily accompanied by internalization of the according norm. Hence, they do not foster norm-behavior consistency but often rather increase inconsistencies, which makes them effective only when repeated very often. Personal injunctive norm interventions are highly effective in promoting a behavior. Especially the rejection of the personal injunctive norm to defect is associated with norm-behavior consistency. This traces back to the fact that in a situation in which all agents act egoistically, DINO agents do not fulfill any goal. Hence, defection of all agents is a less stable state than cooperation of all agents (where at least the cooperative goal is met). Therefore, collective defection is less associated with accepting a personal injunctive norm and hence with norm-behavior consistency in the DINO model. From a psychological perspective, it seems plausible that collective defection is unsatisfying for everyone; however, there are many real-world examples where collective defection persists for a long time due to reasons that the simplicity of the present game-theoretical environment does not account for.
4.2 Limitations and Future Research

This leads to one of the major constraints of the present agent-based model: the simplistic social dilemma environment. There are many limitations to the underlying game-theoretical application, such as the lack of environmental influences (apart from the social ones), the duality of the behavioral options, the static payoff matrix, etc., which call the applicability of the model results into question. On the one hand, the conditions may appear very artificial. On the other hand, they allow rigorously testing the conditions and isolated effects of norm internalization based on the incorporated factors, which may still illustrate fundamental dynamics. In future research, we aim to test the validity of the simulation results in different and more complex game-theoretical applications and to increase the number of agents.

Another relevant concern regarding the present work addresses the importance of investigating internal inconsistencies instead of merely focusing on behavioral change. Achieving high-impact behavioral change in areas as pressing as climate-related behaviors is of such high relevance that the underlying internal states may seem secondary. Moreover, the actual effectiveness of norm-based interventions can only be evaluated in context-specific experimental settings, so why simulate? Social simulation makes it possible to test the direct effects of internal states on behavioral choices, which are impossible to investigate empirically. We believe that knowledge regarding the relation of internalized norms and behavior can help make norm-based interventions more effective. Moreover, new leverage points for interventions can be deduced, such as reducing the acceptance of behavior-contradicting personal injunctive norms. Whereas research so far has mostly focused on enforcing behavior-consistent norms, this represents a potentially underrated intervention approach that could be tested empirically. An open question so far is how the acceptance or rejection of a specific norm can be achieved. In psychology, there are various approaches that seem promising in that respect (e.g. motivational interviewing [32]). Simulation research should further investigate the facilitating conditions and factors for norm internalization.
4.3 Conclusion

The DINO model represents a first simulation-based approach towards a better understanding of norm-behavior inconsistencies, allowing agents to behave contrary to their internalized norms. It illustrates the internal mechanisms of successful norm-based interventions and suggests the importance of reducing conflicts between different internalized norms. Reducing norm-behavior inconsistencies is a promising means for fostering lasting behavior change.
References 1. Abrahamse, W., Steg, L., Vlek, C., Rothengatter, T.: A review of intervention studies aimed at household energy conservation. J. Environ. Psychol. 25(3), 273–291 (2005) 2. Agostinelli, G., Brown, J.M., Miller, W.R.: Effects of normative feedback on consumption among heavy drinking college students. J. Drug Educ. 25(1), 31–40 (1995) 3. Ajzen, I.: The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 50, 179–211 (1991) 4. Andrighetto, G., Villatoro, D., Conte, R.: Norm internalization in artificial societies. AI Commun. 23(4), 325–339 (2010) 5. Asch, S.E.: Opinions and social pressure. Sci. Am. 193(5), 31–35 (1955) 6. Atkinson, J.W.: Motivational determinants of risk-taking behavior. Psychol. Rev. 64, 359–372 (1957) 7. Bamberg, S., Schmidt, P.: Incentives, morality, or habit? Predicting students’ car use for university routes with the models of Ajzen, Schwartz, and Triandis. Environ. Behav. 35(2), 264–285 (2003) 8. Bem, D.J.: Self-perception: an alternative interpretation of cognitive dissonance phenomena. Psychol. Rev. 74(3), 183 (1967) 9. Bicchieri, C.: The Grammar of Society: The Nature and Dynamics of Social Norms. Cambridge University Press (2006) 10. Bray, J., Johns, N., Kilburn, D.: An exploratory study into the factors impeding ethical consumption. J. Bus. Ethics 98(4), 597–608 (2011) 11. Cialdini, R.B., Reno, R.R., Kallgren, C.A.: A focus theory of normative conduct: recycling the concept of norms to reduce littering in public places. J. Pers. Soc. Psychol. 58(6), 1015–1026 (1990) 12. Cialdini, R.B., Demaine, L.J., Sagarin, B.J., Barrett, D.W., Rhoads, K., Winter, P.L.: Managing social norms for persuasive impact. Soc. Influ. 1(1), 3–15 (2006) 13. Dawes, R.M.: Social dilemmas. Annu. Rev. Psychol. 31, 169–193 (1980) 14. Deutsch, M.: Trust and suspicion. J. Conflict Resolut. 2(3), 265–279 (1958) 15. Epstein, J.M.: Learning to be thoughtless: Social norms and individual computation. Comput. Econ. 18(1), 9–24 (2001) 16. Festinger, L.: A Theory of Cognitive Dissonance. Stanford University Press (1957) 17. Fischbacher, U., Gächter, S.: Social preferences, beliefs, and the dynamics of free riding in public goods experiments. Am. Econ. Rev. 100(1), 541–556 (2010) 18. Fishbein, M., Ajzen, I.: Belief, Attitude, Intention, and Behavior: an Introduction to Theory and Research. Addison-Wesley (1975) 19. Henn, L., Otto, S., Kaiser, F. G.: Positive spillover: the result of attitude change. J. Environ. Psychol. 69, 101429 (2020) 20. Hines, J.M., Hungerford, H.R., Tomera, A.N.: Analysis and synthesis of research on responsible environmental behavior: a meta-analysis. J. Environ. Educ. 18(2), 1–8 (1987) 21. Hoffman, M. L.: Moral internalization: Current theory and research. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 10, pp. 85–133). Academic Press (1977) 22. Hoffman, M. L.: Empathy and Moral Development: Implications for Caring and Justice. Cambridge University Press (2000) 23. Kohlberg, L.: Essays on Moral Development. The psychology of moral development (Vol. 2). Harper & Row Publishers, Inc (1984) 24. Krupka, E., Weber, R.A.: The focusing and informational effects of norms on pro-social behavior. J. Econ. Psychol. 30(3), 307–320 (2009) 25. Lindenberg, S., Steg, L.: Normative, gain and hedonic goal frames guiding environmental behavior. J. Soc. Issues 63(1), 117–137 (2007) 26. Miller, D.T., Prentice, D.A.: Changing norms to change behavior. Ann. Rev. Psychol. 67, 339–361 (2016)
27. Murphy, R. O., Ackermann, K. A.: Explaining behavior in public goods games: how preferences and beliefs affect contribution levels (2013). Available at SSRN: https://ssrn.com/abstract=2244895. https://doi.org/10.2139/ssrn.2244895 28. Murphy, R.O., Ackermann, K.A., Handgraaf, M.: Measuring social value orientation. Judgm. Decis. Mak. 6(8), 771–781 (2011) 29. Nerb, J., Spada, H., Ernst, A. M.: A cognitive model of agents in a commons dilemma, in Proceedings of the 19th annual conference of the Cognitive Science Society (pp. 560–565) (1997) 30. Otto, S., Kaiser, F.G.: Ecological behavior across the lifespan: Why environmentalism increases as people grow older. J. Environ. Psychol. 40, 331–338 (2014) 31. Piaget, J.: Piaget’s theory. In P. Mussen (Ed.), Carmichaels’ manual of child psychology (3rd ed., Vol. I, pp. 703–732). Wiley (1970) 32. Rollnick, S., Miller, W.R.: What is motivational interviewing? Behav. Cogn. Psychother. 23(4), 325–334 (1995) 33. Rozin, P.: The process of moralization. Psychol. Sci. 10(3), 218–221 (1999) 34. Ryan, R. M., Deci, E. L.: Self-determination Theory: Basic Psychological Needs in Motivation, Development, and Wellness. The Guilford Press (2017) 35. Sherif, M.: An experimental approach to the study of attitudes. Sociometry 1(1/2), 90–98 (1937) 36. Sunstein, C.R.: Nudging: a very short guide. Bus. Econ. 54, 127–129 (2019) 37. Vygotsky, L.: The genesis of higher mental functions. In J. Wertsch (Ed.), The concept of activity in Soviet psychology (pp. 147–188). Sharpe, Inc (1981). [originally published 1930] 38. Whitmarsh, L., O’Neill, S.: Green identity, green living? The role of pro-environmental self-identity in determining consistency across diverse pro-environmental behaviours. J. Environ. Psychol. 30(3), 305–314 (2010)
Towards More Realism in Pedestrian Behaviour Models: First Steps and Considerations in Formalising Social Identity Nanda Wijermans and Anne Templeton
Abstract Agent-based models of group behaviour often lack evidence-based psychological reasons for the behaviour. Similarly, pedestrian behaviour models focus on modelling physical movement while ignoring the psychological reasons leading to those movements (or other relevant behaviours). To improve realism, we need to be able to reflect behaviour as a consequence of feeling part of a psychological group, so that we better understand why collective behaviour occurs under different circumstances. The social identity approach has been recognised as a way of understanding within- and between-group dynamics, as well as the processes that make an individual act as a group member. However, as promising as the social identity approach is, its formalisation is a challenging endeavour, since different choices can be made to reflect the core concepts and processes. In this paper we therefore elaborate on a few of these formalisation challenges and the choices we made, to support the formalisation and use of the social identity approach and, ultimately, the increased realism of group behaviour models, such as the pedestrian models that are so heavily used to manage real-world crowds. Keywords Psychological group · Agent-based modelling · Social identity · Self-categorisation theory · Group dynamics
1 Introduction

Computational models of pedestrian behaviour primarily focus on physical movement in physical space, for example through obtaining more realistic speeds [1] or navigation through the environment (e.g. social force and optimal steps models). Models of pedestrian behaviour in groups have ranged from observing the walking
formation of group members [2] to the impact of group size on evacuations [3] and route choice. However, behaviour encompasses more than (physical) movement, and these further aspects can be extremely important for the realism of pedestrian models. How we move is often a result of being in a social context, which can make pedestrian movement in crowds unexpected when regarded only from a physical point of view. Research from social psychology suggests that group dynamics are crucial to understanding collective behaviour, such as why members will put themselves in danger to help others [4], help strangers during a terrorist attack or a fire in a discotheque, or protest in a location that has symbolic meaning [5, 6]. The group dynamics underpinning the collective behaviour in these examples are inherent to many crowd situations and are extremely important for understanding why collective behaviour occurs in a range of contexts. However, little research in pedestrian models has focused on why collective behaviour occurs in groups. To increase their realism, we argue that models of group behaviour should attend to the psychological underpinnings of how collective behaviour occurs between group members, and how two distinct psychological groups interact when in the same physical space. For example, theories of group behaviour are needed to explain why group members congregate together when there is physical space available around them, and why the proximity between group members increases when in the presence of another group [7].

A key challenge for those formalising social theory is that one needs to become more precise about what a theory or explanation means and what causal relations need to be specified, and one needs to make sure that the model is complete and coherent [8]. To make the theory a functional part of the agent-based simulation, the modeller is faced with gaps that allow for multiple interpretations, and the mere act of adding or specifying changes the original description. Even though this is a strength of ABM, as it contributes to theory development, the challenge remains: how do we fill the gaps? Our approach is to co-create these additions together, as an agent-based modeller and a psychologist specialised in SIA. This is not just to avoid the modeller trap of not understanding or appreciating the theory as intended, but also to engage in designing the parts that the theory does not specify in ways one cannot do alone.

In this paper we make steps towards operationalising core aspects of intra- and intergroup behaviour in pedestrian movement. We draw on theoretical aspects of the social identity approach [9] and previous research on the role of group processes in pedestrian movement. Throughout the paper, we demonstrate our rationale for selecting the necessary aspects of the theory, and our methodology for operationalising the theory, based on previous research, in two case study scenarios of pedestrian behaviour in intragroup and intergroup settings. Our aim is to enable deep discussions about formalising SIA as well as to support others in formalising SIA in their respective domains or projects.
2 Social Identity Approach: The Basics and Pedestrians

The social identity approach is a prominent approach within social psychology to explain collective behaviour. It has been applied to understanding collective behaviour at mass gatherings [10], evacuations and disasters [4, 11], and event safety management [12]. It has been highlighted as a core social theory that should be used, or at least be within reach, for social simulators to improve the realism of simulated group behaviour (for examples, see [7, 13, 14]).

The social identity approach consists of social identity theory [15] and self-categorisation theory [16]. Social identity theory posits that people have multiple cognitive concepts of the self, including both personal and social identities. Personal identities refer to individual-level idiosyncratic identities. Social identities refer to our membership in social groups. For example, one may have a social identity as a computer modeller or as a social psychologist. Previous research using social identity theory suggests that group members tend to have more favourable opinions of ingroup members (those in the group) compared to outgroup members (those outside the group). One of these social group memberships can be membership of a psychological crowd [17], where people in the crowd share a sense of belonging to the same group. This is different to a physical crowd, which is composed of individuals or small subgroups who happen to be in the same physical space but without a sense that they are joined in a meaningful group. Having higher social identification (feeling more strongly like a group member) with others in the crowd can influence perceptions and behaviour, such as feeling safer in close proximity with others [18] and wanting to be in more central, denser areas of crowds [19].

Self-categorisation theory explains how the self and others are categorised into groups. A person takes on a social identity through a cognitive transformation of depersonalisation, wherein their social identity becomes salient and they see others in their group as more similar to themselves than those outside of the group. The salience of a social identity can change depending on the context, and the meta-contrast ratio is particularly relevant for understanding the dynamic nature of intergroup relations. The meta-contrast ratio states that the salience of a social identity can increase when in the presence of an outgroup, because the perceived differences between the ingroup members are smaller than the perceived differences between the ingroup and outgroup members. Thus, social identification with the ingroup can increase in the presence of an outgroup, and this can increase the effect of the group membership on behaviour. This can be seen in pedestrian behaviour, where ingroup members move into closer proximity when in the presence of an outgroup compared to when they walk without an outgroup present [20] (Fig. 1).

Crucially for pedestrian modelling, research from social psychology has shown that proximity to others is influenced by group relations. People who see others as ingroup members choose to be physically closer to them than to outgroup members, and exhibit higher behavioural coordination across a range of scenarios including emergencies [21] and walking together [20]. For example, Novelli, Drury and Reicher [22] demonstrated that we are more willing to sit closer to ingroup members.
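For readers who want a concrete handle on the meta-contrast ratio, one common formalisation from self-categorisation theory is the mean perceived difference between ingroup and outgroup members divided by the mean perceived difference among ingroup members; a larger ratio favours salience of the shared identity. The one-dimensional "positions" and the function below are our own illustrative simplification, not part of the model described in this chapter.

```python
def meta_contrast_ratio(ingroup, outgroup):
    """Mean between-group difference divided by mean within-ingroup difference.

    ingroup, outgroup: lists of scalar 'positions' (e.g. perceived attitudes).
    A ratio well above 1 means the ingroup looks comparatively homogeneous next
    to the outgroup, which can raise the salience of the shared social identity.
    """
    between = [abs(i - o) for i in ingroup for o in outgroup]
    within = [abs(ingroup[i] - ingroup[j])
              for i in range(len(ingroup)) for j in range(i + 1, len(ingroup))]
    if not between or not within or sum(within) == 0:
        return float("inf")  # single-member or perfectly homogeneous ingroup
    return (sum(between) / len(between)) / (sum(within) / len(within))

# Example: a tight ingroup next to a distant outgroup yields a high ratio.
print(meta_contrast_ratio([0.1, 0.2, 0.15], [0.8, 0.9]))
```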
Fig. 1 Overview of core concepts and mechanisms/processes that SIA uses to explain the behaviour of individuals as part of a group
Alnabulsi and Drury [18] demonstrated that the more pilgrims at the Hajj felt others in the crowd were in the same group as them, the safer they felt. Results by Novelli et al. [19] suggest that the more festival-goers saw others in the crowd as being in the same group, the more motivated they were to go to denser, more central areas of the crowd, because this was associated with a positive experience. It is notable that in these studies social identification is not treated as a binary, i.e., it is not simply on or off. Instead, the strength of identification exists on a continuum, where higher social identification with the group is associated with stronger effects on perceptions and behaviour.

Despite the breadth of research into how social identity processes influence group behaviour, pedestrian models have primarily focused on topics such as obstacle avoidance, the role of group size in evacuations, and obtaining realistic heterogeneity of pedestrians. Very few models have addressed the theoretical underpinnings of what makes a group, nor have they incorporated principles of the social identity approach. To increase the realism of pedestrian models, we set out our formalisation of different aspects of the social identity approach, and we lay out the challenges and decisions throughout the process of incorporating theoretical principles into an agent-based model (Table 1).
3 Social Identity Approach Formalisation

The SIA-PED model is a pedestrian model in which aspects of the social identity approach are formalised to increase social realism at the agent level, and thereby the realism of the actual movement dynamics of pedestrians, in order to advance crowd evacuation research and management. Inspired by behavioural experiments [23] on the role of social identity
Table 1 Overview of core social identity approach concepts we highlight in this paper, with a short description adapted from [7]

SIA concept | Description
Personal identity | One's distinct individual characteristics and qualities
Social identity | Cognitive self-representation as a member of a social group
Salience | The extent to which a social identity is cognitively present at a particular time
Meta-contrast ratio | When differences between people in the ingroup are perceived as smaller than the differences with the outgroup
Social identification | Refers to how much one identifies as a member of a particular social group
on pedestrian movement, we adopt a similar experimental design with SIA-PED to explore:

• the effect of an individual being part of a physical versus a psychological group on the movement of pedestrians walking in the same direction (flow | intra-group | top of Fig. 2); and
• how the presence of another group affects the movement (counter-flow | inter-group | bottom of Fig. 2).

In the SIA-PED ABM we formalise the relevant aspects of SIA to represent a psychological group and the corresponding influences on behaviour, depending
Fig. 2 The contextual setup we target with SIA-PED to formalise the social identity approach and explore the consequences of including one (top) or two (bottom) physical versus psychological groups
on whether they are walking alone or in the presence of another group walking in the opposite direction (counter-flow). In SIA terms, for an agent to be part of a psychological group it has to have a salient social identity, and we have to describe how a salient identity affects behaviour. When the social identity is salient, then, depending on the degree of social identification (the importance of that social identity), the behaviours that are considered appropriate are those related to being part of a group, in we-terms. When including the presence of another group, perceiving this group as a psychological (important) group increases the social identification with one's own group via the meta-contrast ratio. Even when restricting ourselves to this simple experimental design, several core formalisation decisions already need to be made concerning what the concepts are and what they do. In the remainder of this section we zoom in on some of these core formalisation decisions.
3.1 Formalisation 1: Identity Representation Decision

From SIA, and specifically social identity theory [15], we learn that we have a personal identity and social identities. The personal identity reflects aspects that characterise a person (in distinction from others), which is a matter of reflecting context-relevant attributes on the individual level. Social identity, however, conceptually reflects a group membership and the connected understandings of what it means to be part of a particular group (behaviours, appearances, etc.). There are different imaginable ways to formalise having a personal and a social identity. Whereas a personal identity can easily be imagined as something that is captured within an agent, the social identity is conceptually part of both the individual and the group; it is a relational aspect of the self. How to represent this is a decision that balances pragmatism (what is easiest to programme) against what is truest to one's interpretation of the concept, mediated by the aim and research question addressed by the model.

Personal identity is characterised as agent attributes. The variable ID (personal identity), in combination with ID-salience, is used to identify which identity is dominant (personal versus social). In relation to behaviour, there may additionally be a subset of agent variables related to the personal identity; in our pedestrian context, this could be reflected as a preferred walking speed variable. The social identity, on the other hand, is reflected by an agent having a link with a group, making the social identity a relational representation. This means that the model distinguishes pedestrian and group agents. The attributes of the group agent reflect those variables and actions that represent the group, which assumes consensus on what it means to be in the group, or at least common knowledge of it. These representation choices are summarised in the class diagram in Fig. 3; a minimal code sketch of the same decision follows the two reasons below.

The biggest consideration lies in how to represent the social identity: is it part of the agent itself (individual) or not? We chose to reflect the social identity as a relation for two reasons:
Fig. 3 Class diagram of personal and social identity formalisation
1. Conceptual: it feels more reflective of the way social identity is described, as neither or both part of the individual and the group. It is something that connects an individual with a group; the group has certain characteristics but exists merely through the connections of the individuals.
2. Practical: it is a straightforward way to represent common characteristics and behavioural options that are known to all group members, while at the same time allowing changes in these characteristics and behaviours to emerge over time. This dynamism is important for the realism of such models, as new norms may arise, e.g. helping behaviour during emergencies.
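To make this representation decision concrete, the following minimal sketch (our own, not the SIA-PED code) stores personal-identity attributes in the pedestrian agent and represents each social identity as a link object connecting the pedestrian to a separate group agent, carrying the salience and identification variables discussed in the next subsection.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GroupAgent:
    name: str
    typical_behaviours: List[str] = field(default_factory=list)  # shared "we" repertoire

@dataclass
class SocialIdentityLink:
    group: GroupAgent            # the relational part: identity connects agent and group
    salience: float = 0.0        # how activated this identity is right now
    identification: float = 0.0  # how important this membership is to the agent

@dataclass
class PedestrianAgent:
    preferred_speed: float                                   # personal-identity attribute
    identities: List[SocialIdentityLink] = field(default_factory=list)
```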
3.2 Formalisation 2: Salience versus Social Identification Another important mechanism in SIA describes that when one’s social identity is salient one acts accordingly. It is thus more likely that one displays behaviour considered appropriate in the group. But before getting into how this affects behaviour (Formalisation 3), we focus on representing salience and social identification, two important concepts when one goes deeper into SIA. Recall that salience plays a role in which social identity is ‘activated’ or influencing behaviour at a certain moment.
The context/situation makes one identity more or less salient, and this links strongly to mechanisms such as the ‘meta-contrast ratio’ discussed in the background. The meta-contrast ratio plays a role in how one categorises oneself through comparison with others. The current salience of identities is important in how this assessment is made. Social identification, on the other hand, plays a role in how a salient social identity affects behaviour. It indicates the prominence of a salient social identity, i.e. how important this identity is to you, how far you identify with the group and how much your behaviours are aligned with the group. To represent these concepts and the process towards behaviour, we went through several iterations of discussion and revisiting the literature to interpret them, and stress the following:

• Social identities and personal identity can be seen as ends of a continuum, where the degree of social identification reflects how important that identity is to you.
• Salience determines which social identity is influential at that moment, and the context makes the identity salient (e.g., through the meta-contrast ratio).

We show the conceptualisation and connection of salience and social identification of social identities in Fig. 4. Formalising this conceptualisation is done using two variables (salience and identification) that reside in the relation (link) between the agent and the social group. Determining which identity is salient requires a comparison of the salience of all links, as well as a choice about how social identities influence behaviour. We assume that only one identity (the maximum-salience relation) will influence behaviour, which is sufficient for the purpose we have in the model now; a minimal sketch of this selection follows below. However, this representation also allows for a more complex take on salience when considering ‘social identity complexity’, which questions the idea of one salient social
Fig. 4 Conceptualisation of salience and social identification. Salience indicates which social identity is active or influencing at a certain moment. Social identification is the degree of importance reflected as a continuum between personal and social identity
identity and allows for the role of multiple, conflicting social identities affecting behaviour [24]. This is something we or others may want to unpack in the future.
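The selection step referred to above could look like the following sketch, reusing the SocialIdentityLink sketch from Formalisation 1: the link with the maximum salience drives behaviour, and only if it is also more salient than the personal identity. Treating the personal identity's salience as a comparable number is our own assumption for this illustration.

```python
from typing import List, Optional

def salient_identity(links: List["SocialIdentityLink"],
                     personal_salience: float) -> Optional["SocialIdentityLink"]:
    """Return the maximum-salience social identity link, or None when the
    personal identity currently dominates (a simplifying assumption)."""
    best = max(links, key=lambda link: link.salience, default=None)
    if best is None or best.salience <= personal_salience:
        return None  # act on the personal identity
    return best
```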
3.3 Formalisation 3: Salient Identity → Behaviour

As described above, the theory states that a salient identity makes it more likely that one displays behaviour considered appropriate in the group. To make this concrete for our case: when the social identity is salient, pedestrians tend to seek closer vicinity to others of their group while walking forward. Often this is formalised as the flick of a switch: social identity is on or off, and there is a direct relation to a particular behaviour [25]. From a modelling perspective this choice is understandably pragmatic; however, for most real-life situations it is extremely simplified, as it would mean that anyone who feels part of the group acts in one certain way. This is where the strength of social identification (the degree to which this social identity is important to you) can come in. For our model, we decided to make the behaviour more heterogeneous by making agents try to stay together (affecting their walking speed, closeness and direction of movement) depending on the degree of social identification (Fig. 5); a code sketch of this mapping follows the figure caption. Here we distinguish between low, medium and high levels of social identification, making the influence more granular. Although this decision is in line with the empirical findings of seeking closeness with one's group when the identity is salient, and of increased closeness when social identification is higher, we still feel that the linearity, determinism and heterogeneity in behaviours deserve more reflection, discussion and empirical insight into the processes that lead to adopting certain behaviours.
Fig. 5 Visualisation of the connection between the personal and social identity and typical behaviours when that identity is salient; the pull one receives towards being close to others depends on the level of social identification
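The mapping from identification level to behaviour described above could look like the following sketch. The three thresholds and pull values are placeholders for the low/medium/high distinction, and the blending of the preferred walking direction with a pull towards the group is our own illustration, not the SIA-PED implementation.

```python
def cohesion_pull(identification: float) -> float:
    """Map the strength of social identification to a pull towards the group
    (low / medium / high bins; the numeric values are placeholders)."""
    if identification < 0.33:
        return 0.2
    if identification < 0.66:
        return 0.5
    return 0.9

def desired_velocity(preferred_speed, heading, towards_group, identification, salient):
    """Blend the individually preferred heading with a pull towards the group
    centroid when the social identity is salient. Vectors are 2-D tuples."""
    if not salient:
        return (preferred_speed * heading[0], preferred_speed * heading[1])
    w = cohesion_pull(identification)
    return (preferred_speed * ((1 - w) * heading[0] + w * towards_group[0]),
            preferred_speed * ((1 - w) * heading[1] + w * towards_group[1]))
```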
4 Discussion and Conclusion

In this paper we highlight some core conceptual-to-code decisions and reasoning in modelling the social identity approach (SIA). SIA is considered a high-potential approach for explaining many within- and between-group dynamics relevant to many social science inquiries. For agent-based social simulation, having such broad explanations formalised provides a valuable ability to contribute and connect to, and between, social science domains. This has been picked up by many and has over the last years gained momentum as a shared focus for formalisation (e.g. siam-network.online). Preliminary reviews also show that SIA formalisations often reflect very different ways of interpreting SIA mechanisms [26]. In our collaboration, as an agent-based social simulation modeller and a psychologist specialised in SIA, we seek to formalise SIA for a specific domain (pedestrian crowd models); however, our considerations and decisions can be helpful to others when applied to their own case or domain. For that reason, we shared our decisions and reasoning about representing identity (personal and social), salience, social identification and, finally, the way social identity influences behaviour.

We have only just started to tackle the many aspects of SIA and are very much aware that there is much more that can and should be unpacked. Our immediate next step will, for instance, focus on the role of the presence of an outgroup: how perceiving an outgroup increases social identification with one's own group via the meta-contrast ratio and thereby influences behaviour. We also aim at a more specific description of the reinforcing mechanisms of salience and social identification. Are slow and fast changes distinguishable, for instance in how important a social identity is to you overall, versus how it may rise in the moment and return to a baseline after an event or incident? At the same time, we are keen to have a discussion at this level of detail with peers during the conference, and are open to reconsidering or refining what we have so far.

We hope to have enabled a conversation and to provide support in formalising SIA. We feel that our positionality as interdisciplinary scientists in psychology and modelling, engaging with SIA and ABMs, gives us a unique position to push the formalisation further by joining forces. This does not mean that this is THE way to formalise; others will and may do this differently. However, it benefits anyone interested in formalising SIA, be it for their own work or to be critical of ours or others'.
References

1. Moussaïd, M., Helbing, D., Theraulaz, G.: How simple rules determine pedestrian behavior and crowd disasters. Proc. Natl. Acad. Sci. 108, 6884–6888 (2011). https://doi.org/10.1073/pnas.1016507108
2. Vizzari, G., Manenti, L., Ohtsuka, K., Shimura, K.: An agent-based pedestrian and group dynamics model applied to experimental and real-world scenarios. J. Intell. Transp. Syst. 19, 32–45 (2014). https://doi.org/10.1080/15472450.2013.856718
3. Turgut, Y., Bozdag, C.E.: Modeling pedestrian group behavior in crowd evacuations. Fire Mater. (2021). https://doi.org/10.1002/fam.2978
4. Drury, J., Cocking, C., Reicher, S.: Everyone for themselves? A comparative study of crowd solidarity among emergency survivors. Brit. J. Soc. Psychol. 48, 487–506 (2009). https://doi.org/10.1348/014466608x357893
5. Drury, J., Reicher, S., Stott, C.: Transforming the boundaries of collective identity: from the 'local' anti-road campaign to 'global' resistance? Soc. Mov. Stud. 2, 191–212 (2010). https://doi.org/10.1080/1474283032000139779
6. Stott, C., Reicher, S.: Mad Mobs and Englishmen?: Myths and realities of the 2011 riots. Constable and Robinson, London, UK (2011)
7. Templeton, A., Neville, F.: Modeling collective behaviour: insights and applications from crowd psychology. In: 5th ed. Springer International Publishing, pp. 55–81 (2020)
8. Sawyer, R.K.: Social explanation and computational simulation. Philos. Explor. 7, 219–231 (2004). https://doi.org/10.1080/1386979042000258321
9. Reicher, S.D., Spears, R., Haslam, S.A.: The Social Identity Approach in Social Psychology, pp. 45–62. SAGE Publications Ltd (2010)
10. Hopkins, N., Reicher, S.: Mass gatherings, health, and well-being: from risk mitigation to health promotion. Soc. Iss. Policy Rev. 15, 114–145 (2021). https://doi.org/10.1111/sipr.12071
11. Cocking, C., Drury, J.: Talking about Hillsborough: 'Panic' as discourse in survivors' accounts of the 1989 football stadium disaster. J. Community Appl. Soc. 24, 86–99 (2014). https://doi.org/10.1002/casp.2153
12. Drury, J., Carter, H., Cocking, C., et al.: Facilitating collective psychosocial resilience in the public in emergencies: twelve recommendations based on the social identity approach. Front. Public Health 7, 141 (2019). https://doi.org/10.3389/fpubh.2019.00141
13. Adrian, J., Bode, N., Amos, M., et al.: A glossary for research on human crowd dynamics. Collective Dyn. 4, A19–13 (2019). https://doi.org/10.17815/cd.2019.19
14. Templeton, A., Drury, J., Philippides, A.: From mindless masses to small groups: conceptualizing collective behavior in crowd modeling. Rev. General Psychol. 1–16 (2015). https://doi.org/10.1037/gpr0000032
15. Tajfel, H., Turner, J.C.: An integrative theory of intergroup conflict. In: Worchel, S. (ed.) Austin WG, pp. 33–47. Brooks/Cole, Monterey (1979)
16. Turner, J.C., Hogg, M.A., Oakes, P.J., et al.: Rediscovering the Social Group: A Self-Categorization Theory. Basil Blackwell, Oxford (1987)
17. Reicher, S.D.: Mass action and mundane reality: an argument for putting crowd analysis at the centre of the social sciences. Contempor. Soc. Sci. 6, 433–449 (2011). https://doi.org/10.1080/21582041.2011.619347
18. Alnabulsi, H., Drury, J.: Social identification moderates the effect of crowd density on safety at the Hajj. Proc. Natl. Acad. Sci. 111, 9091–9096 (2014). https://doi.org/10.1073/pnas.1404953111
19. Novelli, D., Drury, J., Reicher, S., Stott, C.: Crowdedness mediates the effect of social identification on positive emotion in a crowd: a survey of two crowd events. PLoS ONE 8, e78983 (2013). https://doi.org/10.1371/journal.pone.0078983
20. Templeton, A., Drury, J., Philippides, A.: Walking together: behavioural signatures of psychological crowds. Royal Soc. Open Sci. 5, 180172–180214 (2018). https://doi.org/10.1098/rsos.180172
21. Drury, J., Brown, R., González, R., Miranda, D.: Emergent social identity and observing social support predict social support provided by survivors in a disaster: solidarity in the 2010 Chile earthquake. Eur. J. Soc. Psychol. 46, 209–223 (2016). https://doi.org/10.1002/ejsp.2146
22. Novelli, D., Drury, J., Reicher, S.: Come together: two studies concerning the impact of group relations on personal space. Brit. J. Soc. Psychol. 49, 223–236 (2010). https://doi.org/10.1348/014466609x449377
23. Templeton, A., Drury, J., Philippides, A.: Placing large group relations into pedestrian dynamics: psychological crowds in counterflow. Collect Dyn. 4, 1–22 (2020). https://doi.org/10.17815/cd.2019.23
24. Roccas, S., Brewer, M.B.: Social identity complexity. Pers. Soc. Psychol. Rev. 6, 88–106 (2007). https://doi.org/10.1207/s15327957pspr0602_01
25. von Sivers, I., Templeton, A., Künzner, F., et al.: Modelling social identification and helping in evacuation simulation. Safety Sci. 89, 288–300 (2016). https://doi.org/10.1016/j.ssci.2016.07.001
26. Scholz, G., Eberhard, T., Ostrowski, R., Wijermans, N.: Social identity in agent-based models—Exploring the state of the art. In: Advances in Social Simulation, Proceedings of the 15th Social Simulation Conference, pp. 59–64 (2021)
Developing a Stakeholder-Centric Simulation Tool to Support Integrated Mobility Planning

Diego Dametto, Gabriela Michelini, Leonard Higi, Tobias Schröder, Daniel Klaperski, Roy Popiolek, Anne Tauch, and Antje Michel

Abstract Simulation tools aimed at enhancing cross-sectoral cooperation can support the transition from a traditional transport planning approach based on predictions towards more integrated and participatory urban mobility planning. This shift entails a broader appraisal of urban dynamics and of transformations in the policy framework, capitalizing on new developments in urban modelling. In this paper, we argue that participatory social simulation can be used to address these emerging challenges in mobility planning. We identify the functionalities that such a tool should have to support integrated mobility planning. Drawing on a transdisciplinary case study situated in Potsdam, Germany, we use interviews and workshops to elicit stakeholders' needs and expectations and present the requirements of an actionable tool for practitioners. As a result, we present three main challenges for participatory, simulation-based transport planning: (1) enhancement of the visioning process by testing stakeholders' ideas under different scenarios and conditions in order to visualise complex urban relationships; (2) promotion of collective exchange as a means to support stakeholder communication; and (3) increasing credibility through early stakeholder engagement in model development. We discuss how our participatory modelling approach helps us to better understand the gaps in the knowledge of the planning process and present the coming steps of the project.

Keywords Sustainable urban mobility planning · Integrated planning approach · Participatory modelling
D. Dametto (B) · G. Michelini · L. Higi · T. Schröder · D. Klaperski · R. Popiolek · A. Tauch · A. Michel
Fachhochschule Potsdam, Institut für Angewandte Forschung Urbane Zukunft, Kiepenheuerallee 5, 14469 Potsdam, Germany
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Czupryna and B. Kamiński (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_6
1 Introduction¹

Challenges to city life brought about by socio-demographic development and technological innovation disrupt established governance processes and require a more integrated perspective to find adequate solutions. Under those conditions, stakeholders involved in urban and mobility planning are increasingly confronted with high levels of uncertainty. Tools to understand the complexity of urban life and to envision common goals among the actors involved are key to supporting urban planning. However, the adoption of such tools in planning practice is limited due to their poor fit to users' needs and expectations [1], to policy conflicts and administrative constraints [2], as well as to their low usability and poor user experience [1, 3].

In this paper, we focus on participatory modelling as a pathway to overcome these limitations and assist stakeholders when facing complex and uncertain urban decisions. Rather than predicting the future state of the system, participatory modelling aims at understanding complex issues, envisioning future options, and assisting and training stakeholders [4]. Drawing on a transdisciplinary case study situated in the North of the city of Potsdam, Germany, the project SmartUpLab develops a tool for integrated urban mobility planning based on a co-created agent-based model that integrates mobility data, GIS data, as well as implicit stakeholder knowledge (in the following: the tool), embedded in supporting participatory methods (in the following: the methods). By stakeholders, we refer to individuals, groups or organisations (possibly representing citizens) that are affected by and/or able to affect sustainable urban mobility planning. In this context, this paper addresses the question of which functionalities such a tool should have in order to support practitioners in integrated mobility planning. By conducting expert interviews and participative workshops, we identified role-specific needs and expectations of the stakeholders regarding urban mobility models as well as administrative, political and cultural barriers to their adoption. We describe the partial results of our research and define the main guidelines for our tool development as well as the underpinnings for our participatory methods.

In the following sections we present an overview of the current state of modelling research in relation to mobility planning and its policy framework, followed by a summary of our project goals, methodology and case study. We then introduce the results and key takeaways for the development of an actionable tool for practitioners in the form of three core challenges. Lastly, we discuss the next steps of the project, including the translation of these challenges into services and features.
¹ We would like to thank the reviewers for their thoughtful comments and efforts towards improving our manuscript. We have addressed the suggestions in paragraphs 2, 4, 5 and 6.
2 Tools and Methods for Integrated Urban Mobility Planning

Planning and modelling communities push for a new generation of transport models able to extend the scope of transport planning beyond traffic flow assessments and to capture the interactions between land use and mobility [5–7], and between mobility demand and its psychological and social determinants [8]. Rather than delivering predictions as in the 'predict and provide' approach [9], the new generation of models is supposed to support planners and stakeholders in exploring different scenarios, fostering their capacity for conditional thinking [10]. This approach contrasts with concepts of big data, "digital twin cities" and "smart cities", which, by building on a seemingly neutral technological objectivity, divert attention from how strategy-making is actually conducted in practice [11]. Understanding models as "representations of shared knowledge" [12], the ComMod paradigm provides a conceptual framework to address strategy-making as a dynamic, socially embedded and interactive process [13] and, by doing so, to develop more communicative, transdisciplinary and learning-oriented tools that support the debate and facilitate the integration of different forms of local knowledge, political opportunities and constraints together with broad urban-regional dynamics [14, 15].

This new generation of models bolsters a shift from traditional transport planning towards an integrated mobility planning approach, which focuses on envisioning futures for city development while looking into the inherent dynamics of the city. Integrated urban mobility planning therefore constitutes a third path between top-down planning approaches, which impose a configuration on the city dynamics, and bottom-up ones, which build on an extensive monitoring system and conceive planning as an adaptive response to changes [16, 17]. In this context, the "Guidelines for developing and implementing a Sustainable Urban Mobility Plan" (SUMP) [18] provide a framework that places the participation of stakeholders and citizens at the cornerstone of the planning process.² However, as discussed in this paper, more needs to be done in order to involve stakeholders actively both in the modelling and in the digital tool-based planning process itself. This requires a stakeholder-centric approach to tool development and the definition of an ancillary set of participatory methods suited to the use case.

SUMP organises the planning process into four main phases. After an initial (1) preparation and analysis phase that leads to the identification of problems and opportunities, it fosters an (2) integrated approach to strategy development, coordinating a wide range of stakeholders to develop a common vision. Subsequently, (3) the operative phase starts, and the measures are evaluated and selected.
² So far, almost 830 SUMPs have been finalized at the European level and 100 more are under preparation [19]; in Germany, most municipal transport strategies are oriented towards the SUMP [20]. This trend is expected to continue, as the European Court of Auditors has suggested making the existence of a SUMP a mandatory requirement for access to EU funding for urban mobility investments [19].
Lastly, (4) the implementation of measures takes place alongside a systematic monitoring system that allows for signal detection and real-time analysis.

While many modelling platforms such as MATSim [21] and SUMO [22] are available for forecasting future traffic flows and analytically evaluating different mobility measures, few tools have been developed for systematically addressing stakeholder coordination. Previous studies [23–25] show that digital tools for participatory city development can also be successfully deployed to encourage the discussion process, to illustrate the complexity of different scenarios of urban development and to make these scenarios comprehensible through visualisation. Agent-based modelling (ABM) allows for defining specific behavioural rules of the individuals constituting complex systems at the micro-level and for investigating the resulting effects and characteristics at the meso- and macro-level, both in abstract and in spatially explicit modelling environments [26]. Participatory ABM approaches that involve stakeholders at different stages of the modelling process and of model interaction pursue a wide range of resulting advantages [12]: amongst others, improving the models' adequacy by integrating stakeholders' input and generating a common understanding of complex systems and challenges among the participating stakeholders by co-creating relevant parts of the model. Because mobility systems comprise heterogeneous sets of actors with specific behavioural rules in spatial environments, giving rise to emergent patterns and phenomena, communicative urban mobility planning represents an appropriate field of application for participatory ABM approaches.
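To illustrate the micro-to-macro logic described above, the following deliberately minimal sketch (not the SmartUpLab model) lets simulated commuters choose a travel mode from simple individual rules and reports the modal split that emerges at the aggregate level under two service scenarios. The number of agents, the distance threshold, the car-ownership rate and the bus-frequency rule are all illustrative assumptions.

```python
import random

random.seed(42)

class Commuter:
    """An agent with a simple micro-level behavioural rule for mode choice."""
    def __init__(self):
        self.distance_km = random.uniform(0.5, 15.0)  # assumed home-work distance
        self.owns_car = random.random() < 0.6          # assumed car-ownership rate

    def choose_mode(self, buses_per_hour):
        if self.distance_km < 2.0:
            return "walk"
        if not self.owns_car or buses_per_hour >= 6:
            return "public transport"
        return "car"

def modal_split(agents, buses_per_hour):
    """Macro-level pattern emerging from the individual choices."""
    counts = {}
    for agent in agents:
        mode = agent.choose_mode(buses_per_hour)
        counts[mode] = counts.get(mode, 0) + 1
    return {mode: round(n / len(agents), 2) for mode, n in counts.items()}

agents = [Commuter() for _ in range(1000)]
print(modal_split(agents, buses_per_hour=4))  # baseline service level
print(modal_split(agents, buses_per_hour=8))  # scenario: denser bus service
```

Even a toy rule set like this makes the scenario logic tangible: changing a single parameter that stakeholders can relate to (here, bus frequency) shifts the emergent modal split, which is the kind of conditional, "what if" reasoning the new generation of models is meant to support.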
3 Previous Work

Any planning process faces a fundamental tradeoff between the need for well-founded assessments and evaluations based on realistic predictions and the danger of being excessively time-consuming, costly and difficult to understand [27]. Addressing the resulting requirements, a frugal approach [28, 29] was chosen in previous research at the authors' institute, resulting in the development of a participatory modelling toolbox for small and mid-sized towns in the project PaSyMo [30]. A PaSyMo toolbox prototype comprising agent-based models, survey tools, a serious gaming concept and a mobile interactive simulation table was tested in the town of Eberswalde and in the neighborhood of Schlaatz, Potsdam, Germany. Observations included an increased motivation of participants to take part in communicative planning formats. The toolbox also provided a better overview of the social networks and their dynamics within neighborhoods undergoing re-development. The participants expressed the need to integrate more empirical data for model validation, which was not possible due to the small extent of the workshops implemented [30]. Ongoing research based on the PaSyMo toolbox focuses on future population development in the town of Luckenwalde, Germany, involving the local planning authority and other stakeholders [31].
Building on the results of these previous participatory modelling studies, we aim to identify a ‘happy medium’ [27] for our tool development, balancing (a) complexity and intelligibility and (b) discursive strategy development and quantitative assessment. Our goal is to develop a stakeholder-centric and user-oriented tool that empowers social actors to collectively envision paths for urban development by focusing on the discursive dimensions of the planning process.
4 Materials and Methods

4.1 Case Study: Bornstedt, Potsdam

Our case study is located in the neighborhood of Bornstedt, in the North of the city of Potsdam, Brandenburg, Germany. By means of statistical data and official documents, as well as through exchange with other research projects addressing mobility issues in the same geographical area [32, 33], we identified the neighborhood's main mobility-related challenges. After a decline and demographic stagnation as a consequence of the German reunification, the state capital Potsdam is now growing rapidly [34]. Like other neighborhoods, Bornstedt is booming and suffering from the effects of fast-encroaching urban sprawl and through traffic. Geographical constraints and the historical development of this suburban residential area translate today into a last-mile problem for public transport [34]. In addition, it is expected that in the coming years a household transition from families with children to older couples will take place, increasing demotorization and transforming mobility behavior [35]. These changes will result in new requirements for planning to meet the sustainability goals.
4.2 Sprint and Agile Project Management

The main methodological challenge for a participatory modelling project lies in interacting with the social process specific to the system under study. Modelers must commit to following the needs and proposals of the stakeholders and to accepting the dynamics of the social process surrounding the participatory effort [36]. For this purpose, we combine our transdisciplinary case study with an agile project management approach. By structuring the development over several sprints, this approach allows for early implementation of feedback, stakeholder-oriented goal setting and a project design suited to experimenting with different paths and solutions [37]. Each sprint is conceived as a cyclical process with a concrete outcome (e.g. the definition of a scenario or of a participatory method, or the development of a prototype) that is nonetheless embedded in an open research process.
4.3 Tool and Data Warehouse

To enable early stakeholder feedback, we developed an alpha version of the model covering two main planning issues: land use and mobility. We used the GAMA platform, which allows for a quick integration of GIS and OSM data into agent-based models (ABM) and provides an interactive simulation environment [38]. Moreover, we integrated previous developments from the PaSyMo project (see above) and from the GAMA community itself [39]. As a first step, we set up a data warehouse including relevant mobility and demographic data. We identified key areas where information was missing and applied GIS-based procedures and data analysis techniques to close the gaps, e.g. for estimating potential parking spaces and school demand [40]. By doing this, we developed a set of features to be manipulated by the stakeholders while inspecting and testing mobility concepts under different scenarios. The use of ABMs offers possibilities that go beyond dynamic data visualisation: end-users can adjust model parameters, integrate typical mobility patterns and hence define an environment for experimenting, visualising and communicating new propositions and for exploring future scenarios in real time. Moreover, since the GAMA platform is not specifically designed for end-users [41], we developed a user-centered interface with the specific parameters and indicators defined together with our stakeholders (Fig. 1).
Fig. 1 The alpha version of the interface of the SmartUpLab City Tool (Source: own research and development)
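As an illustration of the kind of GIS-based gap filling mentioned above, the snippet below estimates potential on-street parking capacity from a road layer. The file name, attribute names, road categories, projection and the rule of thumb of one space per 6 m of kerb are all assumptions made for this sketch; they do not reproduce the procedure actually used in the project [40].

```python
import geopandas as gpd

# Hypothetical OSM-derived road layer for the study area; reproject to a
# metric CRS (UTM zone 33N) so that geometry lengths are in metres.
roads = gpd.read_file("bornstedt_roads.shp").to_crs(epsg=25833)

# Keep only road categories where on-street parking is plausible.
parkable = roads[roads["highway"].isin(["residential", "living_street"])].copy()

# Rough capacity estimate: parking on both sides, one space per 6 m of kerb.
parkable["parking_spaces"] = (2 * parkable.geometry.length / 6.0).round()

print(int(parkable["parking_spaces"].sum()), "potential on-street parking spaces")
```

Estimates of this kind are coarse, but they give stakeholders a defensible starting value for parameters that would otherwise remain blank in the data warehouse.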
4.4 Expert Interviews and Stakeholder Workshop

In order to support the planning process, we engaged with relevant stakeholders holding key roles, skills and expertise in the urban mobility planning of the North of Potsdam. Following Barreteau et al. [36], we view the tool as a boundary object that provides a basis for discussion and enhances interactions between stakeholders as well as between stakeholders and researchers. Therefore, each participative instance is treated as an opportunity for prototype co-creation.

We first conducted nine expert interviews [42] with decision-makers, city planners, researchers, data providers, and mobility and public transport providers. We applied a flexible interview guideline³ and then conducted a qualitative data analysis based on Grounded Theory [43], supported by the software Atlas.ti 9 [44]. With the aim of increasing inter-observer correspondence, two researchers were involved in the coding process. Both researchers separately undertook a first round of open coding and then built the codebook in consensus meetings. Through iterative inductive-abductive rounds of open, focused and theoretical coding, we were able to integrate the challenges, needs and requirements present in the experts' discourse into the design of our participatory modelling approach. The interviews also provided us with insight into the general planning competences that our model should support, which translated into the specification of the original generic prototype.

Following the interview analysis, we organised an online stakeholder workshop in January 2021. A group of seven representatives from mobility research, private and public transport providers and energy companies engaged in a two-and-a-half-hour exchange. The first part of the workshop was aimed at validating the interview results by identifying common challenges in order to increase group awareness and foster cohesion. In the second part of the workshop, we presented the alpha version of the model following the interview findings and examined it in the group by means of a structured discussion. Further feedback was obtained through a closed questionnaire. For prototype improvement, the workshop provided the specific topic as well as the validation of the geographic area to be covered by the model. Later, in exchange rounds with experts of the public transport provider, a joint STEEP analysis delivered the relevant factors influencing the agents' behaviour and the key indicators for the simulation.
³ The interviews took place both on-site and online between June and July 2020, according to personal preferences and the restrictions of the COVID-19 pandemic. The conversations were held in German; the transcripts quoted in this paper are our own translation.
5 Preliminary Results: Identifying the Key Challenges

Through the expert interviews, workshops and exchange rounds, we gained insight into the current state of the planning process in the city of Potsdam and we identified within the experts' discourses three major planning challenges that our tool should tackle, as described in this section.
5.1 Challenge 1: Enhancing Collective Visioning

The visioning process that confronts different points of view with future scenarios, mentioned as the second phase of SUMP, is in practice rarely developed and validated collectively. Most cities do involve stakeholders, but the degree of involvement can vary significantly, and it is usually limited to the diagnosis and analysis of mobility problems [45]. This pattern seems to match the case of Potsdam: the experts declared that the long-term strategy objectives are defined top-down. Except for the political decision-makers and the city planners, no other stakeholder participated in the preparation of the current urban development concept for transport (StEK-Verkehr) [46, 47], and the city planners emphasised that their role there is to offer analysis to the administrative and political levels. Meanwhile, the representative of the public transport company considered their participation in the preparation of the local transport plan (Nahverkehrsplan) to be rudimentary [48]. Besides this local transport plan, the policy portfolio of Potsdam includes many different sectoral plans. However, the limited integration of stakeholders in the definition of the long-term goals raises difficulties in the coordination of the sectoral plans. This turns the planning process into "a matter of weighing things up", since "everyone has different interests", and the elaboration of an overall vision resembles a "give and take" more than a deliberative process [49].

Acknowledging the gap between the idealized planning process described by SUMP and current practice, we focused on the organisational and cultural barriers that limit the collective envisioning process. In this regard, the city planners identified the political level as "sometimes contradictory" around fundamental issues for mobility. Such contradictions and the lack of a shared understanding are "of course also carried down" to the operational level [47]. Faced with these conflicts, the political decision-maker indicated that planning requires tools that support the process [46] by making different ideas more comprehensible: "I don't think we would entrust the creativity for proposed solutions to software, but rather to us, who know the city and are people. I think that you can play these ideas that you have, which may also be crazy ideas, in a model (…) to see how useful the idea is" [46]. Furthermore, as one of the researchers mentioned, such a tool should also enable the consolidation of (un)expected visions: "(…) not this functional logic of showing you in any case how it will be, now you know, now you can make your policy, but rather show improbable futures. Improbable, but desirable futures" [49].
In short, our first challenge is to support stakeholders when presenting their ideas to others, fostering organisational change towards an integrated planning approach. In order to strengthen the envisioning process, the tool should be conceived as a playing field where the geographical and legal constraints on planning, the current mobility offers and the key demographic characteristics of the local population, as well as their interplay, are faithfully represented. Within such a realistic playing field, new mobility ideas can be formalised, implemented and tested under different conditions, allowing stakeholders to experiment with new mobility concepts (such as on-demand buses, mobility hubs, and sharing and pooling offers) that are not yet implemented in the area and that need to be formalized and co-modeled together with the stakeholders.
5.2 Challenge 2: Supporting Communication

Policy coordination is needed to ensure consistency at the level of timing, spatial scope and implementation of mobility planning processes [18]. The current policy coordination in Potsdam takes the form of regular bilateral meetings, for example weekly encounters between the city transport planners and the public transport company, or yearly meetings between the city transport planners and the sharing-mobility providers. According to our interviews, this raises communication problems. As a representative of the local public transport operator stated, "communication is the most important thing in this topic. We actually have two points: one is the future planning, but the other is also the current construction process" [48]. In a city like Potsdam, urban planning requires the participation of the same stakeholders (often the same people) who draw on the same financial resources. A tool that assists the stakeholders in tackling "this coordination among each other", and that "simplifies" the communication among the different actors, is yet to be developed. Such a tool should "bring in the colleagues a lot" [48], that is, integrate the specific knowledge of the experts of the different sectors. Next to this horizontal integration, the planners insist on a better vertical integration that intertwines the different hierarchies of decision-making throughout all stages of the planning process [48, 50]. For this purpose, the tool should support the planning process not only at the level of data, but also as a platform for communication, the aforementioned exchange of ideas. As summarized by one of the researchers, "the big innovation would not be the tool itself, but much more the interaction among stakeholders" [51]. In this context, urban planning is a reciprocal process, "a field of tension where you have to find the right language in the first place". There is a discursive dimension that consists of the explanation of planning options and the underlying values in a transdisciplinary setting. As such, the communication process involves an interpretative process: "it is actually a greater challenge to first understand exactly which constraints lead to which decisions and which pain-points and which levers are behind them" [49].
As a consequence, our second challenge is to develop a tool that, in order to promote the exchange of ideas, builds on the learning processes that take place during planning. This consists of synthesising different datasets and the relations among them in order to highlight relationships such as those between demographic development and mobility demand, or between land use and traffic flows. By doing this, the stakeholders should be able to identify the constraints, assumptions and pain-points that lie at the heart of their ideas. Combining such a tool with a set of participatory methods for leading stakeholders through the whole process can eventually stimulate discussion about the mobility concepts and facilitate the mutual presentation of ideas.
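A very small, hypothetical example of this kind of dataset synthesis is sketched below: a demographic table and a mobility-demand table are joined per district and their correlations inspected, so that a relationship such as "more cars per inhabitant, more car trips, fewer public transport trips" becomes visible and discussable. District names, column names and all values are invented for the illustration; they are not project data.

```python
import pandas as pd

# Invented demographic indicators per (fictitious) statistical district.
demography = pd.DataFrame({
    "district": ["District A", "District B", "District C"],
    "share_households_with_children": [0.38, 0.29, 0.22],
    "cars_per_1000_inhabitants": [310, 350, 420],
})

# Invented mobility-demand indicators for the same districts.
mobility = pd.DataFrame({
    "district": ["District A", "District B", "District C"],
    "daily_car_trips_per_capita": [1.4, 1.7, 2.1],
    "daily_pt_trips_per_capita": [1.1, 0.9, 0.6],
})

merged = demography.merge(mobility, on="district")
# Pairwise correlations give a first, discussable hint at the relationships
# between household structure, car ownership and mobility demand.
print(merged.drop(columns="district").corr().round(2))
```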
5.3 Challenge 3: Increasing Credibility

One of the stakeholders pointed out during the interview that a model must have "good and reliable data", but these data are represented according to different and "not necessarily explicit" logical assumptions, which raises ethical questions when making decisions based on the model [49]. As the political decision-maker phrased it: "how do you create transparency in this real engineering work?" [46]. While a tool that illustrates different mobility concepts and scenarios could have a big impact in empowering stakeholders and facilitating alignment between them during the envisioning phase of integrated planning, it must walk a thin line between the need for transparency and comprehensibility on the one hand and validity on the other. While validation methods such as backcasting (suggested by one of the stakeholders during the first project workshop) are a valuable means of increasing reliability and trust in the tool, a broader approach is still needed. Enhancing a collective visioning process through digital tools implies a deep cultural change. Hence, the question underlying our third challenge is "how do those tools harmonize—not only in terms of content—with the users? Are these completely new logics of action, logics of operation that have to be learned, and if so, where can they be learned?" [49]. Engaging stakeholders from an early stage of tool development is a path towards increasing credibility. The user-centric approach we propose is, from this perspective, a promising strategy for focusing on stakeholders' needs and, by doing so, increasing their commitment and trust as well as their confidence in the tool.
6 Key Takeaways and Further Steps

Drawing on the premises of the integrated planning approach and the perspectives of the stakeholders, our goal was to identify which functionalities a tool should provide in order to support an integrated mobility planning process. In this context, we observed that the needs of the experts in the mobility planning of Potsdam meet the
issues addressed by the recommendations of SUMP. By engaging with stakeholders in a participatory modelling process, we have attained a better understanding of these issues and translated them into three challenges that provide us with the core guidelines for our tool development:

• The tool should enable the visualisation of complex urban relationships while assisting stakeholders in integrating such considerations in the early phases of planning.
• The tool should take the shape of a platform for collective exchange and for testing stakeholders' ideas under different scenarios and conditions.
• The platform results from supplying the tool with methods to trace back and make explicit the underlying assumptions and the expected outcomes of different mobility concepts.
• Through methods to operationalise innovative mobility concepts, the tool could foster a fruitful discussion that supports coordination and increases the maturity level of those concepts.
• The tool must ensure its validity while at the same time providing transparency and comprehensibility for stakeholder interaction.

These guidelines stand in a relationship of mutual tension: building a realistic platform for visualizing mobility scenarios implies many data sources and interactions that increase the model's complexity, which jeopardises the transparency and acceptance of the model. To a large extent, it is a matter of weighing things up, and the main advantage of our user-centric approach consists precisely in openly thematizing this tension with the stakeholders. At the same time, the generalisability of the findings is limited and should be carefully considered in future work.

In this way, our participatory modelling approach enables us to leverage different types of local knowledge and to address the complexity of urban planning. At the same time, it calls for questioning the underpinnings of our research. On the one hand, the development of the tool itself as a realistic representation of shared knowledge and, on the other hand, the interactions with the model as a meaningful tool to increase cooperation lead us to revise our role as researchers towards empowering stakeholders when tackling sustainable mobility decisions.

In the coming sprints of the project, we will work together with stakeholders to integrate their narratives into quantitative features of the model. We will then test its functionality in this regard in a participatory modelling workshop and define the next sprint loops to increase the acceptance and usability of the simulation tool. We will support this process by preparing a quali-quantitative evaluation tool to assess user acceptance. It is our goal to support stakeholders in communicating their ideas to third parties and to stimulate a collective discussion about the kind of city we want to live in.

Acknowledgements SmartUpLab is funded by the European Regional Development Fund (ERDF, Europäischer Fonds für regionale Entwicklung - EFRE in German), grant number 85037057.
References

1. Russo, P., Lanzilotti, R., Costabile, M.F., Pettit, C.J.: Adoption and use of software in land use planning practice: a multiple-country study. Int. J. Hum.-Comput. Interact. 34(1), 57–72 (2017)
2. Devlin, C.: Digital social innovation and the adoption of #PlanTech: the case of Coventry city council. Urban Plan. 5(4), 59–67 (2020)
3. Billger, M., Thuvander, L., Wästberg, B.S.: In search of visualization challenges: the development and implementation of visualization tools for supporting dialogue in urban planning processes. Environ. Plan. B: Urban Anal. City Sci. 44(6), 1012–1035 (2017)
4. Étienne, M. (ed.): Companion Modelling: A Participatory Approach to Support Sustainable Development. Springer Science & Business Media. Springer, Dordrecht (2013)
5. Van Wee, B.: Toward a new generation of land use transport interaction models. J. Transp. Land Use 8(3), 1–10 (2015)
6. Acheampong, R.A., Silva, E.A.: Land use–transport interaction modeling: a review of the literature and future research directions. J. Transp. Land Use 8(3), 11–38 (2015)
7. Wegener, M.: Land-use transport interaction models. In: Handbook of Regional Science, pp. 229–246 (2021)
8. González-Méndez, M., Olaya, C., Fasolino, I., Grimaldi, M., Obregón, N.: Agent-based modeling for urban development planning based on human needs: conceptual basis and model formulation. Land Use Policy 101, 105110 (2021)
9. Næss, P., Strand, A.: What kinds of traffic forecasts are possible? J. Critical Realism 11(3), 277–295 (2012)
10. Raghothama, J., Meijer, S.: Distributed, integrated and interactive traffic simulations. In: 2015 Winter Simulation Conference (WSC), pp. 1693–1704. IEEE (2015)
11. Johnson, M.G.: City in code: the politics of urban modeling in the age of big data. Open Philos. 3(1), 429–445 (2020)
12. Barreteau, O., Bots, P., Daniell, K., Etienne, M., Perez, P., Barnaud, C., Bazile, D., Becu, N., Castella, J.C., Darè, W., Trebuil, G.: Participatory approaches. In: Simulating Social Complexity, pp. 253–292. Springer, Cham (2017)
13. Bousquet, F., Etienne, M., D'Aquino, P.: Introduction. In: Companion Modelling: A Participatory Approach to Support Sustainable Development, pp. 1–12. Springer, Dordrecht (2014)
14. Vigar, G.: The four knowledges of transport planning: enacting a more communicative, transdisciplinary policy and decision-making. Transp. Policy 58, 39–45 (2017)
15. Diller, C., Hoffmann, A., Oberding, S.: Rational versus communicative: towards an understanding of spatial planning methods in German planning practice. Plan. Pract. Res. 33, 244–263 (2018)
16. Batty, M.: Inventing Future Cities. MIT Press (2018)
17. Perez, P., Banos, A., Pettit, C.: Agent-based modelling for urban planning current limitations and future trends. In: International Workshop on Agent Based Modelling of Urban Systems, pp. 60–69. Springer, Cham (2016)
18. Rupprecht Consult (eds.): Guidelines for developing and implementing a Sustainable Urban Mobility Plan, 2nd edn., Köln (2019). https://www.eltis.org/mobility-plans/sump-guidelines. Accessed 03 Aug 2021
19. Werland, S.: Diffusing sustainable urban mobility planning in the EU. Sustainability 12(20), 8436 (2020)
20. Horn, B., Kiel, T., von Lojewski, H.: Sustainable urban mobility for all: agenda for a mobility transition from a municipal standpoint. A position paper by the Association of German Cities (2021). https://www.staedtetag.de/files/dst/docs/Publikationen/Publicationsin-English/position-paper-sustainable-urban-mobility-for-all.pdf. Accessed 03 Aug 2021
21. Horni, A., Nagel, K., Axhausen, K.W. (eds.): The Multi-Agent Transport Simulation MATSim. Ubiquity Press (2016)
22. Lopez, P.A., Behrisch, M., Bieker-Walz, L., Erdmann, J., Flötteröd, Y.P., Hilbrich, R., Lücken, L., Rummel, J., Wagner, P., Wießner, E.: Microscopic traffic simulation using SUMO. In: 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 2575–2582. IEEE (2018)
23. Louen, C., Horn, D.: Simulation game for future mobility – support tool for the discussion process about scenarios of future mobility in SUMP processes. In: Real Corp 2014 – Plan it smart! Clever Solutions for Smart Cities. Proceedings of the 19th International Conference on Urban Planning, Regional Development and Information Society, pp. 525–531. CORP – Competence Center of Urban and Regional Planning (2014)
24. Shrestha, R., Köckler, H., Flacke, J., Martinez, J., Van Maarseveen, M.: Interactive knowledge co-production and integration for healthy urban development. Sustainability 9(11), 1945 (2017)
25. Flacke, J., Shrestha, R., Aguilar, R.: Strengthening participation using interactive planning support systems: a systematic review. ISPRS Int. J. Geo Inf. 9(1), 49 (2020)
26. Crooks, A.T., Heppenstall, A.J.: Introduction to agent-based modelling. In: Agent-Based Models of Geographical Systems, pp. 85–105. Springer, Dordrecht (2012)
27. Rudolph, F., Black, C., Glensor, K., Hüging, H., Lah, O., McGeever, J., Mingardo, G., Parkhurst, G., Plevnik, A., Shergold, I., Streng, M.: Decision-making in sustainable urban mobility planning: common practice and future directions. World Transp. Policy Pract. 21(3), 54–64 (2015)
28. Basu, R.R., Banerjee, P.M., Sweeny, E.G.: Frugal innovation: core competencies to address global sustainability. J. Manag. Glob. Sustain. 1(2), 63–82 (2013)
29. Bhatti, Y.A.: What is frugal, what is innovation? Towards a theory of frugal innovation. SSRN Electron. J. (2012)
30. Priebe, M., Schröder, T., Szczepanska, T., Higi, L.: Participatory modeling as a tool for communicative planning: a frugal approach. Currently under review (2021)
31. Higi, L., Schröder, T., Michel, A., Tauch, A.: PaSyMo: Developing and Testing a Participatory Modeling Toolbox for Urban Systems. SUMO Conference, From Traffic Flow to Mobility Modeling (2020). https://www.youtube.com/watch?v=W1OVhEECDYY. Accessed 03 Aug 2021
32. MaaS4P: Intelligente und automatisierte Mobilität in Potsdam (2021). https://www.fh-potsdam.de/forschen/projekte/projekt-detailansicht/project-action/maas4p-intelligente-und-automatisierte-mobilitaet-in-potsdam/. Accessed 03 Aug 2021
33. MaaS L.A.B.S. (2021). https://www.maas4.de/. Accessed 03 Aug 2021
34. Ortgiese, M., Berkes, C., Recknagel, C.: Dokumentation Konzeptphase Mobility as a Service for Potsdam (MaaS4P). Potsdam: FHP (2019)
35. Aguilera, A., Cacciari, J.: Living with fewer cars: review and challenges on household demotorization. Transp. Rev. 40(6), 796–809 (2020)
36. Barreteau, O., Bousquet, F., Étienne, M., Souchère, V., d'Aquino, P.: Companion modelling: a method of adaptive and participatory research. In: Companion Modelling: A Participatory Approach to Support Sustainable Development, pp. 13–40. Springer, Dordrecht (2014)
37. Knapp, J., Kowitz, B., Zeratsky, J.: Sprint. Wie man in nur fünf Tagen neue Ideen testet und Probleme löst. Redline Verlag, München (2017)
38. Taillandier, P., Grignard, A., Marilleau, N., Philippon, D., Huynh, Q.N., Gaudou, B., Drogoul, A.: Participatory modeling and simulation with the GAMA platform. J. Artif. Soc. Soc. Simul. 22(2) (2019)
39. Taillandier, P.: Traffic simulation with the GAMA platform. In: Eighth International Workshop on Agents in Traffic and Transportation, p. 8 (2014)
40. Popiolek, R., Dametto, D., Tauch, A.: Geodaten für Mobilitätssimulationen – Verarbeitung von Open Geodata zum Einsatz im Simulationstool GAMA unter Verwendung von QGIS. In: AGIT – Journal für Angewandte Geoinformatik, p. 7 (2021)
41. Higi, L., Schröder, T., Dametto, D., Michelini, G., Michel, A., Tauch, A.: PaSyMo + SmartUpLab: developing and testing a participatory modelling toolbox for urban systems. In: GAMA days 2021 (2021). https://www.irit.fr/GamaDays2021/wp-content/uploads/2021/06/GAMAdays-2021-abstracts.pdf. Accessed 19 Aug 2021
42. Meuser, M., Nagel, U.: ExpertInneninterview: Zur Rekonstruktion spezialisierten Sonderwissens. In: Becker, R., Kortendiek, B. (eds.) Handbuch Frauen- und Geschlechterforschung, pp. 376–379. VS Verlag für Sozialwissenschaften, GWV Fachverlage GmbH, Wiesbaden (2008)
43. Thornberg, R., Charmaz, K.: Grounded theory and theoretical coding. In: Flick, U. (ed.) The SAGE Handbook of Qualitative Data Analysis, pp. 153–183. SAGE, London (2014)
44. Atlas.ti 9 for Mac (2020)
45. Lindenau, M., Böhler-Baedeker, S.: Citizen and stakeholder involvement: a precondition for sustainable urban mobility. Transp. Res. Procedia 4, 347–360 (2014)
46. Interview with a decision-maker, Potsdam, 7th July 2020
47. City transport planner#1, city transport planner#2, Potsdam, 14th July 2020
48. Representative of the public transport provider#1, Potsdam, 16th July 2020
49. Mobility researcher#1, Potsdam, 29th June 2020
50. Representative of the public transport provider#2, Potsdam, 6th June 2020
51. Mobility researcher#2, Potsdam, 30th June 2020
Support Local Empowerment Using Various Modeling Approaches and Model Purposes: A Practical and Theoretical Point of View

Kevin Chapuis, Marie-Paule Bonnet, Neriane da Hora, Jôine Cariele Evangelista-Vale, Nina Bancel, Gustavo Melo, and Christophe Le Page

Abstract Theoretical trends in agent-based modeling (ABM) draw sharp lines that usually limit the expressiveness of models to fit their methodological box: KISS puts the emphasis on parsimonious and tractable models for systematic simulation analysis, KIDS focuses on the use of data for model specification and simulation validation, while KILT highlights the involvement of stakeholders to build representative models and turn simulation into a learning tool. In this proposal, we stress the benefit of breaking these lines and reinstating the various agent-based model purposes, focuses and supports in the particular context of designing transformative ABM. The proposed perspective for bursting methodological boxes is based on the creation of models that mix social actors and issues, building on sub-models that support a variety of theoretical approaches. To back this methodological claim, we detail a 10-year-long modeling effort to represent and support community-based management of renewable resources in the wetlands of the lower Amazon, in the Pará state of Brazil.

Keywords Participatory agent-based model · Transformative agent-based model · Management of the commons · Socio-environmental models · Empowering local populations
K. Chapuis (B) · M.-P. Bonnet
UMR 228 ESPACE-DEV, IRD, University of Montpellier, Montpellier, France
e-mail: [email protected]
URL: https://www.espace-dev.fr/en/home/
N. da Hora · J. C. Evangelista-Vale · G. Melo
Centro de Desenvolvimento Sustentável-CDS, Universidade de Brasília, UnB, Brasília, Distrito Federal, Brazil
N. Bancel · C. Le Page
UMR Sens, CIRAD, Montpellier, France
G. Melo
Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Czupryna and B. Kamiński (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_7
1 Introduction

The agent-based modeling (ABM) approach has a long and rich history in science, with roots in fundamental physics in the 1930s [10] and in theoretical work on complex systems in the mid-twentieth century (e.g. Von Neumann and Schelling). In the late 1980s the field began to emerge as a domain of research in itself, with a growing interest in using this technique to study social dynamics: the seminal work of [1] took the lead of a line of research that focuses on simple models to study the emergence of complex social phenomena, captured by the famous catchphrase Keep It Simple, Stupid (KISS). Less than two decades after this impulse, which had a strong impact on agent-based modeling research, [7] summarized the urgent need to enlarge the theoretical scope of ABM and to embrace the deep evolution of digital data, which had by that time become widely accessible. In this regard, they proposed to also consider ABM design based on available data, from conception to validation through simulation, captured by a proposed update of the KISS acronym to KIDS, for Keep It Descriptive, Stupid. Lately, participatory approaches have pointed out the inability of ABM to produce outcomes that make sense for the actors facing the studied issues. The KILT principle [15] re-introduces contributors as major drivers of model design and evaluation, with the goal of enabling the local population to learn and to build collective responses through participatory modeling.

In this paper, we advocate for the reinstatement of these diverse approaches to unleash the full potential of agent-based approaches in the particular frame of participatory agent-based modeling. We claim that the ability to tackle complex socio-ecological issues and to socially embed a solution based on agent-based modeling and simulation can only be achieved through the synergy of the strengths that each of these classically opposed approaches can offer: KISS proposes efficient ways to systematically explore models, KIDS proposes methodologies to specify and expand models based on empirical data, while KILT proposes procedures to engage local actors in the co-creation and social usability of models. In this article, based on participatory modeling research on common pool resource management (see scope in Sect. 2), we examine how several models covering various methodological approaches, linked to a common thematic and social ground, may empower local populations (Sect. 4). To do so, we describe the agent-based research we carried out during the last 10 years on the collective management of renewable resources in the lower Amazon (Sect. 3).
2 Scope: Modeling and Simulation for Local People Empowerment in Common Pool Resource Management

This section frames the discussion around the ability of models and simulation to empower local people engaged in common pool resource management issues. The first subsection gives a brief idea of the modeling and simulation perspectives that
have been used to leverage local knowledge and understanding, briefly introducing participatory modeling and agent-based simulation for local people empowerment. The second subsection focuses more closely on the issues related to common pool resource management, in particular in floodplains, and briefly introduces the theoretical background associated with participatory modeling in this context.
2.1 Local People Empowerment Using Agent-Based Modeling and Simulation Techniques

Participatory modelling is a rather generic term that is used as an umbrella to refer to several particular schools or researchers using specific names as trademarks of these groups [25]. Participation in the modelling process can take place in various phases and with various degrees of intensity. There is therefore a wide diversity of ways to involve stakeholders and of activities related to this involvement [2].

Companion Modeling Approach. Companion Modeling, also known as ComMod, combines the construction of agent-based models with role-playing games and other tools which provide hands-on learning to support sustainable development [4]. Key to companion modeling is the idea that the modeler figuratively walks together with stakeholders to gain more knowledge or improve collective decision-making [8]. However, companion modeling processes do not have "transformation" or making changes as their explicit objective; rather, they hope to aid this process by enhancing knowledge of how the system works and by accompanying the decision-making process [5].

Transformative Aspects of ABM are largely conceived in terms of the support they can provide to policy makers or, in a broad and fuzzy sense, to decision making [9]. On the one hand, the number of ABMs actually driving or monitoring the evolution of a socio-ecological issue remains fairly low, as has been highlighted by recent uses during the Covid-19 pandemic [21] and in other well-established fields of research, such as energy policy [6] or transport modeling [13]. On the other hand, transformative research aims at providing breakthrough changes either in the scientific conception of social issues or directly in the studied social context (closer to the transformative paradigm) [5]. To bridge the gap between policy-making support and transformative ABM, there is an urgent need to reinstate a socially embedded participatory approach within a more data-based modeling approach that makes it possible to extensively explore the consequences of various ways to cope with socio-ecological issues.
2.2 Modeling and Simulation of Common Pool Resource Management Issues in Floodplains

Wetland ecosystems found in floodplains are particular environments at the interface between the terrestrial and aquatic realms (ecotone), which are naturally very diverse and productive, enabling the development of crops with a short growth cycle, such as rice or corn, and sheltering important fish populations. Thus, all over the world, floodplains have long been occupied by human populations who exploit both the natural richness of the waters, for fish and crustaceans, and the fertility of the soil, for crops. These environments are, however, among the most threatened by current anthropic and climatic pressures (IPBES, 2019). The construction of dykes and dams disrupts the natural flooding regime of rivers, progressively drying up wetlands and reducing soil fertility. These phenomena can be amplified by climate change, which results in an increased recurrence of extreme hydrological events (floods, droughts) or in the drying up of rivers.

Since the 2000s, in connection with plans to restore rivers in order to give back to floodplains their important role of flood attenuation and to revitalize aquatic ecosystems, many models have been developed with the objective of proposing floodplain management plans shared by the various types of stakeholders. Given the very different stakes and points of view on these environments, [20] emphasizes the importance of social learning to achieve viable restoration plans. In this vein, for example, [22] proposed the development of a game allowing stakeholders involved in water management at the watershed level and farmers to explore the consequences of their interactions, while [19] preferred a mediation modeling approach in which stakeholders are led to find a consensus by working together on a mathematical model.
3 Application: The Management of Renewable Resources in the Floodplains of the Lower Amazon

The Curuai floodplain (várzea do Lago Grande do Curuai, see Fig. 1) is a 120 km long and 40 km wide floodplain segment in Pará State, Brazil, distributed across three municipalities: Juruti (West), Óbidos (North) and Santarém (East). It is located on the southern margin of the Amazon river, in front of the town of Óbidos and 900 km upstream from the Atlantic Ocean. It covers an area of about 3850 km², including its local watershed. The floodplain is composed of numerous large and shallow lakes, separated from the Amazon river by narrow levees. The lakes are permanently or temporarily interconnected by channels (paraná) and connected to the Amazon river. The lakes are also connected to the uplands (non-flooded areas) by several small rivers (igarapés) in the South. The extent of the lakes varies greatly (about five-fold) during the hydrological year due to the great seasonal variation of the water level and the very flat relief.

In the Lago Grande do Curuai, there are over 50 floodplain communities composed of ten to a hundred families, based on a traditional occupation
Fig. 1 Geographical location of the Curuai floodplain along the lower Amazon river in the Pará state, north of Brazil. The map displays the major local communities, the lakes during the low-water season and the igarapés that connect them with the uplands
of the territory. On the natural levees, the inhabitants have their residences and grow their crops; this is a private zone for them. The floodplain lakes are very important for fishing and are considered communal property by the residents [18]. The floodplain is also occupied by large ranchers, whose private properties cover a great part of the communities' territories. Despite this traditional occupation, floodplains are officially owned by the federal government [24]. In 2005, the Land Reform and Colonization Institute of Brazil (INCRA) began the implementation of several agroextractivist settlement projects on the lower Amazon to promote the regularization of floodplain lands. This process included the Great Lake of Curuai as an agroextractive settlement, with an area of 290,000 ha, in which the residents received the right to collective use of the natural resources of the area based on a Use Plan constructed in a participatory way [17].

The floodplain inhabitants of Curuai make their living from three main activities: agriculture, fishing and cattle ranching. Among them, fishing provides the main source of protein and a great part of the families' income [12]. Their resource management is based on the Use Plan of the agroextractive settlement, fishing agreements and environmental legislation for the protection of fishing stocks, forming the basis for a co-management system [16] that has become a promising strategy for the sustainable use of Amazon floodplains. However, in the last decades the sustainability of their way of life has been facing several threats generated by a combination of factors: the expansion of commercial fishing, the increase of cattle ranching, and climatic and hydro-climatic changes, a global threat that can amplify the effects of local problems [11, 14, 18]. This ongoing situation generates several social conflicts, for example between fishermen living in the communities and outside fishermen, the latter focused on large-scale commercial fishing (called 'geleiras' by local people), who enter the communal várzea lakes and are claimed not to follow local fishing agreements or even federal regulations. In some communities, there may be conflicts between local fishermen because of the lack of compliance of some members, mostly related to overfishing
practices. In the case of the large ranchers, some are helpful to community members and help to protect várzea lakes, but in other cases, when they do not comply with environmental rules, their cattle may cause damage to the floodplain environment. Mitigating social conflict and environmental degradation is therefore a challenge not only for local people, but also for governments and other stakeholders from civil society seeking to promote the sustainability of fisheries and the environmental conservation of floodplain areas.
4 The VarzeaSaga Set of Models: A Brief History of a 10-Year Long Modeling Process

In this section we present the models we have designed over the past 10 years in the context of the floodplain issues described in the previous section: we tell the history of the VarzeaSaga modeling effort, which began with the VarzeaViva hybrid model, then moved toward the PescaViva board game model, and is continuing with the integrative BRAV-ABM toolbox.

Build Knowledge from Trust, Design Models with Locals

The Clim-FABIAM project (Climate change and Floodplain lake biodiversity in the Amazon basin: how to cope and help with ecologic and economic sustainability), financed by the Biodiversity Research Foundation (2012–2015), mobilized a multidisciplinary team that proposed to study the influence of climate change on aquatic biodiversity in floodplains, to assess how climate change is perceived by local populations and how they adapt, and finally to engage inhabitants in the search for new strategies and practices that could improve their livelihoods while preserving biodiversity. The challenge was to set up a process that would allow for a real dialogue with local actors, confronting scientific models with the perceptions of the inhabitants. We relied on the ComMod approach to progressively integrate the knowledge produced by the researchers and the knowledge of the different types of actors of the territory, in order to put them in discussion. We first established a partnership with FEAGLE, the federation responsible for the management of the Curuai P.A.E., and with a rural family school based in Curuai (C.F.R.), which allowed us to get closer to the communities of the region and propose to them to work together on these issues. A first diagnostic allowed the selection of a micro-region in which to develop the companion modeling activity, comprising 4 communities representative of the dynamics and concerns of the region: a community of fishermen in the várzea; a community at the edge of the lake, where cattle breeding dominates; one 10 km from the lake, on the "trans-lago", the road which connects the whole region, where agriculture is the main activity; and a more distant one, 20 km from the lake, where new lands are being opened. Through participatory activities and interviews conducted by anthropologists, the changes perceived by the local population were discussed. The increase in water levels during floods is one of their major concerns, as it forces them to redeploy their activities on dry land and sometimes even to abandon their homes. However,
the main changes in their lives come from large-scale external pressures, such as industrial fishing, the herds of thousands of head of cattle sent by the fazendeiros (large landowners) of the neighboring municipalities to graze in the várzea, the possible opening of a bauxite mine in their territory, or the construction of a dam upstream. In order to discuss the various results and hypotheses obtained by the teams in the field with local stakeholders, a model was gradually formalized. This model proposes a simplified representation of reality, which highlights the main issues related to adaptation. First, we built a role-playing game called "VarzeaViva" with the students of the rural family school [3]. The 4 communities we worked with were represented by 4 agricultural properties. Each of the 16 players manages a property, which he must develop according to his desires and the constraints of reality: labor, money, cover, livestock and location. One of the collective issues of the game is the circulation of livestock between the várzea and the mainland pastures, forcing players without pasture at the time of the floods to find another player who can rent out his pasture. Game sessions in each of the 4 communities, followed by a session gathering all 4 communities, allowed rich exchanges with the farmers and fishermen, notably about planning problems and the environmental and socio-economic constraints in the long term. During these sessions, the actors spontaneously discussed the impact of their activities on natural resources, but without always succeeding in making the cause-and-effect relationships explicit. In order to better formalize the relationships between human activities and the environment, and to extend the horizon of the game sessions, a hybrid computer model was built as an extension of the game: it consists in having the biophysical processes simulated by the machine while integrating the human activities decided by the players. Based on the same structure of the game that had been validated by the players, this model seeks to specify the impacts of the activities from the researchers' data and to lengthen the simulation timeframe by simulating 4-year steps. The degradation of pastures and the decrease in fish stocks are thus better highlighted.

Focus on Local Actors' Issues to Enlarge ABM Scope

This work also highlighted a latent conflict between livestock activities (in particular the arrival of large herds from landowners) and artisanal fishing, further exacerbated by the development of industrial fishing. Because fishing is a crucial activity for locals (see Sect. 2), the inhabitants have expressed their concern about the decrease in stocks and the near disappearance of certain emblematic species such as the Pirarucu [18]. In other regions of Amazonia (state of Amazonas), the stocks of this species have been restored thanks to a plan regulating fishing in the lakes established by the communities themselves, which also benefited other aquatic and terrestrial species while improving the subsistence resources of the riverside populations. In order to debate these issues with the inhabitants and to favour the development of a regulation plan in the floodplain, we have continued, within the framework of a new project (BONDS), by focusing our work on the question of fishing. We have set up a partnership with Z20, the fishermen's union of the Santarém region, and propose to co-construct with them a role-playing game (PescaViva) that will allow participants to better understand and debate
the interactions between environmental dynamics (degradation of várzea habitats, impacts of upland land-use changes, changes of hydrological regimes), dynamics of resource use (collective management and over-fishing) and dynamics of fish stocks, and to discuss the interest of new fishing agreements and the relevance of the current norms. The PescaViva game is still in its early development, mainly because it has not been possible to benefit from participatory co-design in the context of the pandemic (see the discussion in the conclusion, Sect. 5). It represents an archetypal floodplain in the form of 3 lakes temporarily interconnected with each other and with the river, as depicted in Fig. 2. When the water recedes, the floodplain is mainly covered by natural grassland. The banks of the lakes are more or less occupied by flooded forest, which is conducive to the reproduction and maintenance of the biodiversity of fish species. This will allow us to discuss the interactions between biodiversity and habitat degradation. The lakes and their channels represent the fishing zones to be selected by players. The hydrological dynamics is represented by three game boards: low water (little connection between the lakes and a reduced number of fishing zones), high water (all lakes are connected and most of the plain is flooded) and medium water (the board is in an intermediate stage, as depicted in Fig. 2). The density of fish (and therefore their capture) depends on the water level, which evolves according to the hydrological seasons, i.e. it is lower during the high-water season and higher during the low-water season. The fish stock regenerates during the rising-water season according to a simple logistic law weighted by the level of degradation of the environment. Recruits are integrated into the stock the following year. Twelve players are distributed across 4 communities. Each player initially owns a boat that he moves to the fishing zone of his choice during the game. The cost associated with this fishing depends on the distance between his community and the
Fig. 2 Picture of the game board during a test session of the early stage of game development
chosen fishing area. The level of catch is determined by the fishing capacity of a boat, the density of fish and the number of boats in the fishing area. A game round represents one year and is made up of a fishing decision for each seasonal stage of the floodplain. Once this block of 4 temporal decisions has been made for the first year of the game, the same fishing pattern is reproduced for the 2 following years. Fishing results are given to the players, and they have the ability to change their seasonal fishing patterns in the 4th year of the game. The updated patterns are replicated for 2 more years, and then a final year is played following the same schema. At the end of the 9th year, results are given to the participants, including the fish stock (which has been kept hidden during the game), and the game session is debriefed.

Toward an Inclusive Meta-model to Overstep Methodological Boxes and Enhance the Capabilities of Participatory ABMs

Both of the prior role-playing game hybrid models were co-designed to bring models closer to the actual socio-environmental issues faced by locals. They are meant to be played in situ to increase inhabitants' collective awareness and help them undertake constructive collective actions. However, the outcome of VarzeaViva and PescaViva remains limited to the actors and situations where they have been used, while most of the determinants of biodiversity depletion (fish stock collapse, soil degradation, etc.) operate at the scale of the entire floodplain (e.g. commercial fishery) or even wider (e.g. upstream dams, climate change) and depend upon actors of broader concern (e.g. federal and local state decision makers, IBAMA, INCRA) [16, 17]. To be able to enlarge the impact of our modeling effort, we need to upscale the models to represent the entire area and include more data, in order to represent in a realistic way the behavior and impact of local activities as well as the ecological dynamics linking fish stock, hydrological changes and the degradation of soil and inundated forest. In order to upscale and enlarge the scope of our models, we plan to bring the knowledge from participatory modeling into an ABM toolbox built using the GAMA platform [23]: it encapsulates modules on agent behavior (e.g. where and what to fish) and decisions (e.g. comply or not with fishing rules), collective agreements (e.g. landlocked parcel ownership), ecological dynamics (e.g. fish stock renewal and hydrological cycles) and spatial representation (e.g. lakes organization) co-designed with locals. Such modules focus on particular aspects of socio-ecological issues, each being attached to an abstract agent-based representation (i.e. a species in the GAMA language) of actors and space in the floodplain. In practice, these modules can be reused to build in silico counterparts of the role-playing games they are based upon. In theory, growing an ABM toolbox from co-designed models makes it possible to scale up the participatory focus targeted by the initial models and to enlarge its purpose, embracing more issues at the same time. We summarize the expected outcomes in three main directions:
– Ability to use massive data to specify realistic aspects of simulation models based on GIS or hydrological data
– Ability to systematically re-use players' behavior in role-playing games to calibrate autonomous agent actions and decisions
– Ability to explore potential community-based management strategies at the scale of the entire floodplain, with systematic analysis to validate model outcomes.
The current development stage of the toolbox makes it possible to represent a variety of inhabitant activities, the hydrological cycle with the ebb and flow of the lakes, and fish stock dynamics taking into account soil degradation and the ecological quality of fish habitats. However, further effort is still required to represent such aspects of floodplain livelihoods at the scale of the region. Yet the process of growing such a toolbox from co-designed models promises to enlarge the scope of participatory modeling, opening the methodological box for systematic analysis with data-driven simulation models.
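As a concrete (and deliberately simplified) illustration of the kind of ecological-dynamics module mentioned above, the sketch below expresses the PescaViva-style fish-stock renewal rule (logistic regeneration weighted by habitat degradation, with recruits joining the stock the following year) in plain Python. It is only a sketch based on our reading of the rules described in this section: the actual BRAV-ABM modules are written in the GAMA language, and the class, method and parameter names used here (FishStockModule, carrying_capacity, degradation, and so on) are illustrative assumptions rather than the toolbox's real API.

    # Illustrative sketch (not the actual BRAV-ABM/GAMA code) of an
    # ecological-dynamics module: logistic fish-stock renewal weighted by
    # habitat degradation, with recruits entering the stock one year later.
    class FishStockModule:
        def __init__(self, stock, carrying_capacity, growth_rate):
            self.stock = stock                    # current exploitable biomass
            self.carrying_capacity = carrying_capacity
            self.growth_rate = growth_rate
            self.pending_recruits = 0.0           # recruits join the stock next year

        def regenerate(self, degradation):
            # Rising-water season: degradation in [0, 1] (0 = pristine habitat,
            # 1 = fully degraded) scales the logistic growth term.
            logistic_growth = (self.growth_rate * self.stock
                               * (1.0 - self.stock / self.carrying_capacity))
            self.pending_recruits = max(0.0, logistic_growth * (1.0 - degradation))

        def new_year(self):
            # Integrate last season's recruits into the exploitable stock.
            self.stock += self.pending_recruits
            self.pending_recruits = 0.0

        def harvest(self, catch):
            # Remove the catch decided by the fishing-activity modules.
            removed = min(self.stock, catch)
            self.stock -= removed
            return removed

Such a module would be stepped once per hydrological year and coupled with the behavioral modules that decide where and how much each community fishes.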
5 Conclusion

Summary: to open the methodological boxes that drive ABM design and bring ABMs to a transformative level, we have presented a methodological process that consists in developing models with various scopes and approaches on a common thematic ground and field work. We briefly reported a 10-year long modeling effort to support community-based management of renewable resources in the floodplains of the lower Amazon river in the Pará state of Brazil. We have designed two role-playing game hybrid models (a physical game and a participatory simulation model) that focus on various issues of local activities, together with an encompassing, cross-purpose ABM architecture named BRAV-ABM (BONDS Realistic Autonomous Varzea Agent-Based Model) aiming to bridge learning tools, descriptive models and simulation explorations in one comprehensive ABM toolbox.
Limitations: the main limitation of our methodological proposal pertains to the timeline of our research plan: building an ABM toolbox that eases the use of co-designed participatory models in broader-scope descriptive simulation took us 10 years, much of that time spent gaining the trust of locals, while the modeling of the various aspects of floodplain social and ecological dynamics is still pending and open. This last step has been made difficult by the current crisis, a limitation we discuss in the paragraph below.
Discussion: the context of Covid-19 has deeply questioned our way of doing science, especially in the context of participatory modeling, where social distancing has been a game changer, but also in the way science should produce and transmit knowledge and bring society efficient and pervasive policy tools. In this troubling context, a methodology based on participatory modeling has to find ways to enhance remote exchange. From our perspective, the challenge has been to keep local actors in line with our modeling effort, trying to find local intermediaries in order to continue interacting during workshops.
Future work: on a theoretical level, we seek to build more comprehensive and explicit methodological guidelines to design transformative ABMs. As a practical counterpart, we seek to eventually test and co-design in the field our latest board game model, PescaViva. Furthermore, we aim to strengthen the link with BRAV-ABM in order to
propose a simulated version of the board game, so as to foster virtual sessions and/or to easily record players' actions.
References

1. Axelrod, R., Hamilton, W.D.: The evolution of cooperation. Science 211(4489), 1390–1396 (1981). https://doi.org/10.1126/science.7466396, https://science.sciencemag.org/content/211/4489/1390
2. Barreteau, O., Bots, P., Daniell, K., Etienne, M., Perez, P., Barnaud, C., Bazile, D., Becu, N., Castella, J.C., Daré, W., Trebuil, G.: Participatory approaches. In: Edmonds, B., Meyer, R. (eds.) Simulating Social Complexity: A Handbook. Understanding Complex Systems, pp. 197–234. Springer, Berlin (2013)
3. Bommel, P., Bonnet, M.P., Coudel, E., Haentjens, E., Nunes, C., Melo, G., Nasuti, S.: Livelihoods of local communities in an Amazonian floodplain coping with global changes. In: iEMSs Proceedings, Toulouse, p. 8 (2016)
4. Bousquet, F., Barreteau, O., D'Aquino, P., Etienne, M., Boissau, S., Aubert, S., Le Page, C., Babin, D., Castella, J.: Multi-agent systems and role games: collective learning processes for ecosystem management (2002). https://hal.inrae.fr/hal-02582720
5. van Bruggen, A., Nikolic, I., Kwakkel, J.: Modeling with stakeholders for transformative change. Sustainability 11(3), 825 (2019). https://doi.org/10.3390/su11030825, https://www.mdpi.com/2071-1050/11/3/825
6. Castro, J., Drews, S., Exadaktylos, F., Foramitti, J., Klein, F., Konc, T., Savin, I., Bergh, J.: A review of agent-based modeling of climate-energy policy. WIREs Clim. Chang. 11(4) (2020). https://doi.org/10.1002/wcc.647, https://onlinelibrary.wiley.com/doi/abs/10.1002/wcc.647
7. Edmonds, B., Moss, S.: From KISS to KIDS - an 'anti-simplistic' modelling approach. In: Davidsson, P., Logan, B., Takadama, K. (eds.) Multi-Agent and Multi-Agent-Based Simulation. Lecture Notes in Computer Science, pp. 130–144. Springer, Berlin (2005)
8. Étienne, M., Bousquet, F., Le Page, C., Trébuil, G.: Transferring the ComMod approach. In: Étienne, M. (ed.) Companion Modelling: A Participatory Approach to Support Sustainable Development, pp. 291–309. Springer Netherlands, Dordrecht (2014)
9. Gilbert, N., Ahrweiler, P., Barbrook-Johnson, P., Narasimhan, K.P., Wilkinson, H.: Computational modelling of public policy: reflections on practice. J. Artif. Soc. Soc. Simul. 21(1), 14 (2018)
10. Hanappi, H.: Agent-based modelling. History, essence, future (2017)
11. Isaac, V.J., Castello, L., Santos, P.R.B., Ruffino, M.L.: Seasonal and interannual dynamics of river-floodplain multispecies fisheries in relation to flood pulses in the lower Amazon. Fish. Res. 183, 352–359 (2016). https://doi.org/10.1016/j.fishres.2016.06.017, https://www.sciencedirect.com/science/article/pii/S0165783616302065
12. Isaac, V.J., De Almeida, M.C.: El consumo de pescado en la Amazonía brasileña. COPESCAL Documento Ocasional (13) (2011). Food and Agriculture Organization of the United Nations
13. Kagho, G.O., Balac, M., Axhausen, K.W.: Agent-based models in transport planning: current state, issues, and expectations. Procedia Comput. Sci. 170, 726–732 (2020). https://doi.org/10.1016/j.procs.2020.03.164, https://linkinghub.elsevier.com/retrieve/pii/S187705092030627X
14. Katz, E., Lammel, A., Bonnet, M.P.: Climate change in a floodplain of the Brazilian Amazon: scientific observation and local knowledge. In: Welch-Devine, M.E., Sourdril, A.E., Burke, B.E. (eds.) Changing Climate, Changing Worlds: Local Knowledge and the Challenges of Social and Ecological Change. Ethnobiology, pp. 123–144. Springer (2020). https://hal.archives-ouvertes.fr/hal-02613684
15. Le Page, C., Perrotton, A.: KILT: a modelling approach based on participatory agent-based simulation of stylized socio-ecosystems to stimulate social learning with local stakeholders. In: Sukthankar, G., Rodriguez-Aguilar, J.A. (eds.) Autonomous Agents and Multiagent Systems, pp. 31–44. Springer International Publishing, Cham (2017)
16. McGrath, D.G., Cardoso, A., Almeida, O.T., Pezzuti, J.: Constructing a policy and institutional framework for an ecosystem-based approach to managing the lower Amazon floodplain. Environ. Dev. Sustain. 10(5), 677 (2008). https://doi.org/10.1007/s10668-008-9154-3
17. McGrath, D.G., Castello, L., Almeida, O.T., Estupiñán, G.M.B.: Market formalization, governance, and the integration of community fisheries in the Brazilian Amazon. Soc. Nat. Resour. 28(5), 513–529 (2015). https://doi.org/10.1080/08941920.2015.1014607, http://www.tandfonline.com/doi/full/10.1080/08941920.2015.1014607
18. McGrath, D.G., de Castro, F., Futemma, C., de Amaral, B.D., Calabria, J.: Fisheries and the evolution of resource management on the lower Amazon floodplain. Hum. Ecol. 21(2), 167–195 (1993). https://doi.org/10.1007/BF00889358
19. Metcalf, S.S., Wheeler, E., BenDor, T.K., Lubinski, K.S., Hannon, B.M.: Sharing the floodplain: mediated modeling for environmental management. Environ. Model. Softw. 25(11), 1282–1290 (2010). https://doi.org/10.1016/j.envsoft.2008.11.009, https://linkinghub.elsevier.com/retrieve/pii/S1364815208002193
20. Pahl-Wostl, C.: The importance of social learning in restoring the multifunctionality of rivers and floodplains. Ecol. Soc. 11(1), art10 (2006). https://doi.org/10.5751/ES-01542-110110, http://www.ecologyandsociety.org/vol11/iss1/art10/
21. Squazzoni, F., Polhill, J.G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, E., Borit, M., Verhagen, H., Giardini, F., Gilbert, N.: Computational models that matter during a global pandemic outbreak: a call to action. J. Artif. Soc. Soc. Simul. 23(2), 10 (2020)
22. Stefanska, J., Magnuszewski, P., Sendzimir, J., Romaniuk, P., Taillieu, T., Dubel, A., Flachner, Z., Balogh, P.: A gaming exercise to explore problem-solving versus relational activities for river floodplain management: a game to explore problem-solving vs. relational activities. Environ. Policy Gov. 21(6), 454–471 (2011). https://doi.org/10.1002/eet.586, http://doi.wiley.com/10.1002/eet.586
23. Taillandier, P., Grignard, A., Marilleau, N., Philippon, D., Huynh, Q.N., Gaudou, B., Drogoul, A.: Participatory modeling and simulation with the GAMA platform. J. Artif. Soc. Soc. Simul. 22(2), 3 (2019)
24. Theophilo Folhes, R.: A gênese da transumância no baixo Rio Amazonas: arranjos fundiários, relações de poder e mobilidade entre ecossistemas. Boletim Goiano de Geografia 38(1), 138–158 (2018). https://doi.org/10.5216/bgg.v38i1.52818, https://www.revistas.ufg.br/bgg/article/view/52818
25. Voinov, A., Bousquet, F.: Modelling with stakeholders. Environ. Model. Softw. 25(11), 1268–1281 (2010). Elsevier
Heterogeneous and Diverse Economic Agents
Exploring Coevolutionary Dynamics Between Infinitely Diverse Heterogenous Adaptive Automated Trading Agents Nik Alexandrov, Dave Cliff, and Charlie Figuero
Abstract We report on a series of experiments in which we study the coevolutionary “arms-race” dynamics among groups of agents that engage in adaptive automated trading in an accurate model of contemporary financial markets. At any one time, every trader in the market is trying to make as much profit as possible given the current distribution of different other trading strategies that it finds itself pitched against in the market; but the distribution of trading strategies and their observable behaviors is constantly changing, and changes in any one trader are driven to some extent by the changes in all the others. Prior studies of coevolutionary dynamics in markets have concentrated on systems where traders can choose one of a small number of fixed pure strategies, and can change their choice occasionally, thereby giving a market with a discrete phase-space, made up of a finite set of possible system states. Here we present first results from two independent sets of experiments, where we use minimal-intelligence trading-agents but in which the space of possible strategies is continuous and hence infinite. Our work reveals that by taking only a small step in the direction of increased realism we move immediately into high-dimensional phase-spaces, which then present difficulties in visualising and understanding the coevolutionary dynamics unfolding within the system. We conclude that further research is required to establish better analytic tools for monitoring activity and progress in co-adapting markets. We have released relevant Python code as opensource on GitHub, to enable others to continue this work. Keywords Financial markets · Agent-based computational economics · Coadaptation · Coevolution · Automated trading · Market dynamics
N. Alexandrov · D. Cliff (B) · C. Figuero Department of Computer Science, University of Bristol, Bristol BS8 1UB, UK e-mail: [email protected] N. Alexandrov e-mail: [email protected] C. Figuero e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Czupryna and B. Kami´nski (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_8
1 Introduction

In the past 20 years most of the world's major financial markets have seen a sharp rise in the level of automated trading on those markets, with many human traders being replaced by adaptive algorithmic "robot traders" at the point of execution. Although this has been a significant shift, affecting both patterns of employment and the dynamics of the markets concerned, it can plausibly be argued that at a macro-level little has changed: these major markets are still populated by traders working on behalf of major financial institutions such as investment banks or fund-management companies; the difference is just that now those institutions are represented in the markets not by teams of human traders but by teams of robots. To be more precise, it is more often the case that within any one institution entire teams of human traders have been replaced by a single monolithic automated trading system that does the work previously performed by tens of hard-working human traders. The success or failure of any one automated trading system is determined primarily by how much profit it can generate, but underlying that simple observation is a circularity. In any realistic market scenario, the profitability of a given robot trader R1 will be determined at least in part by the extent to which its actions in the market are well-tuned to the likely reactions of other traders in that market. Thus, in contemporary markets, R1 is likely to be designed to adapt its trading behavior to the current market circumstances, and yet those circumstances are significantly determined by the behavior of other traders in the market, most of which are robots R2, R3, R4 and so on, each of which is itself adapting to the circumstances it experiences in the market, which are to some extent influenced by the actions and reactions of R1. In the natural world, in the Darwinian survival-of-the-fittest interactions among evolving species of organisms, exactly this kind of circular interaction and dependency is commonplace, and is known technically as coevolution. Theoretical biologists have studied coevolution for many years, and have developed various game-theoretical analyses that give insights on the dynamics of the arms-races between competitively coevolving species: see e.g. [18]. In this paper we report on empirical simulation studies for which the starting-point draws direct inspiration from those theoretical biology studies of coevolutionary dynamics. Our motivation here is to try to better understand, to gain insights into, the practical extent to which the various adaptive trading systems in a market are affecting each other, and specifically to investigate whether the population of adaptive traders is ever likely to converge on a situation where all of the traders are well-adapted to each other's behavior yet each trader is not as profitable as it could otherwise be. That is: could the competitive interactions and adaptations of traders in the market collectively converge on a stable set of trading behaviors that are sub-optimal? And, if so, can we recognize when that has happened, or when it is about to happen? Similarly, might we be able to identify when the coevolutionary dynamics are about to lead to a flash-crash? We have commenced a sequence of empirical studies, starting with
minimal but realistic simulations that are principled approximations of present-day highly automated financial markets. Specifically, our ultimate aim has been to create agent-based models (ABMs) involving N_A agents where each agent represents one financial legal entity (i.e. either an individual independent trader or an institution such as a bank or fund-management company) operating a single profit-driven automated trading system that trades in competition with the N_A − 1 other agents, in an electronic market operating a continuous double auction (CDA) with a limit order book (LOB: see e.g. [15])—which is the present-day situation in many of the world's major financial markets. Each entity can in principle be adapting its strategy/behavior in real-time (e.g. using a machine learning mechanism) but is not required to do so. That is, an entity's trading strategy can be non-adaptive if that is the more profitable option. Furthermore, at any time the entity can elect to totally change the strategy that it is operating, modelling the case where a financial institution switches a trading algorithm that has previously been in development and testing (commonly referred to as a dev algo) into full use, the dev algo replacing the previously-running production algorithm (commonly known as the prod algo). Thus each trading entity in our model internally maintains a minimum of two strategies, each of which could potentially be adaptive: a prod algo and a dev algo. When the agent's dev algo replaces the prod, a new dev algo is created and is subsequently tested and refined until there is sufficient evidence that it is an improvement on the agent's current prod algo, at which point the dev again replaces prod and then another new dev is created. The trade-off between exploiting the prod algo and exploring the dev algo has manifest links to studies of multi-armed bandit problems (see e.g. [22]). We report here on the construction of two simulation models of this kind of system and on results from hundreds of thousands of simulated market sessions. (Keeping the published version of this paper to the required maximum page-count required us to omit several informative figures and many references; the full original version of this paper is freely available for download: see [2].)
Simulation modelling of financial markets very often involves populating a market mechanism with some number of trader-agents: autonomous entities that have "agency" in the sense that they are empowered to buy and/or sell items within the particular market mechanism that is being simulated: this approach, known as agent-based computational economics (ACE), has a history stretching back for more than 30 years, and much of the work in ACE studies of trading behaviours in models of financial markets owes a clear intellectual debt to work in experimental economics as pioneered by Vernon Smith (see e.g. [23]). Over the multi-decade history of ACE, a small number of specific trader-agent algorithms, i.e. precise mathematical and algorithmic specifications of particular trading strategies, have been frequently used for modelling various aspects of financial markets, and the convention that has emerged is to refer to each such strategy via a short sequence of letters, reminiscent of a stock-market ticker-symbol. Notable trading strategies in this literature include (in chronological sequence): SNPR [21], ZIC [14], ZIP [3], GD [13], RE [10], MGD [25], GDX [24], HBL [12], and AA
[27]; several of which are explained in more detail later in this paper. Of these, ZIC (invented by the economists Gode and Sunder [14]) is notable for being both highly stochastic and extremely simple, and yet it gives surprisingly human-like market dynamics; GD and ZIP were the first two strategies to be demonstrated as superior to human traders, a fact established in a landmark paper by IBM researchers [9], which is now commonly pointed to as initiating the rise of algorithmic trading in real financial markets; and until very recently AA was widely considered to be the best-performing strategy in the public domain. ZIC was the first instance of a zero intelligence trading strategy, which have proven to be surprisingly useful in ACE research: see, e.g., [16]. With the exception of SNPR and ZIC, all later strategies in this sequence are adaptive, using some kind of machine learning (ML) or artificial intelligence (AI) method to modify their responses over time, better-fitting their trading behavior to the specific market circumstances that they find themselves operating in. The supposed dominance of AA has recently been questioned in a series of publications which demonstrated AA to have been less robust than was previously thought. Most notably, [7] report on trials where AA is tested against two minimally simple algorithms that each involve no AI or ML at all: these two strategies are known as GVWY and SHVR [4, 5], and each share the pared-back minimalism of Gode and Sunder’s ZIC mechanism. In the studies that have been published thus far, depending on the circumstances, it seems (surprisingly) that GVWY and SHVR can each outperform not only AA but also many of the other AI/ML-based trader-agent strategies in the set listed above. Given this surprising recent result, there is an appetite for further ACE-style market-simulation studies involving GVWY and SHVR. One compelling issue to explore is the coevolutionary dynamics of markets populated by traders that can choose to play one of the three strategies from GVWY, SHVR, and ZIC, in a manner similar to that studied by [28] who employed replicator dynamics modelling techniques borrowed from theoretical evolutionary biology to explore the coevolutionary dynamics of markets populated by traders that could choose between SNPR, ZIP, and GD; each trader playing their chosen strategy for as long as it seems (to that trader) to be the most profitable strategy, and occasionally switching to (or “replicating”) use one of the other two strategies in the set if the current strategy appears (to that trader) to be weak. This replicator dynamics approach was also used in [27] to argue that AA was dominant over prior leading strategies, and in [26] to demonstrate that AA could in fact be dominated by other strategies. Replicator dynamics studies are typically limited to visualising and analysing the coevolutionary dynamics of simple, restricted systems where the restrictions are introduced to constrain the systems in such a way that they can be easily visualised and analysed. For instance, replicator dynamics studies often involve studying a population of agents that can switch between two, three, or at most four distinct pure strategies, and this decision often seems driven by the fact that visualisation of the dynamics, characterising the entire system dynamics, is often best done by reference to the system’s phase space, i.e. to plot some factor of interest for every possible state of the system. 
Let S be the set of distinct pure strategies that the agents in our system can choose between, let N_S = |S|, and let s_i : i ∈ {1, ..., N_S} refer to the i-th of those
strategies. Also let N_A be the number of agents in the system, each of which makes a choice of some s_i ∈ S. Such a system can be characterised in full, all possible points in its finite phase space enumerated and plotted, by considering each possible combination of allowable strategy choices or assignments made by the population of agents: if each of the N_A agents can independently choose any of the N_S strategies, then the number of possible system states, the number of points in its phase space, is N_S^{N_A}, a number that may grow large but will forever be finite. When N_S = 2, the system phase space can be characterised as points on a line, spanning from all agents playing s_1, through a 50:50 mix of s_1:s_2, to all agents playing s_2. When N_S = 3, the phase space can be characterised and visualised as points on the 2D unit simplex, an equilateral triangle where a point within or on the perimeter of the triangle represents a particular ratio of s_1:s_2:s_3, plotted in a barycentric coordinate frame. Technically, the one-dimensional (1D) line used for the phase-space of an N_S = 2 system is the 1D unit simplex; the 2D unit simplex is a triangle; and the 3D unit simplex is a 3D object, a tetrahedron, the volume bounded by four planar faces, each being an equilateral triangle. Higher-dimensional simplices are mathematically well-formed objects, but they are devils to visualise: try plotting the unit simplex for an N_S = 40 system. Although the original authors do not explicitly state their reasons, it seems reasonable to conclude that each of [26–28] chose to study replicator dynamics systems in which N_S = 3 and not any higher number because of the rapidly escalating difficulty of visualising the phase space for any higher value. Yet real-world markets do not involve all entities each selecting from a choice of two or three pure trading strategies, so there is then a major concern over the extent to which these studies adequately capture the much richer degree of heterogeneity in real-world markets: this brings to mind the old adage about the late-night drunkard looking for his lost house-keys under a streetlamp, not because that is where he mislaid them, but because the light is better there. So, although one way of studying coevolutionary dynamics in markets where the traders can choose to deploy either GVWY, SHVR, or ZIC is to give each trader a discrete choice of one from that set of three strategies, so that at any one time any individual trader is operating according to either GVWY or SHVR or ZIC, it is appealing to instead design experiments where the traders can continuously vary their trading strategy, exploring a potentially infinite range of differing trading strategies, where the space of possible strategies includes GVWY, SHVR, and ZIC. This is made possible by the recent introduction of a new minimal-intelligence trading strategy called PRZI [6]. PRZI's trading behavior is determined by a strategy parameter s ∈ [−1, +1] ⊂ R. When s = 0, the trader behaves identically to ZIC, and when s = ±1 it behaves the same as GVWY or SHVR. And, crucially, when a PRZI trader's s-value is some other value, either part-way between −1 and 0 or part-way between 0 and +1, its trading behavior is a kind of hybrid, part-way between that of ZIC and SHVR, or part-way between ZIC and GVWY.
Because the PRZI strategy-parameter s is a real number, and its effect on the trading behavior is smooth and continuous, in principle any one PRZI trader can make microscopically small adjustments and hence the space of possible strategies available to a single PRZI
trader is infinite, and the phase-space of a market of N_A agents is a bounded volume within R^{N_A}. In Sect. 2 we discuss our experiences in working with populations of coevolving PRZI traders, where we immediately come up against the limits of applicable visualisation techniques for this type of dynamical system. While markets of PRZI traders allow for continuous and infinite heterogeneity in the population of agents, the bounded nature of the PRZI strategy-space is a limitation that reduces the realism of the model. To address this, we have commenced work on an unboundedly infinite system, where each coevolving trader's strategy can in principle grow to be arbitrarily complex and sophisticated (that is, in principle a strategy can be anything that is expressible as a program in a Turing-complete list-based functional programming language), which we discuss in Sect. 3. For all our simulation studies reported here, we use the BSE simulator of a CDA market with a LOB (see [4, 5]), a mature open-source platform for ACE studies of electronic markets with automated trading.
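As a rough, back-of-envelope illustration of the gap between these two settings, the short Python sketch below counts the N_S^{N_A} states of a discrete-choice market of the kind analysed with replicator dynamics, and includes the standard barycentric mapping used to plot a three-strategy mix on the 2D unit simplex; the example values of N_S and N_A are our own and are chosen purely for illustration.

    import math

    N_S = 3    # e.g. a discrete choice between GVWY, SHVR and ZIC
    N_A = 50   # number of trading agents in the market

    # Discrete-choice setting: the phase space is finite but astronomically large.
    discrete_states = N_S ** N_A
    print(f"{N_S}^{N_A} = {discrete_states:.3e} possible system states")

    def mix_to_simplex_xy(p1, p2, p3):
        # Map a strategy mix (p1, p2, p3), with p1 + p2 + p3 == 1, to 2D coordinates
        # on an equilateral-triangle (barycentric) simplex plot with vertices
        # s1 = (0, 0), s2 = (1, 0), s3 = (0.5, sqrt(3)/2).
        return p2 + 0.5 * p3, (math.sqrt(3) / 2.0) * p3

    # In the continuous PRZI setting there is nothing analogous to enumerate:
    # the phase space is the uncountable hypercube [-1, +1]^N_A.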
2 Coevolution in a Bounded Infinite Space: PRZI

Full details of our initial work with coevolving populations of PRZI traders are given in [1], of which this section is only a very brief summary. As a first illustration, we set up a minimal coevolutionary system, one in which only two of the traders could change their strategy by altering their PRZI s-value. Let's refer to these two traders as T_1 and T_2: the two are independent, so T_1 can set its strategy value s_1 regardless of the value s_2 chosen by T_2, and vice versa. We set T_1 to be a buyer, and we set T_2 to be a seller and hence, because any seller in the market needs to find a buyer as a counter-party and vice versa, the profitability of T_1's choice of s_1 will be partially dependent on T_2's choice of s_2, and vice versa. We take the natural step of treating profitability as 'fitness' in the evolutionary sense, and hence this system is as simple as we can get while still being coevolutionary. For the adaptation process, each adaptive trader operates a simple Adaptive Climber (AC) algorithm defined in [1], which echoes the dev/prod development cycle discussed in the previous section: the trader maintains two separate strategies, i.e. two different PRZI s-values, referred to as P and D. P is initially set to some value, and D is set to a 'mutated' version of P, by adding a small random value (e.g. a sample from a uniform distribution over the range [−0.05, +0.05]). The AC method executes some number N_T of trades using strategy P and then executes N_T trades using strategy D. After that, if the profitability of P is greater than that of D then the trader generates a new D; but if the profitability of D exceeds that of P then D is used to replace P, and then a new D is generated as a mutant value of the new P. That is, AC is a minimally simple two-point stochastic hill-climber algorithm. In [2] we show a 2D quiver-plot of the phase space for an instance of this system. The 2D quiver plot is made possible because we constrained our system to only have two adaptive traders. As soon as we relax that constraint and have all N_A agents in our system adapting and coevolving against all the others, we need to make an N_A-dimensional
plot. Given that we routinely use N_A values of 50 or more, and that 50-dimensional quiver-plots are not easy to plot or understand, this mode of visualization runs out of steam as soon as N_A gets to plausibly interesting numbers. An N_A-dimensional coevolutionary market system is an instance of an N_A-dimensional dynamical system, and a popular method of characterising the dynamics of high-dimensional dynamical systems is the recurrence plot (RP: see e.g. [17]). This purely graphical technique can be extended by various quantitative methods known collectively as recurrence quantification analysis (RQA: see [29]). As is discussed at length in [1], we have explored the use of RPs and RQA for visualising and analysing our coevolutionary PRZI markets. In brief, for our purposes an RP visualization of an N-dimensional real-valued dynamical system is a rectangular grid of square binary pixels, i.e. pixels that are in one of two states: often either black or white. Let s(t) ∈ R^N be the state of the system at time t. A pixel is shaded black to represent that s(t) has recurred at time t, i.e. has previously been seen at some earlier time t − t′, and is shaded white otherwise. Recurrence can be defined in various ways, but the simplest is to take the N-dimensional Euclidean distance d = |s(t) − s(t − t′)| and to declare recurrence to have occurred if d is less than some threshold value. The coordinates of each pixel in an RP are set by its values of t and t − t′. In our work with coevolving PRZI traders, merely by allowing each zero-intelligence trader to have adaptive control of its single real-valued strategy parameter, for a market populated by N_A such traders, we have an N_A-dimensional phase space, a bounded hypercubic volume [−1, +1]^{N_A} ⊂ R^{N_A}, and monitoring the system's temporal evolution within that hypercube becomes immediately problematic. Analysis methods based on RPs and RQA, an approach currently popular and productive in many fields, get us only so far toward our ultimate aim of being able to understand what the system is doing and where it is going (as documented in [1])—and, unfortunately, they do not get us far enough. While it is tempting to invest time and effort in developing better RP/RQA methods for analysis of the PRZI market-system's phase-space trajectories in its subspace of R^{N_A}, the results we present in the next section cast doubt on whether that would actually be a useful thing to do. There, we discuss the consequences of taking a second small step in the direction of greater realism: one in which the space of possible strategies is still infinite, but is also unbounded. Once we get there, RP/RQA analysis totally runs out of steam.
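To make the two ingredients of this section more tangible, the following toy Python sketch shows (a) the two-trader Adaptive Climber loop and (b) the construction of a binary recurrence matrix from the resulting strategy trajectory. It is a sketch under our own simplifying assumptions: in particular, evaluate_profit is a stand-in placeholder rather than the profit earned over N_T trades in a BSE market session as used in [1], and none of the function names are part of BSE's actual API.

    import random
    import numpy as np

    def evaluate_profit(s_buyer, s_seller):
        # Placeholder: in the real experiments this is the profit earned over
        # N_T trades in a BSE market session; here it is just a stand-in.
        return -(s_buyer - 0.3) ** 2 - (s_seller + 0.2) ** 2 + random.gauss(0, 0.01)

    def mutate(s, width=0.05):
        # Mutate a PRZI s-value by adding uniform noise, clipped to [-1, +1].
        return max(-1.0, min(1.0, s + random.uniform(-width, width)))

    def adaptive_climber_step(P, D, profit_P, profit_D):
        # One AC decision: keep the better of P and D as the new P, and
        # generate a fresh mutant D from it (two-point stochastic hill-climber).
        if profit_D > profit_P:
            P = D
        return P, mutate(P)

    def run_two_trader_coevolution(steps=500):
        # Trajectory of the two traders' 'prod' strategies (s1 for buyer T1,
        # s2 for seller T2) as each runs its own AC loop.
        P1, D1 = 0.0, mutate(0.0)
        P2, D2 = 0.0, mutate(0.0)
        trajectory = []
        for _ in range(steps):
            # Each trader evaluates its P and its D against the opponent's current P.
            P1, D1 = adaptive_climber_step(P1, D1,
                                           evaluate_profit(P1, P2), evaluate_profit(D1, P2))
            P2, D2 = adaptive_climber_step(P2, D2,
                                           evaluate_profit(P1, P2), evaluate_profit(P1, D2))
            trajectory.append((P1, P2))
        return np.array(trajectory)

    def recurrence_matrix(trajectory, threshold=0.05):
        # Binary recurrence matrix: entry (i, j) is 1 if the system state at
        # time i lies within `threshold` (Euclidean distance) of the state at time j.
        d = np.linalg.norm(trajectory[:, None, :] - trajectory[None, :, :], axis=-1)
        return (d < threshold).astype(int)

    rp = recurrence_matrix(run_two_trader_coevolution())

Rendering the matrix as a black-and-white image (e.g. with matplotlib's imshow) gives the recurrence plot; the RQA measures of [29] are statistics computed over the line structures visible in that image.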
3 Coevolution in an Unbounded Infinite Space: STGP

While the work discussed in the previous section is illuminating, our PRZI-market model can be criticised for its lack of realism in the sense that each adaptive PRZI trader is constrained to play a zero-intelligence strategy that is either ZIC, GVWY, SHVR, or some intermediate hybrid mix: traders in the coevolving PRZI market are never even going to play a more sophisticated minimal-intelligence strategy like AA, GDX, or ZIP. But our work is motivated by the observation that in real-world
coevolving markets, the trading entities are not constrained to select between a fixed number of existing pure strategies, nor are they constrained to choose a point in some continuous subspace that includes specific pure strategies as special cases. In real markets, any entity at any time is free to invent its own strategy or to alter/extend an existing one. Our work with PRZI has revealed some of the issues of visualising and analysing such systems, but the bounded nature of its R^{N_A} subspace means that it can never show the kind of coevolutionary dynamics of the class of system that we seek to ultimately address in our work. Thus, we need a model in which the space of strategies is not only infinite but also unbounded. In this section, we briefly describe early results from ongoing work in which each entity does have the freedom to adapt by innovating, by creating wholly new strategies, and in which the space of possible strategies is unbounded and hence infinite. Genetic Programming (GP: see e.g. [20]) is a form of evolutionary computing in which a genetic algorithm operates on 'genomes' that are encodings of programs in a list-based functional language such as LISP or Clojure. Starting with an initial population of programs P_1, each of the N_p individuals i in P_1 is evaluated via a fitness function which assigns a scalar fitness value f_i(t) to that individual. When all N_p individuals in P_1 have been evaluated and assigned an f_i(t) value, a new population of N_p individuals is created by a process of breeding where pairs of individuals in P_1 are selected with a probability proportionate to their fitness (so fitter individuals are more likely to be selected for breeding) and one or more children are created that have genomes which inherit from the pair of parents in ways inspired by real-world sexual reproduction with mutation. In this way, the population of new children P_2 becomes the next generation of the system; the old population P_1 is typically discarded, each individual in P_2 then has its fitness evaluated, and the next generation P_3 is then bred from P_2's fitter members: if this process is repeated for sufficiently many generations, and if hyperparameters such as the mutation rate are set correctly, then useful novel programs can be created by the 'Blind Watchmaker' of Darwinian evolution. To illustrate this, consider a simple functional language that allows for expressions computable by a four-function pocket calculator, where multiplication has the symbol M, division has D, subtraction S, and addition A. The expression (3 × 2) + (10 ÷ (5 − 3)) (which evaluates to 11) could be written in a list-based style as (A, (M, 3, 2), (D, 10, (S, 5, 3))), and can be visualised as a tree structure, as illustrated in [2]. In our work we are using a variant of GP known as strongly-typed genetic programming (STGP), where data-type constraints are enforced between connecting nodes of a program tree [19]. Now each entity in our model market, rather than using the Adaptive Climber algorithm to optimize a single numeric s_i strategy value, instead uses an STGP process to create new programs that implement trading strategies: we start with a population seeded with minimally simple programs, and then we unleash them, allowing the coevolutionary process to proceed, during which each entity is at liberty to create programs of growing complexity and sophistication, if in doing so they generate greater profits.
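The list-based genome format just described is straightforward to evaluate mechanically, which is part of what makes it convenient for (ST)GP. The short Python sketch below, written purely for illustration, evaluates expressions in the four-operator calculator language of the worked example and confirms that (A, (M, 3, 2), (D, 10, (S, 5, 3))) evaluates to 11.

    # Illustrative evaluator for the list-based calculator language used in the
    # worked example above: A = addition, S = subtraction, M = multiplication,
    # D = division. A genome is either a number or a tuple (op, arg1, arg2).
    OPS = {
        'A': lambda a, b: a + b,
        'S': lambda a, b: a - b,
        'M': lambda a, b: a * b,
        'D': lambda a, b: a / b,
    }

    def evaluate(expr):
        # Recursively evaluate a genome expressed as nested (op, arg, arg) tuples.
        if isinstance(expr, (int, float)):
            return expr
        op, left, right = expr
        return OPS[op](evaluate(left), evaluate(right))

    # (3 x 2) + (10 / (5 - 3)) == 11
    print(evaluate(('A', ('M', 3, 2), ('D', 10, ('S', 5, 3)))))  # prints 11.0

In the STGP traders themselves, the terminal set additionally includes market observables such as the best price on the LOB and the customer order's limit price (as in the genomes of Table 1), and the strong typing of [19] constrains which subtrees may legally be combined.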
Full details of our STGP work are given in [11], to which the reader is referred for further detail; here we present only the briefest of results, from a single successful
experiment, to motivate discussion of the problems of visualization and analysis that arise when working in this unbounded infinite space of possible programmatic trading strategies. As an initial exploration into the dynamics of the STGP traders coevolving in BSE, a simulation was run over 40 generations for 10,000 units of time. 100 ZIC sellers were run against 50 ZIC and 50 STGP buyers; both buyers and sellers were regularly replenished with fresh "customer orders" (i.e., an instruction to buy or to sell, and an associated private limit price for that transaction) to execute. The STGP traders were each initialised with a price-improvement expression of (S, (S, P*_same, 1), λ_i,c), where S is the subtraction operator, P*_same is the best price on the same side of the LOB as the trader, and λ_i,c is the limit price for this customer order c to be executed by trader i. This expression represents the zero-intelligence SHVR trader, expressed in STGP tree form. Summary results, a plot of profit-values in each generation, are shown in Fig. 1. As can be seen, the profitability data are biphasic: there is an initial brief phase of rapid growth in profitability, followed by a prolonged phase where profitability steadily declines. The initial rise in profitability is as would be expected, and hoped for: the STGP coevolution is discovering ever more profitable trading strategies over successive early generations. The second phase, where profits are steadily eroded, is perhaps less expected and less desired, but can readily be explained by the competitive coevolutionary process progressively eating away at profits: if one SHVR-like trader is profitable by shaving 2¢ off of the best price on each revision, then it can be beaten to the deal by another SHVR-like trader who instead shaves 3¢; but that trader could in turn be beaten by a SHVR-style trader who instead shaves 4¢ off the best price,
Fig. 1 Results from an illustrative successful STGP experiment: horizontal axis is generation number; vertical axis is profit. At each generation the maximum profit achieved by an STGP trader is plotted, along with the mean profitability of all 50 STGP traders; error-bars show ±1 standard deviation around the mean
Table 1 Selected STGP genomes for the best individual in the population at various generations in the experiment illustrated in Fig. 1: see text for discussion

Gen  Expression tree
1    (S,(S,P*_best,1),λ_i,c)
2    (S,(S,P*_best,1),1)
3    (S,(S,P*_best,1),1)
4    (S,(S,(S,P*_best,1),1),1)
...  ...
26   (S,(S,(S,(S,(S,(S,(S,(S,(S,P*_best,1),7),1),1),7),1),7),7),1)
27   (S,(S,(S,(S,(S,(S,(S,(S,(S,P*_best,1),7),1),7),7),1),1),7),1)
28   (S,(S,(S,(S,(S,(S,(S,(S,(S,P*_best,1),7),1),1),7),1),7),7),1)
29   (S,(S,(S,(S,(S,(S,(S,(S,(S,(S,P*_best,1),7),7),1),1),7),1),1),7),1)
30   (S,(S,(S,(S,(S,(S,(S,(S,(S,(S,P*_best,1),7),7),1),1),7),1),1),7),1)
and so on: price-competition among the coevolving traders awards higher fitness to those individuals that get more deals by shaving greater amounts off of the current best price on the LOB, but in doing so the most successful cut their margins ever smaller, eventually hitting a zero margin, at which point they are playing not SHVR but GVWY. Table 1 shows the genome of the elite (most profitable) trader in a selection of generations from the experiment illustrated in Fig. 1. There are two things to note in the genomes shown here. First, STGP (and vanilla GP too) frequently suffers from bloat, creating viable expressions or programs that get the job done, but which are expressed in very verbose form: for example, the elite individual at generation 30 has a genome that translates to P*_best − 1 − 7 − 7 − 1 − 1 − 7 − 1 − 1 − 7 − 1, which any competent programmer would immediately rewrite as P*_best − 34 (i.e., as a shortened genome of (S, P*_best, 34)). Second, because the functional languages used in (ST)GP are richly expressive (that is, the same algorithm or expression can be written in many different ways), the use of methods based on recurrence plots (RPs) becomes deeply problematic: the recurrence of any one particular strategy that had occurred earlier in the evolutionary process may be difficult to automatically detect. For instance, if the elite genome is (S, P*_best, 34) at generation 30, is (S, (S, P*_best, 14), 20) at generation 60, and is (S, (S, P*_best, 44), −10) at generation 90, then we humans can see by inspection that the same strategy is recurring every 30 generations, but an automated analysis technique would need to go beyond the lexical/syntactic dissimilarity in these expressions and instead reason about the underlying semantics of the functional programming language. For the simple mathematical expressions being discussed here, it is reasonable to operationally reduce each of them to some agreed canonical form, but for only slightly more sophisticated (and stateful) algorithms such as AA, GDX, or ZIP, a many-to-one mapping, a reduction of all possible implementations, all possible expressions, of that algorithm down
to a single canonical form is unlikely to ever be achievable. And so, RP-based methods cease to have any applicability here too. Once again, we take a small step in the direction of increased realism in our coevolutionary models, and the visualization/analytics tool-box is empty.
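To illustrate why canonical-form reduction is feasible for these simple stateless expressions (and only for them), the sketch below reduces nested-subtraction genomes of the kind shown in Table 1 to the canonical form P*_best − c by summing their constants, so that the three lexically different genomes discussed above all map to the same strategy. The tuple representation and the function name are our own illustrative choices, not part of the STGP code released on GitHub.

    # Illustrative only: reduce genomes built solely from nested subtractions of
    # constants from P*_best (represented here by the string 'P_best') to the
    # canonical form ('S', 'P_best', c). Such a reduction does not generalise to
    # stateful strategies like AA, GDX or ZIP.
    def canonicalise(expr):
        # Return the total constant subtracted from P_best, or None if the
        # genome is not of the purely-subtractive form handled here.
        if expr == 'P_best':
            return 0
        if isinstance(expr, tuple) and len(expr) == 3 and expr[0] == 'S':
            _, left, right = expr
            inner = canonicalise(left)
            if inner is not None and isinstance(right, (int, float)):
                return inner + right
            return None
        return None

    g30 = ('S', 'P_best', 34)
    g60 = ('S', ('S', 'P_best', 14), 20)
    g90 = ('S', ('S', 'P_best', 44), -10)
    assert canonicalise(g30) == canonicalise(g60) == canonicalise(g90) == 34

An RP-style analysis could then treat two genomes as the same system state whenever their canonical forms match; the point made above is that no comparable reduction is likely to exist for richer, stateful strategies such as AA, GDX, or ZIP.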
4 Discussion and Conclusion

The experiments and results that we have described here have demonstrated that, when we move our ACE-style market models ever so slightly in the direction of being closer to real-world markets, we find that the toolbox for visualisation and analysis of the resultant system dynamics starts to look very empty. While it is relatively easy to make the changes necessary to extend existing models to make them more realistic, it is relatively hard to work out what the extended systems are actually doing, and hence we need new tools to help us do that. Our current work is concentrated on exploring the use of CIAO plots [8] in characterising the coevolutionary dynamics of our STGP system. While many research papers in science and engineering are written to describe the solution to some problem, this is not one of those papers. Instead, this is a paper that describes a problem in need of a solution. Or, more specifically, a problem that we expect to be tackled from multiple perspectives, one that eventually yields to multiple complementary solutions. In future work, we intend to develop novel visualisation and analysis techniques for coevolutionary market systems with unboundedly infinite continuous strategy spaces, which we will report on in due course; but in writing this paper we hope to encourage other researchers to work on this challenging problem too. To facilitate that, we have made our Python source-code freely available as open-source releases on GitHub, which is where in future we will also release our own visualisation and analysis methods as we develop them. (The Python code in the main BSE GitHub repository [4] has been extended by the addition of a minimally simple adaptive PRZI trader, a k-point stochastic hill-climber, referred to as PRZI-SHCk (pronounced prezzy-shuck), for which the k = 2 case is a close relative of the AC algorithm described in Sect. 2 and which can readily be used for studies of coevolutionary dynamics. The source-code for our STGP work is available separately at https://github.com/charliefiguero/stgptrader/.)
References 1. Alexandrov, N.: Competitive arms-races among autonomous trading agents: exploring the coadaptive dynamics. Master’s thesis, University of Bristol (2021) 2. Alexandrov, N., Cliff, D., Figuero, C.: Exploring coevolutionary dynamics of competitive arms-races between infinitely diverse heterogenous adaptive automated trading agents (2021). SSRN: 3901889
2
The Python code in the main BSE GitHub repository [4] has been extended by addition of a minimally simple adaptive PRZI trader, a k-point stochastic hill climber, referred to as PRZI-SHCk (pronounced prezzy-shuck), for which the k = 2 case is a close relative of the AC algorithm described in Sect. 2 and which can readily be used for studies of coevolutionary dynamics. The source-code for our STGP work is available separately at https://github.com/charliefiguero/stgptrader/.
3. Cliff, D.: Minimal-intelligence agents for bargaining behaviours in market-based environments. Technical Report HPL-97-91, HP Labs Technical Report (1997) 4. Cliff, D.: Bristol stock exchange: open-source financial exchange simulator (2012). https:// github.com/davecliff/BristolStockExchange 5. Cliff, D.: BSE: a minimal simulation of a limit-order-book stock exchange. In: Bruzzone, F. (ed.) Proceedings of 30th European Modeling and Simulation Symposium (EMSS2018), pp. 194–203 (2018) 6. Cliff, D.: Parameterized-response zero-intelligence traders (2021). SSRN: 3823317 7. Cliff, D., Rollins, M.: Methods matter: a trading algorithm with no intelligence routinely outperforms AI-based traders. In: Proceedings of IEEE Symposium on Computational Intelligence in Financial Engineering (CIFEr2020) (2020) 8. Cliff, D., Miller, G.: Visualizing coevolution with CIAO plots. Artif. Life 12(2), 199–202 (2006) 9. Das, R., Hanson, J., Kephart, J., Tesauro, G.: Agent-human interactions in the continuous double auction. In: Proceedings of IJCAI-2001, pp. 1169–1176 (2001) 10. Erev, I., Roth, A.: Predicting how people play games: reinforcement learning in experimental games with unique, mixed-strategy equilibria. Am. Econ. Rev. 88(4), 848–881 (1998) 11. Figuero, C.: Evolving trader-agents via strongly typed genetic programming. Master’s thesis, University of Bristol Department of Computer Science (2021) 12. Gjerstad, S.: The impact of pace in double auction bargaining. Technical report, Department of Economics, University of Arizona (2003) 13. Gjerstad, S., Dickhaut, J.: Price formation in double auctions. Games Econ. Behav. 22(1), 1–29 (1998) 14. Gode, D., Sunder, S.: Allocative efficiency of markets with zero-intelligence traders: market as a partial substitute for individual rationality. J. Polit. Econ. 101(1), 119–137 (1993) 15. Gould, M., Porter., M., Williams, S., McDonald, M., Fenn, D., Howison, S.: Limit order books. Quant. Financ. 13(11), 1709–1742 (2013) 16. Ladley, D.: Zero intelligence in economics and finance. Knowl. Eng. Rev. 27(2), 273–286 (2012) 17. Marwan, N.: How to avoid potential pitfalls in recurrence plot based data analysis. Int. J. Bifurc. Chaos 21(4), 1003–1017 (2011) 18. Maynard Smith, J.: Evolution and the Theory of Games. Cambridge University Press, Cambridge (1982) 19. Montana, D.: Strongly typed genetic programming. Evol. Comput. 3(2), 199–230 (1995) 20. Poli, R., Langdon, W., McPhee, N.: A Field Guide to Genetic Programming. Lulu, Morrisville (2008) 21. Rust, J., Miller, J., Palmer, R.: Behavior of trading automata in a computerized double auction market. In: Friedman, D., Rust, J. (eds.) The Double Auction Market: Institutions, Theories, and Evidence, pp. 155–198. Addison-Wesley, Boston (1992) 22. Slivkins, A.: Introduction to multi-armed bandits (2021). arXiv:1904.07272v6 23. Smith, V.: Papers in Experimental Economics. Cambridge University Press, Cambridge (1991) 24. Tesauro, G., Bredin, J.: Sequential strategic bidding in auctions using dynamic programming. In: Proceedings of AAMAS 2002 (2002) 25. Tesauro, G., Das, R.: High-performance bidding agents for the continuous double auction. In: Proceedings of 3rd ACM Conference on Electronic Commerce, pp. 206–209 (2001) 26. Vach, D.: Comparison of double auction bidding strategies for automated trading agents. Master’s thesis, Charles University in Prague (2015) 27. Vytelingum, P., Cliff, D., Jennings, N.: Strategic bidding in continuous double auctions. Artif. Intell. 172(14), 1700–1729 (2008) 28. 
Walsh, W., Das, R., Tesauro, G., Kephart, J.: Analyzing complex strategic interactions in multiagent systems. In: Proceedings of the AAAI Workshop on Game-Theoretic and Decision-Theoretic Agents (2002) 29. Webber, C., Marwan, N. (eds.): Recurrence Quantification Analysis: Theory and Best Practice. Springer, Berlin (2015)
Pay-for-Performance and Emerging Search Behavior: When Exploration Serves to Reduce Alterations

Friederike Wall
Abstract Prior research suggests that the fit between task complexity and incentive schemes like pay-for-performance positively affects organizational performance. This study goes a step further and seeks to investigate how different types of pay-for-performance affect subordinates' search behavior for novel solutions to complex decision problems. Based on NK fitness landscapes, the study employs an agent-based simulation with subordinate decision-makers individually adapting their search behavior via reinforcement learning. The results suggest that the emerging search strategy is subtly shaped by the (mis-)fit between task complexity and incentive structure. In particular, the results indicate that search behavior may arise for different "reasons" ranging from fostering new solutions to even preventing alterations.

Keywords Agent-based simulation · Complexity · Incentives · Learning · Search behavior
1 Introduction and Related Research

In organizations, reward systems like "pay-for-performance" are prominent coordination mechanisms for affecting decision-makers' behavior to support organizational objectives. A considerable body of research suggests that the incentives provided to subordinate decision-makers should be adjusted to the complexity of an organization's overall task, which refers to the fundamental organizational tension of "differentiation versus integration" [1]. When an organization faces a complex task with interactions between sub-problems, overall superior solutions cannot be found by solving the sub-problems separately. An interesting question is how a given combination of task complexity and incentive scheme affects the behavior of subordinate decision-makers when searching for superior solutions, which refers to the second fundamental tension of "exploration versus exploitation" [2] in organizational
thinking: Is subordinate managers' search directed towards incremental improvements around known solutions (exploitation), or are they encouraged to make longer jumps in favor of novel solutions (exploration)? For the adjustment of the incentive system to the task environment, Bushman et al. [3] combine a closed-form analysis based on principal-agent theory with an empirical study. They find that the use of aggregate performance measures as a value base for rewards increases with intra-firm interdependencies. Studies employing agent-based simulations—and, thus, relaxing some of the behavioral assumptions captured in Bushman et al.'s model—provide support for this finding. Siggelkow and Rivkin [4] find that incentives based on an aggregated level promote higher firm performance under high task complexity than rewards based on units' performance do. Wall [5] studies the emergence of incentive systems for differently complex task environments combined with other coordination mechanisms. The results support Bushman et al. [3], though it turns out that other coordination mechanisms may serve as a substitute for rewards based on aggregate performance metrics. For the effects of the incentive structure on search behavior, Manso [6] introduces a model based on the principal-agent framework, where exploitation and exploration are afflicted with different search costs for the agent who is uncertain about the distribution of payoffs of new solutions. The results suggest that the optimal innovation-motivating incentive scheme exhibits substantial tolerance for early failure and reward for long-term success, which broadly corresponds to the empirical study of Lerner and Wulf [7] on effects of incentives provided to employees in R&D. However, there are advocates in favor of a broader understanding of search behavior considering contingencies like task environment and other institutional arrangements (e.g., [8, 9]). Moreover, the behavioral assumptions about economic agents incorporated in principal-agent theory—the prevailing paradigm for studying incentive systems—are the focus of ongoing discussion.1 Against this background, this paper studies the effects of different types of pay-for-performance on subordinates' search behavior for novel solutions to complex decision problems. The study employs an agent-based simulation model in the spirit of Agent-based Computational Economics—including the behavioral assumptions about economic agents [12]. The model builds on Wall [16] and uses the framework of NK fitness landscapes [13, 14] as broadly employed for studying search processes in organizations (for overviews, e.g., [15]). The following section outlines the simulation model before Sect. 3 provides an overview of the simulation experiments. In Sect. 4, results are presented and discussed, followed by some concluding remarks.
1 For example, in the "standard" hidden-action model of principal-agent theory, both contracting parties know the entire space of possible effort levels which the agent could employ [10]. Hence, particular search processes to discover new options within the solution space are not required [11, with further references].
2 Outline of the Simulation Model

In the simulations, hierarchical organizations, comprising properties of "differentiation and integration" [1], search for higher performance levels with subordinate decision-makers adapting their search behavior based on learning via reinforcement. The modeling of the hierarchical structure (Sect. 2.1) and the incentive structure for subordinate decision-makers (Sect. 2.2) follows Wall (see [16] for a more extensive description): The organizations comprise two types of decision-making agents: (1) one headquarters and (2) M departmental managers, each of which is the head of the respective department (or unit) r = 1, . . . , M. In line with Simon's [17, 18] behavioral assumptions, unit heads suffer from some cognitive limitations. In particular, they cannot survey the entire search space and, hence, cannot "locate" the optimal solution of their partial decision problem "at once" but have to search stepwise for superior solutions. At this point, two model extensions (compared to [16]) come into play: Unit heads may employ (1) heterogeneous search strategies (Sect. 2.3), which they (2) adapt based on reinforcement learning (Sect. 2.4).
2.1 Structure and Decision Problem of the Organizations

In line with the NK framework [13, 14], at each time step t, the organizations face an N-dimensional binary decision problem, i.e., d_t = (d_1t, . . . , d_Nt) with d_it ∈ {0, 1}, i = 1, . . . , N, out of 2^N different binary vectors possible. Each of the two states d_it ∈ {0, 1} provides a distinct contribution C_it to the overall performance V(d_t). The contributions C_it are randomly drawn from a uniform distribution with 0 ≤ C_it ≤ 1. Though randomly drawn, the contributions C_it are a function of choices and interactions among choices:

C_it = f_i(d_it; d_{i1,t}, . . . , d_{iK,t})    (1)

with {i_1, . . . , i_K} ⊂ {1, . . . , i − 1, i + 1, . . . , N}. Hence, parameter K (with 0 ≤ K ≤ N − 1) captures the complexity of the decision problem in terms of the number of those choices d_jt, j ≠ i, which also affect the performance contribution C_it of choice d_it. In case of no interactions among choices, K equals 0, and K = N − 1 for the maximum level of complexity where each single choice i affects the performance contribution of each other binary choice j ≠ i. The overall performance V_t achieved in period t—which reflects the objective of the headquarters—results as the normalized sum of contributions C_it from

V_t = V(d_t) = (1/N) · Σ_{i=1}^{N} C_it .    (2)
The organizations make use of division of labor. In particular, the N-dimensional overall decision problem is decomposed into M disjoint partial problems, and each of these sub-problems is delegated to one department r. Shaped by the level K and the structure of interdependencies, indirect interactions among the departments may result (e.g., Fig. 1b). Let K_ex denote the level of cross-unit interdependencies. In case that cross-unit interdependencies exist, i.e., if K_ex > 0, then the performance contribution of department r's choices to overall performance V is affected by choices made by other units q ≠ r, and vice versa.
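For readers who want to experiment with the decision problem described above, the following is a minimal Python sketch of an NK-style performance function in the spirit of Eqs. 1 and 2; it is our own illustration with assumed function and variable names, not the author's simulation code:

```python
import itertools
import random

def make_landscape(N, K, seed=0):
    """Random NK-style landscape: each choice i contributes C_i, which depends
    on its own state and on the states of K randomly chosen other choices."""
    rng = random.Random(seed)
    neighbours = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [{bits: rng.random() for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]

    def performance(d):
        """Overall performance V(d) as the normalised sum of contributions (Eq. 2)."""
        contribs = [tables[i][(d[i],) + tuple(d[j] for j in neighbours[i])]
                    for i in range(N)]
        return sum(contribs) / N

    return performance

V = make_landscape(N=12, K=2)
print(V(tuple([0] * 12)), V(tuple([1] * 12)))
```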
2.2 Incentive Structure

The department heads seek to maximize compensation which is merit-based and, for the sake of simplicity, depends linearly on the value base P_t^r(d_t). Hence, unit head r prefers that option d_t available at time step t which promises the highest value base for compensation P_t^r(d_t), which is—similar to [4]—given by

P_t^r(d_t) = P_t^{r,own}(d_t^r) + α · P_t^{r,res}    (3)

where P_t^{r,own}(d_t^r) = (1/N) · Σ_{i=1+w}^{w+N^r} C_it (with w = Σ_{m=1}^{r−1} N^m for r > 1 and w = 0 for r = 1, and with N^r denoting the number of single choices assigned to department r) is the performance obtained from those decisions assigned to manager r. P_t^{r,res} = Σ_{q=1, q≠r}^{M} P_t^{q,own} gives the performance achieved in the "rest" of the organization, i.e., from decisions of the other managers. Hence, with α = 0, departmental performance is rewarded, while with α = 1 manager r's compensation, in fact, depends on the performance V_t (Eq. 2) of the entire organization. In every time step t, each manager r seeks to identify the best configuration for the "own" choices d_t^r out of the currently available options with respect to the value base of compensation. The options each manager can choose from are shaped by the manager's search strategy, which is subject to learning.
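A compact way to express Eq. 3 in code is the following hedged sketch; the names are ours, and the department partition below simply mirrors the N = 12, M = 4 setting used later in Table 1:

```python
def value_base(contribs, departments, r, alpha):
    """Value base of compensation for unit head r (Eq. 3): own performance
    plus alpha times the performance achieved in the rest of the organization."""
    N = len(contribs)
    own = sum(contribs[i] for i in departments[r]) / N
    rest = sum(contribs[i] for q, idx in enumerate(departments) if q != r for i in idx) / N
    return own + alpha * rest

departments = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]   # M = 4 units, N = 12 choices
# alpha = 0: departmental performance rewarded; alpha = 1: firm performance rewarded.
print(value_base([0.5] * 12, departments, r=0, alpha=1.0))      # equals 0.5 when alpha = 1
```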
2.3 Search Strategies and Formation of Expectations

Due to cognitive limitations, managers cannot "locate" the optimal solution of their decision problem "at once", but have to search stepwise for superior solutions. In every time step t, each manager r discovers two alternatives d_t^{r,a1} and d_t^{r,a2} for the partial decision problem compared to the status quo d_{t−1}^{r∗}. The alternatives d_t^{r,a1} and d_t^{r,a2} a unit head discovers are shaped by the search strategy being subject to the manager's learning. At a certain time step t, unit manager r pursues a search strategy s_t^r out of a set of feasible strategies s^a ∈ S with S = {s_exploit, s_ambi, s_explore} which differ in the Hamming distances of the newly discovered options to the status quo.
In the exploitative type (named s_exploit), the Hamming distances of the two alternative options to the status quo equal 1 (i.e., h(d^{r,a1}) = Σ_{i=1}^{N^r} |d_{i,t−1}^{r∗} − d_{i,t}^{r,a1}| = 1; h(d^{r,a2}) = 1). In a purely explorative strategy (denoted s_explore), the Hamming distances of the two alternatives are h(d^{r,a1}) = h(d^{r,a2}) = 2. Ambidextrous search (s_ambi) captures the case of h(d^{r,a1}) = 1 and h(d^{r,a2}) = 2. However, when forming their preferences, the unit heads show some cognitive limitations: First, unit head r cannot anticipate the other managers' (q ≠ r) choices; instead, manager r assumes that the fellow managers will stay with the status quo, i.e., opt for d_{t−1}^{q∗}. Second, unit heads are not able to perfectly ex-ante evaluate the effects of their newly discovered options d_t^{r,a1} and d_t^{r,a2} on their respective value base of compensation P_t^r(d_t) (see Eq. 3).2 Rather, ex-ante evaluations are afflicted with noise which is, for the sake of simplicity, a relative error imputed to the true performance [16, 19]. The error terms follow a Gaussian distribution N(0; σ) with expected value 0 and standard deviations σ^{r,own} and σ^{r,res}; errors are assumed to be independent from each other. The ex-ante perceived value base of compensation P̃_t^r(d_t) of manager r is given by

P̃_t^r(d_t) = P̃_t^{r,own}(d_t^r) + α · P̃_t^{r,res}   with   P̃_t^{r,own}(d_t^r) = P_t^{r,own}(d_t^r) + e^{r,own}(d_t^r)    (4)

P̃_t^{r,res}(d_t^{r,res}) = P_t^{r,res}(d_t^{r,res}) + e^{r,res}(d_t^{r,res}).    (5)
With this, each department head r has a distinct “view” of the fitness landscape, which results (1) from decomposition into and delegation of sub-problems, and (2) from the unit heads’ individual “perceptions”: Even if firm performance is rewarded (i.e., if α = 1 in Eq. 3), due to the individualized error terms σ r,own and σ r,r es every decision-maker has a distinct perception of a configuration dt .
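The discovery of alternatives at a given Hamming distance and the noisy ex-ante evaluation can be sketched as follows; this is an illustrative toy implementation under our own naming assumptions, not the original model code:

```python
import random

def discover(status_quo, hamming, rng):
    """Return a copy of a unit's partial configuration with `hamming` bits flipped."""
    d = list(status_quo)
    for i in rng.sample(range(len(d)), hamming):
        d[i] = 1 - d[i]
    return tuple(d)

def perceived(true_value, sigma, rng):
    """Ex-ante evaluation distorted by a relative, normally distributed error."""
    return true_value * (1 + rng.gauss(0.0, sigma))

rng = random.Random(42)
strategies = {"exploit": (1, 1), "ambi": (1, 2), "explore": (2, 2)}
status_quo = (0, 1, 0)                                  # N^r = 3 choices of unit r
alternatives = [discover(status_quo, h, rng) for h in strategies["ambi"]]
noisy_own = perceived(0.71, sigma=0.05, rng=rng)        # sigma^{r,own} = 0.05 in Table 1
```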
2.4 Learning-Based Selection of Search Strategies

For capturing the emergence of managers' search behavior, the model employs a simple form of reinforcement learning [20]. In particular, in every T^L-th period, each unit head r chooses a search strategy s^r out of the set of feasible strategies s^a ∈ S with S = {s_exploit, s_ambi, s_explore}, where, for the sake of simplicity, the set S of strategies is the same for every unit (see Sect. 2.3). Hence, the search strategies employed at a certain time step t may differ across the units. According to reinforcement learning, the probabilities of options to be chosen for the future are updated according to the—positive or negative—stimuli resulting from these options in the past. Let p^r(s^a, t) denote the probability3 of a search strategy
s^a to be chosen at time t for the next T^L periods by department r. Manager r's update of the strategies' probabilities depends on whether the manager regards the (eventual) change of the value base of compensation P_t^r(d_t) (Eq. 3) obtained under the regime of a certain search strategy s^r(t) in the previous T^L periods as positive or negative, i.e., whether, or not, it at least equals an aspiration level v^r. In particular, via the compensation obtained at the end of each time step t, unit head r receives information about the actual value base of compensation P_t^r(d_t) (Eq. 3). From this, manager r computes the actual relative change

ΔP_t^r = (P_t^r − P_{t−T^L}^r) / P_{t−T^L}^r

of the value base of compensation achieved within the last T^L periods. Hence, for unit head r the stimulus τ^r(t) = 1 if the aspiration level is achieved or exceeded (i.e., ΔP_t^r ≥ v^r), and τ^r(t) = −1 otherwise. The probabilities p^r(s^a, t) of options s^a ∈ S for unit r are updated according to the following rule [20], where λ (with 0 ≤ λ ≤ 1) reflects the reinforcement strength:

p^r(s^a, t + 1) = p^r(s^a, t) +
  λ · (1 − p^r(s^a, t))                                      if s^a = s_t^r ∧ τ(t) = 1
  −λ · p^r(s^a, t)                                           if s^a = s_t^r ∧ τ(t) = −1
  −λ · p^r(s^a, t)                                           if s^a ≠ s_t^r ∧ τ(t) = 1
  λ · p^r(s^a, t) · p^r(s_t^r, t) / (1 − p^r(s_t^r, t))      if s^a ≠ s_t^r ∧ τ(t) = −1    (6)

Based on the updated probabilities given in Eq. 6, the units' search strategy for periods t + 1 to t + T^L is determined.

2 Unit head r remembers the compensation resulting from the status quo d_{t−1}^{r∗} and, from this, can infer its actual performance P_t^r if the status quo is kept.
3 0 ≤ p^r(s^a, t) ≤ 1 and Σ_{s^a ∈ S} p^r(s^a, t) = 1 apply to the probabilities p^r(s^a, t).
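A direct transcription of the update rule in Eq. 6 reads as follows (a minimal sketch under our own naming; `lam` is the reinforcement strength λ and `stimulus` is τ ∈ {+1, −1}):

```python
def update_probs(probs, chosen, stimulus, lam):
    """Reinforcement-learning update of strategy-selection probabilities (Eq. 6)."""
    p_chosen = probs[chosen]
    updated = {}
    for s, p in probs.items():
        if s == chosen:
            updated[s] = p + (lam * (1 - p) if stimulus == 1 else -lam * p)
        else:
            updated[s] = p + (-lam * p if stimulus == 1 else lam * p * p_chosen / (1 - p_chosen))
    return updated

probs = {"exploit": 1 / 3, "ambi": 1 / 3, "explore": 1 / 3}
probs = update_probs(probs, chosen="explore", stimulus=+1, lam=0.5)
assert abs(sum(probs.values()) - 1.0) < 1e-9   # the update keeps probabilities summing to one
```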
3 Simulation Experiments and Parameter Settings This study’s research question boils down to which search behavior emerges for which incentives (as “pay-for-performance”) in a particular task environment. Following the idea of factorial design of simulation experiments [21], a 2 × 2 experimental design concerning task environment and incentive structure is employed: Concerning the incentive structure, the organizations may only reward departmental performance (α = 0), in which case they do not make use of cross-unit coordination via incentives. Alternatively, the organizations could make use of coordination by rewarding firm performance (α = 1). Regarding the task environment, the organizations have either a perfectly decomposable task structure (K ex = 0, Fig. 1a). For example, the overall task may be decomposable along geographical regions or products without any interdependencies across regions or products, respectively. Alternatively, the organizations may show a rather high level of cross-unit interactions (K ex = 5, Fig. 1b). Hence, each choice not only shows maximal internal interdependencies, but also interacts “externally” with more than half of the choices of the other decision-making units. This could be caused by certain constraints of resources (budgets or capacities), by market interactions (product pricing affecting each other),
Fig. 1 Decomposable and non-decomposable interaction structure

Table 1 Parameter settings
Observation period: T = 500
Simulation runs: Per scenario 2,500 runs, with 10 runs on each of 250 distinct landscapes
Number of choices: N = 12
Departments: M = 4, i.e., r = (1, . . . , 4) with d^1 = (d_1, d_2, d_3), d^2 = (d_4, d_5, d_6), d^3 = (d_7, d_8, d_9), d^4 = (d_10, d_11, d_12)
Interaction structures: Decomposable: K = 2, K_ex = 0 (Fig. 1a); Non-decomposable: K = 7, K_ex = 5 (Fig. 1b)
Incentive scheme: Departmental performance rewarded: α = 0; Firm performance rewarded: α = 1
Error of evaluation: Own performance: σ^{r,own} = 0.05 ∀ r; Residual performance: σ^{r,res} = 0.15 ∀ r
Search strategies: "exploitation": s_exploit = (1, 1); "ambidextrous": s_ambi = (1, 2); "exploration": s_explore = (2, 2) (see Sect. 2.3)
Interval of learning: T^L = 10
Aspiration level: v^r = 0 ∀ r
Learning strength: λ = 0.5
or functional interrelations (e.g., between product design and procurement) [22, with further references]. Some further parameter settings (see Table 1) deserve a comment. The organizations face an N = 12-dimensional decision problem decomposed into M = 4 equal-sized sub-problems and assigned to departments accordingly. The organizations search for 500 periods for superior performance and, in every T L = 10th
period,4 managers eventually switch their search strategies out of the given set S introduced in Sect. 2.3. The aspiration levels of zero capture the objective of avoiding the risk of not sustaining an already achieved performance level. The departments' error levels of 0.05 for "own" and 0.15 for "residual" performance reflect the idea of specialization (i.e., more precise forecasts for the own domain) and are in line with some empirical evidence according to which error levels around 10 percent may be a realistic estimation [23].
4 Results and Discussion

4.1 On the Analysis of Experiments

The very core of this paper is to study which overall search pattern of the organization emerges for the alternative incentive schemes in conjunction with task complexity at the end of the observation time. For this, it is helpful to group the simulation runs according to the prevailing "character of search strategy" that emerges from units' learning, taking the combinatorial variety into account: In particular, with four units and three types of search strategies, 3^4 = 81 combinations of search types could show up—each with an initial probability of 1/81 ≈ 1.23%, i.e., before the learning-based update of probabilities. For our analysis, merely the number of units adopting a particular strategy is relevant, while it is not relevant which particular units choose which search strategy (e.g., whether units 1 and 2 choose an exploitative and units 3 and 4 an explorative strategy or vice versa). Hence, the 81 possible combinations can be aggregated to 15 different clusters.5 However, these clusters differ in their initial probability. For example, in the cluster "C 2-1-1 (12)" two units out of four follow an exploitative strategy; one unit pursues an ambidextrous and another an explorative strategy. This is the case in 12 of the 81 possible combinations. Hence, "C 2-1-1 (12)" has an initial probability of 14.81%. In contrast, in cluster "C 0-0-4 (1)" all units employ explorative search, representing 1 of the 81 combinations possible. For a more condensed pattern of search at the organizational level, the 15 clusters are further grouped according to their predominating character of being more of an exploitative, ambidextrous, or explorative type (for details, see the horizontal axes in Fig. 2). Each of these aggregated clusters has the same initial probability of 33%. Their frequencies at the end of the learning processes are of particular interest. This is reported in Table 2. The table also informs about the respective final performances4
4 Pretests indicated that the results do not principally change for a longer observation period. The same holds for longer learning intervals (e.g., T^L = 20); however, shortening the learning period notably below ten periods does not leave the search strategies "enough time" to unfold their particular potential.
5 The term "cluster" is employed to indicate a group of simulation runs which show a similar combination of units' search strategies in the last observation period. The term is not used in the sense of a cluster analysis.
Fig. 2 Emerging (cluster of) search behavior for different incentive schemes and task environments. For parameter settings see Table 1

Table 2 Condensed results for scenarios of incentive schemes and interaction structures. Each row represents results of 2,500 simulation runs; within each cell the three values refer to the exploitative, ambidextrous, and explorative clusters of search strategy, respectively.

a. Departmental performance rewarded, decomposable structure (D/D): frequency of cluster of search strategy in T = 500: 36.5% / 30.0% / 33.5%; (weighted) final performance*: 0.9913 / 0.9922 / 0.9911; ratio of periods with alterations of d (false positive): 4.4% (1.9%) / 2.8% (1.1%) / 4.1% (1.8%)

b. Departmental performance rewarded, non-decomposable structure (D/ND): frequency: 38.1% / 29.4% / 32.5%; final performance*: 0.8546 / 0.8366 / 0.8317; alterations: 51.1% (25.8%) / 53.2% (26.7%) / 52.4% (26.5%)

c. Organizational performance rewarded, decomposable structure (O/D): frequency: 31.8% / 26.2% / 41.9%; final performance*: 0.8829 / 0.8740 / 0.8729; alterations: 77.8% (38.7%) / 88.2% (43.9%) / 76.4% (38.0%)

d. Organizational performance rewarded, non-decomposable structure (O/ND): frequency: 29.8% / 31.4% / 38.7%; final performance*: 0.8819 / 0.8878 / 0.8853; alterations: 10.1% (5.0%) / 10.1% (5.0%) / 10.4% (5.1%)
(Eq. 2) and the ratio of periods with (false positive) altered solutions of the organization’s decision problem d. Figure 2 displays the final frequencies of clusters (bar chart) and normalized to initial probabilities (overlaying line chart).
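The cluster counts stated in Sect. 4.1 can be checked with a few lines of Python; this is an illustrative sketch of ours, not part of the study's simulation code:

```python
from collections import Counter
from itertools import product

# Four units, three strategy types: 3**4 = 81 combinations collapse to 15
# clusters when only the number of units per strategy type matters.
combos = list(product(["exploit", "ambi", "explore"], repeat=4))
clusters = Counter(tuple(sorted(Counter(c).items())) for c in combos)
assert len(combos) == 81 and len(clusters) == 15

# Cluster "C 2-1-1": two exploitative, one ambidextrous, one explorative unit.
key = tuple(sorted({"exploit": 2, "ambi": 1, "explore": 1}.items()))
print(clusters[key], clusters[key] / len(combos))   # 12 combinations, i.e. about 14.81%
```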
4.2 Overview of Results According to the 2 × 2 design, four combinations of reward structure (departmental vs. organizational performance rewarded) and task environment (decomposable vs. non-decomposable) are studied—in the following named briefly “D/D”, “D/ND”, “O/D” and “O/ND”. Table 2 and Fig. 2 report on the results. The performance levels obtained are in line with prior research [3–5]6 : Correspondence of the reward structure with intra-firm interactions is favorable, i.e., rewarding departmental (organizational) performance when inter-departmental complexity is low (high). As Table 2 reports, the performance levels obtained with (a) D/D go beyond the levels of (c) O/D, and (d) O/ND leads to higher performance levels than (b) D/ND. For the search behavior, rewarding departmental performance (D/D and D/ND) leads to a slight predominance of exploitation. In contrast, exploration prevails when organizational performance is remunerated (O/D and O/ND). Moreover, Table 2 shows that when task complexity and reward structure correspond to each other (i.e., D/D and O/ND), those clusters of search prevail, which induce the highest level of alterations. In contrast, for the “not-corresponding” scenarios (D/ND and O/D), the clusters inducing lowest levels of alterations predominate.
4.3 Discussion of Scenarios For a closer analysis, recalling the two drivers of units’ behavior captured in the model is helpful. These are (1) avoiding a decline in compensation as captured in the aspiration level for learning about search strategy, and (2) maximizing compensation in decision-making on dr where the search strategy shapes the options to choose from. Abandoning the status quo in favor of only slightly differing options (exploitation) may slow down performance increases. However, the peril that interactions eventually may heavily affect performance is smaller than with longer jumps (exploration), which, however, may provide the chance of higher gains. The combinations of the task environment, incentives, and search strategy affect these motives differently, as discussed in the following. We start with the two scenarios D/D and O/ND, i.e., those where the task environment’s complexity and the incentive structure correspond (“fit”). In the D/D scenario, the incentive structure induces that the unit heads only focus on their parochial perfor6
An extensive analysis including average number and performance of local peaks of numerous interaction structures based on the NK framework is provided in [22].
mance. This corresponds to the decomposable interaction structure since—without interactions among units—other units’ choices do not interfere. Hence, a search strategy that slowly increases performance but with a low risk of performance losses may best suit the motives of unit heads’ behavior so that exploitative clusters slightly prevail (see Fig. 2a). In the O/ND scenario, cross-unit interactions occur and, from a unit head’s perspective, the incentive scheme correspondingly allows for substituting departmental with residual performance: In a manager’s compensation, a higher performance achieved in the rest of the organization may countervail a lower departmental performance and vice versa (see Eq. 3). Hence, a unit head finds that the trade-off incorporated in explorative search between the peril of compensation declines and the chance of performance gains shifts in favor of the latter. This is reflected in the prevalence of explorative behavior (Fig. 2d). In the other two scenarios D/ND and O/D, the task environment and incentive scheme do not “correspond” to each other (“misfit”). Here those search strategies emerge most often, which induce the lowest level of alterations of the organization’s overall configuration d (Table 2): In particular, in the D/ND scenario, the incentive scheme induces myopia, where—from the organizational and a unit manager’s perspective—a broad perspective is required: Hence, when evaluating an alternative, manager r does not consider its effects on the rest of the organization and only seeks to increase the own performance P r,own (see Eq. 3). Due to interactions, whenever manager r opts for an alternative dr this likely also affects the “own” performance P q,own of each fellow manager q = r . In the next period, each manager q likely will adjust her/his dq to manager r ’s choice; this will “backfire” the partial performance of manager r who will then adjust and so forth. Hence, each myopic movement may induce fellows managers’ reactions and, thus, frequent mutual adjustments occur. The managers’ selection of search behavior shows some asymmetry regarding positive and negative effects: those strategies prone to performance losses—though accompanied by the chance of performance gains—become less and less likely to be selected. Search strategies that induce small steps and, hence, less “backfiring” lead to less negative experiences on the units’ side and prevail, which broadly corresponds to prior research [24]. Accordingly, in Fig. 2b clusters with exploitative search show high frequencies normalized to cluster-size. In the O/D scenario, the incentive scheme induces that unit heads consider organizational performance in decision-making. However, since there are no cross-unit interactions, this is not only superfluous. Even worse, due to managers’ imperfect understanding of the overall decision problem, managers may consider “phantom” interactions. Hence, motivated by the incentive system, they seek to affect overall performance with only local means while wrongly perceiving they can affect overall performance via (counterfactual) cross-unit interactions to which fellow managers mutually react. This results in a high ratio of alterations of the overall configuration dr (see Table 2). 
Due to units’ imperfect understanding of the overall decision problem, changes of the status quo are relatively likely to lead astray from the local maxima, which, in the decomposable structure, in sum constitute the overall maximum of performance that in the O/D scenario is rewarded. A search strategy that is most likely
to end up with no worthwhile changes of the status quo and, hence, “preventing” alterations causes less negative experiences of units managers. As argued above, this is the case with longer jumps (exploration). Hence, explorative search emerges more often as it is more likely to prevent that the status quo is abandoned.
5 Conclusion

At the very core of the research effort presented in this paper is the study of the overall patterns of search emerging for different incentive schemes in conjunction with task complexity. The results of the simulation experiments suggest that search behavior emerges for different "reasons", i.e., for fostering, for moderating, or even for preventing that the status quo is abandoned in favor of new solutions—subtly shaped by the (mis-)fit of incentive scheme and task environment. The findings may shed some new light on search behavior in organizations. They stress, first, the idea of structure-environment fit as emphasized in the contingency theory of organizations. Second, the findings suggest that, in situations of "misfit", novelty-affine search behavior may emerge in order to reduce alterations, which runs contrary to what intuition may suggest. However, the study presented in this paper calls for further research efforts. As such, a further step is the model's validation. For this, first, the model could be further developed to more closely match studies of incentive systems and innovation—as partially referred to in the Introduction. This could include introducing costs of search, other modes of learning and coordination, and further contingencies (e.g., turbulence). Second, (behavioral) laboratory experiments could be employed for validating the model. In particular, there is an emerging stream of experimental research building on the concept of fitness landscapes and, thus, capturing interactions. Transferring the 2 × 2 design of this simulation study into the treatments of a corresponding laboratory experiment could be a promising endeavor toward the model's validation.
References 1. Lawrence, P.R., Lorsch, J.W.: Differentiation and integration in complex organizations. Admin. Sci. Q. 12(1), 1–47 (1967) 2. March, J.G.: Exploration and exploitation in organizational learning. Organ. Sci. 2(1), 71–87 (1991) 3. Bushman, R.M., Indjejikian, R.J., Smith, A.: Aggregate performance measures in business unit manager compensation: The role of intrafirm interdependencies. J. Account. Res. 33(Sup.), 101–129 (1995) 4. Siggelkow, N., Rivkin, J.W.: Speed and search: Designing organizations for turbulence and complexity. Organ. Sci. 16, 101–122 (2005) 5. Wall, F.: Learning to incentivize in different modes of coordination. Adv. Complex Syst. 20(2– 3), 1–29 (2017) 6. Manso, G.: Motivating innovation. J. Financ. 66, 1823–1860 (2011)
7. Lerner, J., Wulf, J.: Innovation and incentives: evidence from corporate R&D. Rev. Econ. Stat. 89, 634–644 (2007) 8. Davila, A., Foster, G., Li, M.: Reasons for management control systems adoption: Insights from product development systems choice by early-stage entrepreneurial companies. Account. Org. Soc. 34, 322–347 (2009) 9. Chenhall, R.H.: Developing an organizational perspective to management accounting. J. Manag. Account. Res. 24, 65–76 (2012) 10. Holmström, B.: Moral hazard and observability. Bell J. Econ. 10, 74–91 (1979) 11. Leitner, S., Wall, F.: Decision-facilitating information in hidden-action setups: an agent-based approach. J. Econ. Interact. Coor. 16, 323–358 (2021) 12. Chen, S.-H.: Varieties of agents in agent-based computational economics: a historical and an interdisciplinary perspective. J. Econ. Dyn. Control 36, 1–25 (2012) 13. Kauffman, S.A., Levin, S.: Towards a general theory of adaptive walks on rugged landscapes. J. Theor. Biol. 128, 11–45 (1987) 14. Kauffman, S.A.: The Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press, Oxford (1993) 15. Baumann, O., Schmidt, J., Stieglitz, N.: Effective search in rugged performance landscapes: a review and outlook. J. Manag. 45, 285–318 (2019) 16. Wall, F.: Agent-based modeling in managerial science: an illustrative survey and study. Rev. Manag. Sci. 10, 135–193 (2016) 17. Simon, H.A.: A behavioral model of rational choice. Q. J. Econ. 69, 99–118 (1955) 18. Simon, H.A.: Theories of decision-making in economics and behavioral science. Am. Econ. Rev. 49, 253–283 (1959) 19. Levitan, B., Kauffman, S.A.: Adaptive walks with noisy fitness measurements. Mol. Divers. 1, 53–68 (1995) 20. Brenner, T.: Agent learning representation: advice on modelling economic learning. In: Tesfatsion, L., Judd, K.L. (eds.) Handbook of Computational Economics, vol. 2, pp. 895–947. Elsevier, Amsterdam (2006) 21. Lorscheid, I., Heine, B.-O., Meyer, M.: Opening the ‘black box’ of simulations: increased transparency and effective communication through the systematic design of experiments. Comput. Math. Organ. Th. 18, 22–62 (2012) 22. Rivkin, J.W., Siggelkow, N.: Patterned interactions in complex systems: implications for exploration. Manag. Sci. 53, 1068–1085 (2007) 23. Redman, T.C.: The impact of poor data quality on the typical enterprise. Commun. ACM 41, 79–82 (1998) 24. Siggelkow, N., Rivkin, J.W.: When exploration backfires: unintended consequences of multilevel organizational search. Acad. Manag. J. 49, 779–795 (2006)
Effects of Limited and Heterogeneous Memory in Hidden-Action Situations

Patrick Reinwald, Stephan Leitner, and Friederike Wall
Abstract Limited memory of decision-makers is often neglected in economic models, although it is reasonable to assume that it significantly influences the models’ outcomes. The hidden-action model introduced by Holmström also includes this assumption. In delegation relationships between a principal and an agent, this model provides the optimal sharing rule for the outcome that optimizes both parties’ utilities. This paper introduces an agent-based model of the hidden-action problem that includes limitations in the cognitive capacity of contracting parties. Our analysis mainly focuses on the sensitivity of the principal’s and the agent’s utilities to the relaxed assumptions. The results indicate that the agent’s utility drops with limitations in the principal’s cognitive capacity. Also, we find that the agent’s cognitive capacity limitations affect neither his nor the principal’s utility. Thus, the agent bears all adverse effects resulting from limitations in cognitive capacity. Keywords Agentization · Principal-agent theory · Limited intelligence · Agent-based modeling and simulation · Mechanism design
1 Introduction

Over the past decades, the mathematical models and methods used in Economics have developed steadily, which has also led to increasingly intelligent agents and, therefore, to many different implementations of intelligence. Closely related to this large variety of concepts to express intelligence is the concept of (individual) rationality [10, 23]. This concept is concerned with the behavioural aspects of agents.
It captures how agents should ideally behave to achieve their objectives, taking into consideration certain (and case-specific) constraints and conditions [10, 22]. An important component of rational decision making is that agents can deal with the complexity of decision problems. This, amongst others, presupposes that agents know how to deal with risk and uncertainty, have access to all relevant information, can identify information as being relevant, and can reason based on the available information. The cognitive capacity required to do so can, at least to some extent, be measured and is usually referred to as general intelligence or intelligence quotient (IQ) [8, 17, 23, 24]. It is well known that human cognitive abilities are limited, which is also reflected in certain economic models. Taking limited cognitive capacities into account requires a shift away from the assumptions related to perfectly rational decision makers, which opens the field for various concepts of bounded rationality. However, capturing more natural cognitive abilities in models often requires a change in the modeling (or, more generally, research) approach, which frequently results in a move from traditional mathematical methods to simulation-based approaches. This is particularly required since research problems that include decision makers with bounded rationality, due to their innate complexity, usually cannot be solved using mathematical methods (i.e., optimization), but often require simulative approaches [3, 4, 7, 11, 14, 23]. We take up this stream of research and limit the cognitive abilities of the principal and the agent in the hidden-action model introduced by Holmström [12] by bounding their cognitive capacity (memory).1 In the vein of Leitner and Wall [15], we transfer the hidden-action model into an agent-based model variant using the approach introduced by Guerrero and Axtell [11], and Leitner and Behrens [14], which enables us to relax selected assumptions to implement the more natural concept of human cognitive ability. The remainder of this paper is organized as follows: Sect. 2 elaborates on the hidden-action model originally introduced in Holmström [12]. In Sect. 3, we present the agent-based model variant, where we also explain the adjustments to the assumptions necessitated by the changes. Section 4 explains the simulation setup, and introduces and discusses the results.
2 The Hidden-Action Model

The hidden-action model, which was introduced by, amongst others, Holmström [12], is a single-period model that describes a situation in which a principal assigns a task to an agent. For this:
1 For further information on the hidden-action model and the principal-agent theory, please see [9, 13, 21].
• The principal designs a contract upon the task to carry out, also including an incentive scheme, i.e., a sharing rule that fixes how the task’s outcome will be shared between the principal and the agent. • The principal offers the contract to the agent. • The agent decides whether to accept the contract or not. If the agent agrees on the conditions stated in the contract, he decides how much effort (also referred to as action) he wants to make in fulfilling the specified task. • The agent’s chosen effort level generates, together with an exogenous factor (also referred to as environmental factor), the outcome of the task. • Carrying out the task leads to disutility for the agent, depending on his effort level. • The principal is not able to observe the action of the agent, so it is hidden. This results in the situation that only the outcome serves as basis for the incentive scheme. • The model assumes that the principal is risk-neutral and the agent is risk-averse. Both are individual utility maximizers. The utility function U P (x, s) = x − s(x) characterizes the principal, whereby x denotes the generated outcome, and s = s(x) is the function of the sharing rule. The outcome is calculated as x = f (a, θ ), which is a function of the agent’s chosen effort level a and the exogenous factor θ . The agent is characterized by the utility function U A (s, a) = V (s(x)) − G(a), where V (s) represents the utility generated from compensation and G(a) is the function for the disutility generated from exerting effort. The optimization problem to generate the optimal solution is formulated as follows: max E(U P (x − s(x)))
(1)

s.t. E{U_A(s(x), a)} ≥ Ū    (2)

a ∈ arg max_{a′ ∈ A} E{U_A(s(x), a′)} ,    (3)

with the maximization in Eq. (1) taken over the sharing rule s(x) and the action a,
where Eqs. (2) and (3) are constraints that have to be considered by the principal. The notation “arg max” represents the set of all arguments that maximizes the objective function that follows. Equation 2 is referred to as the participation constraint, which ensures that the agent accepts the offered contract by ensuring him the minimum utility U¯ . This minimum utility is also referred to as reservation utility and represents the agent’s best outside option. Equation 3 is known as the incentive compatibility constraint and aligns the agent’s objective (maximize his utility) with the principal’s objective. This constraint affects the agent’s choice of effort level a [5, 9, 13, 15, 20]. In Table 1, the notation used in this section is summarized.
Table 1 Notation for the hidden-action model
Principal's utility function: U_P(x − s(x))
Agent's utility function: U_A(s(x), a)
Agent's utility from compensation: V(s(x))
Agent's disutility from effort: G(a)
Agent's reservation utility: Ū
Agent's share of outcome: s(x) = x ∗ p
Outcome: x = x(a, θ)
Premium parameter: p
Effort level: a
Set of all feasible actions: A
Random state of nature: θ
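To make the role of the two constraints concrete, the following toy sketch checks participation (Eq. 2) and incentive compatibility (Eq. 3) for a candidate linear sharing rule s(x) = p · x. The functional forms and numbers are our own assumptions (borrowing, for concreteness, the CARA utility that the agent-based variant in Sect. 3 adopts); they are not part of the original model:

```python
import math

def agent_utility(share, effort, eta=0.5):
    """CARA utility from compensation minus quadratic disutility of effort (assumed forms)."""
    return (1 - math.exp(-eta * share)) / eta - effort ** 2 / 2

def best_response(p, actions, theta_expected=0.0):
    """Incentive compatibility (Eq. 3): the agent picks the effort maximizing his utility."""
    return max(actions, key=lambda a: agent_utility(p * (a + theta_expected), a))

actions = [i / 100 for i in range(0, 201)]                # a grid of feasible effort levels
p, reservation_utility = 0.4, 0.0
a_star = best_response(p, actions)
accepts = agent_utility(p * a_star, a_star) >= reservation_utility   # participation (Eq. 2)
print(a_star, accepts)
```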
3 Agent-Based Model Variant The hidden-action model has a set of (rather restrictive) assumptions incorporated in order to make it mathematically tractable [4]. Overviews of the most important assumptions are provided by Axtell [4] and Müller [19]. For this paper, we relax the assumptions of information symmetry regarding the environmental factor as follows: • The principal and the agent no longer can access all information about the exogenous factor at the beginning of the sequence of events. They only know that it follows a Normal Distribution. • The principal and the agent are endowed with learning capabilities so that they are both able to individually learn about the exogenous factor over time. We refer to this learning model as simultaneous and sequential learning. • The principal and the agent have a defined memory to individually store their gathered information about the exogenous factor. As a consequence of these adaptions, the principal and the agent are no longer able to find the optimal solution immediately, which in exchange enables the introduction of a learning mechanism. Furthermore, the adaptation of the assumptions and the resulting change in the research methodology leads to the switch from a single-period model to a multi-period model. The timesteps of the multi-period model are indicated by t = 1, . . . , T . An overview of the sequence of events is provided in Fig. 1, and the notation of the model is summarized in Table 2. The risk-neutral principal is characterized by the utility function U P (xt , s(xt )) = xt − s(xt ),
(4)
where xt = at + θt represents the outcome and s(xt ) = xt ∗ pt denotes the agent’s compensation in timestep t, at is the agent’s action, θt stands for normally distributed
Fig. 1 Flow diagram
exogenous factor (θ_t ∼ N(μ, σ)), and p_t ∈ [0, 1] stands for the premium parameter of the sharing rule which is used to compute the agent's compensation in t. The risk-averse agent is characterized by the utility function

U_A(s(x_t), a_t) = (1 − e^{−η·s(x_t)}) / η − a_t^2 / 2 ,    (5)
where η represents the agent’s Arrow-Pratt measure of risk-aversion [2]. As in Holmström’s hidden-action model, the agent’s utility function composes of two components, V (s(xt )) represents the utility generated from compensation s(xt ) in timestep t and G(at ) is the function for the disutility from generating effort at in timestep t. The relaxation of the assumption mentioned above requires to set boundaries for the space of effort-levels A for every t, so that the existence of solutions is assured. This is necessary since the set of all feasible solutions, At (At ⊂ A), strongly depends on the expectation of the environment and, therefore, can change for every timestep t. According to Holmström [12], we identify the lower boundary by the participation constraint and the upper boundary by the incentive compatibility constraint. It is important to notice that both boundaries include an expectation about the exogenous factor θ (see Eqs. 6 and 8) and, therefore, the value of the boundaries might change in every t. In every timestep t = 2, . . . , T , the principal randomly discovers 2 alternative effort-levels in the search space At , which together with the action incited in the previous period, a˜ t , serve as candidates for the action in the next period, a˜ t+1 , and
Table 2 Notation for the agent-based model variant
Timesteps: t
Principal's utility: U_P
Agent's utility: U_A
Agent's Arrow-Pratt measure of risk-aversion: η
Agent's share of outcome in t: s(x_t) = x_t ∗ p_t
Outcome in t: x_t = a_t + θ_t
Principal's expected outcome: x̃_Pt
Premium parameter in t: p_t
Agent's chosen effort level in t: a_t
Induced effort level by the principal in t: ã_t
Set of all actions: A
Set of all feasible actions in t: A_t
Exogenous (environment) variable in t: θ_t
Principal's estimation of the realized exogenous factor in t: θ̃_t
Principal's memory in periods: m_P
Agent's memory in periods: m_A
Averaged expected exogenous factor of the principal: θ̂_Pt
Averaged expected exogenous factor of the agent: θ̂_At
evaluates them with respect to increases in expected utility (based on the utility function in Eq. 4). The discovered effort-levels are modeled to be uniformly distributed in the search space. Please notice that the evaluation of the effort-levels includes the principal's expectation about the exogenous factor in t. The principal, as well as the agent, is able to remember previous observations or estimations (cf. Eqs. 6 and 8). The retrievable values (defined by the memory m_P and m_A for the principal and the agent, respectively) are subsequently used to compute the expectation of the exogenous factor. For the principal, this expectation is computed as follows:
θ̂_Pt = (1/(t − 1)) · Σ_{n=1}^{t−1} θ̃_n   if m_P = ∞ ,
θ̂_Pt = (1/m_P) · Σ_{n=n_0}^{t−1} θ̃_n   if m_P < ∞ ,    (6)

where the lower summation index is n_0 = 1 for t ≤ m_P and n_0 = t − m_P for t > m_P.
The expected outcome (using the value-maximizing effort level from the principal's point of view, ã_t) in t can thus be formalized by x̃_Pt = ã_t + θ̂_Pt. Next, the principal computes the corresponding premium parameter according to

p_t = max_{p ∈ [0,1]} U_P(x̃_Pt, s(x̃_Pt)) .    (7)
The principal offers the contract (which, amongst others, includes pt ) to the agent who decides whether to accept the contract or not. In case the agent accepts the contract, he selects an effort level at = maxa∈At U A (s(xt ), a), where At represents the space of all feasible effort-levels. As the agent is able to observe the realization of exogenous factors which represents environmental uncertainty, the expectation of the exogenous factor in t from the agent’s point of view is computed as follows:
θ̂_At = (1/(t − 1)) · Σ_{n=1}^{t−1} θ_n   if m_A = ∞ ,
θ̂_At = (1/m_A) · Σ_{n=n_0}^{t−1} θ_n   if m_A < ∞ ,    (8)

where, analogously to Eq. 6, the lower summation index is n_0 = 1 for t ≤ m_A and n_0 = t − m_A for t > m_A.
Next, the exogenous factor, θt , and the outcome xt , realize. The agent can observe xt and θt and memorizes θt . The principal can only observe xt , estimates the exogenous factor according to θ˜t = xt − a˜ t , (9) and memorizes θ˜t .2 Finally, the utilities for the principal and the agent realize, and a˜ t is carried over to period t + 1 as status-quo effort level. This sequence is repeated T times.
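The sequence above lends itself to a compact sketch of the limited-memory expectation in Eqs. 6 and 8; this is an illustrative implementation under our own naming, and for simplicity the early periods t ≤ m are handled by averaging the values available so far:

```python
def expected_theta(history, m):
    """Average of the stored values of the exogenous factor, restricted to the
    last m entries if memory is finite (principal: estimates, agent: observations)."""
    if not history:
        return 0.0                       # assumption: prior expectation equals mu = 0
    window = history if m == float("inf") else history[-m:]
    return sum(window) / len(window)

# A principal with memory m_P = 3 only uses the three most recent estimates:
estimates = [0.20, -0.10, 0.40, 0.05]
print(expected_theta(estimates, m=3))             # mean of -0.10, 0.40, 0.05
print(expected_theta(estimates, m=float("inf")))  # mean of all four values
```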
4 Results

4.1 Parameterization

For this paper, the analysis puts special emphasis on the effects of the principal's and the agent's memory (cognitive capacity) and of the distribution of the environment on the utility of both the principal and the agent. All other parameters are kept constant during the simulation runs. The parameterization is summarized in Table 3. The parameterization allows distinguishing two levels of cognitive capacity by the variable m_P for the principal, and m_A for the agent. They have either limited cognitive capacity with a memory of 3 periods or unlimited cognitive capacity with a memory of T − 1 periods, whereby all four combinations of cognitive capacity are simulated. The environment in which an organization operates can be characterized by σ, whereby an organization can either act in a relatively stable (σ = 0.05x∗),
2
Please notice that as long as the principal and the agent have the same expectation regarding the environment (for situations where m A = m P ), a˜ t and at perfectly coincide, which means that for both, only one piece of information is unavailable, and, thus, the principal can estimate the realization of the exogenous factor without error.
Table 3 Key parameters
Principal's memory (m_P): {3, ∞}
Agent's memory (m_A): {3, ∞}
Exogenous factor: standard deviation (σ): {0.05x∗, 0.25x∗, 0.45x∗}
Exogenous factor: mean (μ): 0
Agent's Arrow-Pratt measure (η): 0.5
moderately stable (σ = 0.25x ∗ ), or unstable (σ = 0.45x ∗ ) environment. The characterization of both parties (the principal and the agent) and organizations leads to a total number of 12 scenarios.
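The resulting experimental design can be enumerated in a few lines (an illustrative sketch; the labels are ours):

```python
from itertools import product

memory_levels = [3, float("inf")]                    # limited vs. unlimited memory
sigma_levels = ["0.05x*", "0.25x*", "0.45x*"]        # stable, moderately stable, unstable
scenarios = list(product(memory_levels, memory_levels, sigma_levels))
assert len(scenarios) == 12                          # 2 x 2 memory combinations x 3 environments
```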
4.2 Simulations and Reported Performance Measure In total, there are 12 investigated scenarios (as described in Sect. 4.1). For each scenario we simulated R = 700 paths3 and in each path we simulated T = 20 timesteps.4 In each timestep the principal is able to adapt the parameterization of the incentive scheme, and the agent is able to change the effort he makes.
4.3 Results and Discussion

The results for the agent's utility are shown in Fig. 2, while Fig. 3 displays the results for the principal's utility. Each figure consists of four subplots, representing different combinations of memory (cognitive capacity). In every subplot three lines are plotted, each representing a different environmental uncertainty (represented through different standard deviations σ = {0.05x∗, 0.25x∗, 0.45x∗}). For the agent's utility we can see in all scenarios that a higher extent of environmental uncertainty significantly decreases his utility. This may be a result of the overall lower outcome of the task, which was already shown in Reinwald et al. [20], and is in line with the existing literature on environmental uncertainty [6, 18] and environmental dynamism [1]. These papers suggest that more uncertain and dynamic environments increase the difficulty of organizational decision-making and, consequently, are a significant determinant of organizational performance.
3
This number of paths appears to be appropriate as a relatively stable coefficient of variation can be observed (cf. [16]). 4 The simulation is limited to 20 timesteps since we observed that the dynamics of the model do not change afterwards.
Fig. 2 Agent’s utility for all scenarios. Shaded areas represent confidence intervals at the 99%-level
By comparing the different subplots in Fig. 2, we can see that the cognitive capacity of the principal (memory m P ) has a significant influence on the agent’s utility in a way that an increase in the principal’s memory also leads to an increase in utility for the agent for all levels of environmental uncertainty. This leads to the suggestion that an increase in cognitive capacity of the principal reduces the uncertainty of the environment for her, and, therefore, results in a better contract, which increases the agent’s utility. Regarding the agent’s cognitive capacity (memory m A ), no effects can be observed. For the principal’s utility, we can not see any significant differences, neither by comparing subplots (different cognitive capacities) nor by comparing the plotted lines within a subplot (different environmental situations). This leads to the conclusion that neither the environmental uncertainty nor changes in the cognitive capacity influence the principal’s utility. When compared to the results presented in Reinwald et al. [20] regarding the task’s performance, where we can see that environmental uncertainty has a negative effect on performance and the principal’s cognitive capacity has a positive effect, two conclusions arise. The first is, that the risk of environmental uncertainty is solely carried by the agent, and the second is, that the principal has no intention to improve her cognitive capacity.
Fig. 3 Principal’s utility for all scenarios. Shaded areas represent confidence intervals at the 99%level
5 Conclusion

We transferred the hidden-action model of Holmström into an agent-based variant, and we relaxed the assumption of information symmetry about the environmental factor. We find that the agent's utility is highly affected by both the level of environmental uncertainty and the cognitive capacity of the principal. In contrast, the agent's cognitive capacity, in terms of memory, has no effect on either the agent's or the principal's utility. Further, the utility of the principal was not affected by any change of the scenarios.

Our research has certain limitations which call for further research efforts. First, some assumptions incorporated in the hidden-action model are carried over. These assumptions cover, among others, the principal's knowledge of the agent's characteristics, the individual utility-maximizing behaviour, and the ability to perform all actions without errors or biases. Second, there is only one kind of contract (incentive scheme) available. Third, there are no learning-curve effects, although the agent repeatedly carries out the same task. Fourth, some design decisions of the model are made by the modeler (e.g. the learning mechanism used). Further research might investigate the effects shown in this paper in more depth and additionally relax further assumptions incorporated in the hidden-action model.
Acknowledgements This work was supported by funds of the Oesterreichische Nationalbank (Austrian Central Bank, Anniversary Fund, project number: 17930).
References

1. Aldrich, H.: Organizations and Environments. Stanford University Press (2008)
2. Arrow, K.J.: The role of securities in the optimal allocation of risk-bearing. In: Readings in Welfare Economics, pp. 258–263. Springer, Berlin (1973)
3. Arthur, W.B.: Designing economic agents that act like human agents: a behavioral approach to bounded rationality. Am. Econ. Rev. 81(2), 353–359 (1991)
4. Axtell, R.L.: What economic agents do: how cognition and interaction lead to emergence and complexity. Rev. Austrian Econ. 20(2–3), 105–122 (2007)
5. Caillaud, B., Hermalin, B.E.: Hidden Action and Incentives (2000)
6. Chen, J., Reilly, R.R., Lynn, G.S.: The impacts of speed-to-market on new product success: the moderating effects of uncertainty. IEEE Trans. Eng. Manag. 52(2), 199–212 (2005)
7. Chen, S.H.: Varieties of agents in agent-based computational economics: a historical and an interdisciplinary perspective. J. Econ. Dyn. Control 36(1), 1–25 (2012)
8. Chen, S.H., Du, Y.R., Yang, L.X.: Cognitive capacity and cognitive hierarchy: a study based on beauty contest experiments. J. Econ. Interact. Coord. 9(1), 69–105 (2014)
9. Eisenhardt, K.M.: Agency theory: an assessment and review. Acad. Manag. Rev. 14(1), 57–74 (1989)
10. Godelier, M.: Rationality and irrationality in economics. Verso Trade (2013)
11. Guerrero, O.A., Axtell, R.L.: Using agentization for exploring firm and labor dynamics. In: Osinga, S., Hofstede, G.J., Verwaart, T. (eds.) Emergent Results of Artificial Economics, pp. 139–150. Springer, Berlin (2011)
12. Holmstrom, B.: Moral hazard and observability. Bell J. Econ. 10(1), 74–91 (1979)
13. Lambert, R.A.: Contracting theory and accounting. J. Account. Econ. 32(1–3), 3–87 (2001)
14. Leitner, S., Behrens, D.A.: On the efficiency of hurdle rate-based coordination mechanisms. Math. Comput. Model. Dyn. Syst. 21(5), 413–431 (2015)
15. Leitner, S., Wall, F.: Decision-facilitating information in hidden-action setups: an agent-based approach. J. Econ. Interact. Coord. 1–36 (2020)
16. Lorscheid, I., Heine, B.O., Meyer, M.: Opening the 'black box' of simulations: increased transparency and effective communication through the systematic design of experiments. Comput. Math. Organ. Theory 18(1), 22–62 (2012)
17. McGrew, K.S.: CHC theory and the human cognitive abilities project: standing on the shoulders of the giants of psychometric intelligence research. Intelligence 37(1), 1–10 (2009)
18. Milliken, F.J.: Three types of perceived uncertainty about the environment: state, effect, and response uncertainty. Acad. Manag. Rev. 12(1), 133–143 (1987)
19. Mueller, C.: Agency-Theorie und Informationsgehalt. Der Beitrag des normativen Prinzipal-Agenten-Ansatzes zum Erkenntnisfortschritt der Betriebswirtschaftslehre. Die Betriebswirtschaft 55, 61–76 (1995)
20. Reinwald, P., Leitner, S., Wall, F.: An agent-based model of delegation relationships with hidden-action: on the effects of heterogeneous memory on performance. In: SIMUL 2020: The Twelfth International Conference on Advances in System Simulation, pp. 42–47 (2020)
21. Shapiro, S.P.: Agency theory. Annu. Rev. Sociol. 31, 263–284 (2005)
22. Simon, H.A.: Theories of bounded rationality. Decis. Organ. 1(1), 161–176 (1972)
23. Thaler, R.H.: From homo economicus to homo sapiens. J. Econ. Perspect. 14(1), 133–141 (2000)
24. Wall, F., Leitner, S.: Agent-based computational economics in management accounting research: opportunities and difficulties. J. Manag. Account. Res. (2020)
Autonomous Group Formation of Heterogeneous Agents in Complex Task Environments

Darío Blanco-Fernández, Stephan Leitner, and Alexandra Rausch
Abstract Individuals cannot solve complex tasks by themselves due to their limited capabilities. By self-organizing into groups, individuals with different capabilities can overcome their limitations. Individuals and groups often change over time: The individuals that form the group learn new ways to solve the task, while groups adapt their composition in response to the current needs of the task. The latter is driven by the differing characteristics of the individuals, as some of them might be better adapted at a particular point in time but do not participate in the group. By self-organizing, groups absorb these individuals into their ranks, so that they have the best-adapted members. However, there is a lack of consensus on whether changing a group's composition over time is beneficial or detrimental to task performance. Moreover, previous research has often assumed that agents are homogeneous. We implement an adaptation of the NK-framework using agents with heterogeneous capabilities, which includes an individual learning mechanism and a second-price auction mechanism for group self-organization. Heterogeneity in the agents' capabilities ensures that groups have an incentive to change their composition over time. Our results suggest that group self-organization can improve task performance, depending on task complexity and on how prominent individual learning is.

Keywords Heterogeneous agents · Group self-organization · Complex task
D. Blanco-Fernández (B) · S. Leitner · A. Rausch University of Klagenfurt, 9020 Klagenfurt, Austria e-mail: [email protected] S. Leitner e-mail: [email protected] A. Rausch e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. Czupryna and B. Kami´nski (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_11
1 Introduction

Complex tasks pose a substantial challenge to individuals, as it is doubtful that a single human can solve an entire complex task alone. This is because individuals usually have limited capabilities with respect to this kind of task [4]. A complex task is, for example, running a farming cooperative: Coordination and cooperation between various members to overcome individual limitations (such as financial constraints, the lack of expertise in different areas, or time limitations) lie at the core of cooperatives [7]. Individuals can overcome the challenges that arise from limited capabilities by self-organizing into groups [17]. Following the example, a farming cooperative can be described as a self-organized collective of different kinds of farmers, such as dairy farmers, cattle ranchers, and wheat farmers. Each of the farmers' areas of expertise corresponds to a different subtask that, together, form the complex task itself [5] (i.e., running an agricultural cooperative). Furthermore, these multiple subtasks are often interdependent [5].

Self-organized groups are subject to a process of adaptation to the complex task they face. In particular, successful adaptation has been commonly associated with solving complex tasks in a more efficient manner [3]. Adaptation has often been described as a multi-level process by which both the group and the individuals that form the group simultaneously adapt their capabilities to the complex task [3, 14]. Self-organized groups often collectively adapt to the complex task by changing their composition over time [2]. Furthermore, the individuals that form the group individually adapt by learning about the task and changing their knowledge as a result [2]. Following the example of the farming cooperative, the group of farmers would adapt by replacing some (or all) of the members over time; and, at the same time, the different farmers go through a process of learning (e.g., better farming techniques, more efficient allocation of resources). Given how relevant the adaptation process is for task performance [3, 14], it is crucial to understand how adaptation has been modeled in previous research on complex task solving, and what this means for research in the field.

Previous literature has implemented adaptation as a process of sequential search for new solutions to a complex task. Concerning individual adaptation, agents in these approaches usually adapt by discovering solutions to the complex task, or to a part of it, over time [5, 10, 20]. A similar learning mechanism has often been implemented for collective adaptation, by which an entire group discovers solutions to the complex task over time with an exogenous probability [11, 12]. In Managerial Science, these approaches usually assume that agents are representative, i.e., that the agents of the model are homogeneous in their capabilities [1]. This assumption allows researchers to model groups merely as the aggregation of individual agents that behave in the same way. This, in turn, implies that group behavior just mirrors individual behavior [1]. Approaches in which group adaptation is understood as a process that mirrors individual learning, however, usually do not consider the effects of individual-level phenomena at the collective level [1]. Moreover, groups have no incentive to look for new members and change their composition due to agents being identical. This means
that the study of the dynamics that might emerge from recurrent self-organization has not been extensively considered in previous studies yet [2]. Our research, by contrast, takes into account that the members of a group might be heterogeneous in their capabilities. By doing so, we are able to investigate group self-organization as a collective adaptation mechanism.

To better understand the role of adaptation in the task performance of groups, we implement a multi-level adaptation mechanism that links the strengths of each approach previously outlined to the insights given by [2] about group dynamism via self-organization and individual learning. For this purpose, we relax the assumption of agent homogeneity [1]. By allowing agents to be heterogeneous in their capabilities, we can implement a collective adaptation mechanism based on group self-organization. Since agents differ in their capabilities, groups have an incentive to look for agents located outside the group who are prepared to solve the task more efficiently, to compare them to the current agents of the group, and to change the group's composition. Taking into account agent heterogeneity, some researchers in fields outside Managerial Science have employed approaches that consider self-organization as a collective adaptation mechanism. For example, [6] consider multi-level adaptation using individual learning and a mobility-based mechanism for group organization by which agents approach other agents and form groups in the process. However, there is a lack of consensus on whether this latter aspect is beneficial for performance or not. Research in Physics and Game Theory, for example, has found that group self-organization is positive for task performance in complex task environments [6]. In contrast, research in Managerial Science often states that stable group composition is a positive factor for task performance [8]. Consequently, researchers in this latter field who study complex task solving often assume stability in group composition, even if the modeled agents are assumed to be heterogeneous [5, 10, 16, 20].¹ Given that the role of recurrent group self-organization has often been overlooked in Managerial Science, we focus on this aspect and intend to determine whether recurrent group self-organization is positive or negative for task performance.

To accomplish this goal we employ an agent-based model based on the NK-framework for managerial decision-making [9, 11, 21], which has been extended to allow for group self-organization via a second-price auction mechanism. Following [9, 21], an agent-based approach has been chosen, as it allows for modeling a large set of heterogeneous agents that solve complex tasks in groups. The model is described in Sect. 2 and selected results of the model are found in Sect. 3. Finally, Sect. 4 concludes the paper.
¹ For an overview on how heterogeneity and task performance are related, see [8, 13, 15].
2 Model

2.1 Task Environment

The implemented model is based on the NK-framework [11], adapted to account for group self-organization and individual learning. The complex task is modeled as a vector of N binary decisions, denoted by d = (d_1, ..., d_N). Each of these decisions d_n is associated with a particular contribution c_n ∼ U(0, 1) to the overall task performance. However, this contribution c_n does not only depend on the associated decision d_n, but also on K other decisions. It is important to account for the interdependencies between the different decisions of the task, since they are one of the key aspects of complex tasks [5]. The contribution of each decision to performance is denoted by:

c_n = f\bigl(d_n, \underbrace{d_{i_1}, \ldots, d_{i_K}}_{K\ \text{interdependencies}}\bigr)    (1)
where \{i_1, \ldots, i_K\} \subseteq \{1, \ldots, n-1, n+1, \ldots, N\}. K represents the complexity of the overall task and is restricted to the interval 0 ≤ K ≤ N − 1. The overall performance of the task at time t is the average of each decision's contribution:

\Phi(\mathbf{d}_t) = \frac{1}{N} \sum_{n=1}^{N} c_{nt}    (2)
Each solution to the task is thus represented by a particular vector of binary values of length N = 12. Performances are randomly generated for each of the possible 2^N solutions and follow Φ(d_t) ∼ U(0, 1). This mapping from each solution to its particular performance is called the performance landscape. Depending on the number of decisions N, the complexity of the task K, and which decisions are interdependent with each other, the generated performance landscape has more or fewer suboptimal solutions.² Figure 1 shows how the interdependencies are structured in the model.

To solve a complex task, collaboration in the form of a group is required. To account for the limitation of the agents' capabilities, we have divided the complex task d = (d_1, ..., d_N) into M = 3 subtasks of equal length S = N/M = 4. Subtasks are denoted by d_m = (d_{S·(m−1)+1}, ..., d_{S·m}). Each member of the group has the capabilities to solve one of the subtasks and, taken together, the members of the group are able to solve the complex task.
² See [11] for a detailed description of the performance landscapes and their characteristics.
Fig. 1 Stylized interdependence structures
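To make the task environment concrete, the following sketch builds an NK-style performance landscape for N = 12 decisions and evaluates Eq. 2 for a candidate solution. It uses the standard NK construction (an independent U(0,1) draw per decision and per configuration of its K interaction partners); the authors' exact generation procedure may differ, and all names are illustrative.

```python
import itertools
import random

N, K = 12, 3
rng = random.Random(42)

# For each decision n, draw the K other decisions it depends on (Eq. 1).
neighbours = {n: rng.sample([m for m in range(N) if m != n], K) for n in range(N)}

# Each contribution c_n depends on d_n and its K partners; every configuration
# of these K + 1 bits receives an independent U(0, 1) draw.
contribution = {
    (n, bits): rng.random()
    for n in range(N)
    for bits in itertools.product((0, 1), repeat=K + 1)
}

def performance(d):
    """Overall task performance Phi(d): the average contribution (Eq. 2)."""
    total = 0.0
    for n in range(N):
        bits = (d[n],) + tuple(d[m] for m in neighbours[n])
        total += contribution[(n, bits)]
    return total / N

solution = tuple(rng.randint(0, 1) for _ in range(N))
print(round(performance(solution), 3))
```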
2.2 Agents

We model a set of P = 30 agents with limited capabilities. Agents are not only limited in the sense that their capabilities are restricted to a particular subtask, but also in their cognitive capabilities. As a result, they cannot consider the complete set of solutions to the subtask at the same time. By forming a group and by learning and forgetting solutions to the subtask over time (see Sect. 2.3), agents overcome these limitations and solve the complex task. Agents are utility maximizers and myopic, so their objective is to maximize utility in the upcoming time step. The utility of an agent³ assigned to subtask m at time t is divided into two weighted parts: One part is the contribution achieved by the agent in their assigned subtask d_m (represented in Eq. 3). The other part is the contribution to overall performance of the decisions outside subtask m (referred to as residual decisions) d_{¬m} = (d_1, ..., d_r), where r = {1, ..., M} and r ≠ m.

\phi(\mathbf{d}_{mt}) = \frac{1}{S} \sum_{d_{nt} \in \mathbf{d}_{mt}} c_{nt}    (3)

The utility function of an agent assigned to subtask m is then:

U(\mathbf{d}_{mt}, \mathbf{d}_{\lnot mt}) = \alpha \cdot \phi(\mathbf{d}_{mt}) + \beta \cdot \phi(\mathbf{d}_{\lnot mt})    (4)

where α, β ∈ ℝ are the weights for agent m's and the residual contributions, respectively.⁴

³ Agents that are not in the group have a utility at time t equal to 0.
⁴ It follows that α + β = 1.
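The two-part utility of Eqs. 3 and 4 can be computed directly from the decision vector once a contribution function is available. The sketch below is illustrative rather than the authors' implementation; the toy contribution function and all names are assumptions.

```python
def subtask_contribution(d, m, contrib, S=4):
    """phi(d_m): mean contribution of the S decisions in subtask m (Eq. 3)."""
    return sum(contrib(d, n) for n in range(S * m, S * m + S)) / S

def agent_utility(d, m, contrib, alpha=0.5, beta=0.5, M=3, S=4):
    """U(d_m, d_-m) = alpha * own contribution + beta * residual contribution (Eq. 4)."""
    own = subtask_contribution(d, m, contrib, S)
    others = [subtask_contribution(d, k, contrib, S) for k in range(M) if k != m]
    return alpha * own + beta * sum(others) / len(others)

# Toy contribution function: a decision contributes 1 only if it is set to 1.
toy_contrib = lambda d, n: float(d[n])
d = (1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0)
print(agent_utility(d, m=0, contrib=toy_contrib))   # 0.5 * 0.75 + 0.5 * 0.5 = 0.625
```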
2.3 Individual Learning

As outlined in Sect. 2.2, the agents' limited capabilities restrict each agent's knowledge about the solutions to the subtask to a subset Q with |Q| < 2^S. Initially, we set |Q| = 1 for each agent, and the initial solution is assigned at random among the 2^S possibilities. Moreover, each agent's subset Q changes over time following the process of adaptation to subtask d_m. This adaptation process occurs independently for each agent and with a probability p determined by the modeler. With probability p, agents learn one random solution to their subtask d_m. This solution differs in only one value d_n from any of the already known solutions in Q. With the same probability p, agents can forget a random solution in Q that is not utility-maximizing at time t. This individual adaptation process can be characterized as autonomous learning (as found, for example, in [10]). We have adapted it to allow for forgetting, following the dynamic capabilities framework [18]. This makes our approach to individual learning consistent with the fact that the agents' cognitive capabilities are limited.

Agents are heterogeneous concerning their assigned subtask (as described in Sect. 2.2) and the solutions they know to the task. Moreover, they can differ in the initial solution assigned to them, since it is randomized across each of the 2^S available solutions. Furthermore, the fact that the individual learning process occurs independently for each agent implies that agents can significantly differ in the set of solutions Q they know. This heterogeneity in the agents' capabilities has implications for self-organization.
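A minimal sketch of one learning-and-forgetting step, assuming subtask solutions are length-S bit tuples and `utility_of` stands in for the agent's utility estimate (both are illustrative assumptions, not the authors' code):

```python
import random

def adapt_knowledge(known, utility_of, p, S=4, rng=random):
    """One individual adaptation step: with probability p learn a solution one
    bit away from a known one; with probability p forget a known solution that
    is currently not utility-maximizing."""
    known = set(known)
    if rng.random() < p:
        base = list(rng.choice(sorted(known)))
        flip = rng.randrange(S)
        base[flip] = 1 - base[flip]
        known.add(tuple(base))
    if rng.random() < p and len(known) > 1:
        best = max(known, key=utility_of)
        known.discard(rng.choice([s for s in known if s != best]))
    return known

rng = random.Random(1)
knowledge = {(0, 1, 0, 1)}               # |Q| = 1 initially
for _ in range(10):
    knowledge = adapt_knowledge(knowledge, utility_of=sum, p=0.2, rng=rng)
print(sorted(knowledge))
```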
2.4 Group Self-organization

The population of P = 30 agents is distributed at random across the M = 3 subtasks with a uniform distribution. This implies that there are J = 10 agents assigned to each subtask. We assume that only one agent is needed per subtask to solve the complex task in its entirety. Consequently, M = 3 agents form one group to solve one complex task. Since agents are heterogeneous, the mechanism implemented for group formation needs to account for the potential differences between agents. As a result, agents who are better adapted to the subtasks are selected for the group.

To form the group, the P = 30 agents jointly set up a second-price auction. Agents participate in the auction assigned to their subtask and bid their intended contributions to group performance. In return, they are expected to achieve a contribution of at least the second-highest bid. Since agents can only obtain utility from participating in the group, agents always have an incentive to bid. Agents compute these bids by calculating the utility of each solution in Q at time t according to their utility function (Eq. 4). However, since agents do not know what other agents will bid, the second part of the utility function needs to be estimated. To do so, agents assume that the residual decisions d_{¬mt} remain constant from t − 1 with the
solution implemented at that time. As a consequence, their calculated utility corresponds to U(d_{mt}, d_{¬m,t−1}). Once all utilities are calculated, the agents submit their highest attainable utility as a bid. Agents could engage in strategic behaviour and, for example, underbid to obtain a larger profit in terms of utility. However, second-price auctions prevent this and ensure that agents reveal their true preferences in limited-information settings [19]. The self-organization process occurs τ times over the total T time steps. One auction always occurs at the first time step t = 1, and then auctions are held at regular intervals (i.e., every T/τ time steps a new auction occurs). Depending on the number of auctions τ, group self-organization will occur more or less frequently.
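The auction step can be sketched as follows. This is a simplified illustration under the assumption that each agent's bid is its highest estimated utility over the solutions it knows; identifiers and the data layout are invented for the example.

```python
def run_auctions(bids_per_subtask):
    """Second-price (Vickrey) auction per subtask: the highest bidder joins the
    group and commits to delivering at least the second-highest bid.
    bids_per_subtask maps a subtask index to {agent_id: bid}."""
    group = {}
    for subtask, bids in bids_per_subtask.items():
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner, best_bid = ranked[0]
        commitment = ranked[1][1] if len(ranked) > 1 else best_bid
        group[subtask] = {"agent": winner, "bid": best_bid, "commitment": commitment}
    return group

bids = {
    0: {"a1": 0.62, "a2": 0.58, "a3": 0.44},
    1: {"b1": 0.51, "b2": 0.55},
    2: {"c1": 0.70, "c2": 0.69, "c3": 0.66},
}
print(run_auctions(bids))
```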
2.5 Decision-Making Process

Once a group is formed, each of the M members of the group is tasked with choosing a particular solution to their assigned subtask d_m at each time step. Agents make their choices independently and do not communicate with other agents, nor do they possess any information about other agents' choices. To make their choices, agents estimate the associated utility using Eq. 4, assuming again that the residual decisions at t remain constant from t − 1 (see Sect. 2.4). Then, the chosen solution is the one that yields the highest utility among all known alternatives. The overall solution to the task employed by the group is the concatenation of the solutions to the M subtasks. This solution has an associated performance according to Eq. 2 and determines the agents' overall utility (as formulated in Eq. 4).
2.6 Scenarios and Sequencing

We consider a total of 10 variables. Three variables determine the main scenarios of the model; as a result, there are 18 different scenarios in total. The remaining seven variables include the dependent variable, fixed parameters, and the temporal variables. The variables are summarized in Table 1. The events of the model and their sequence at each time step are provided in Fig. 2. Agents initially form the group, and then the members of the group choose a particular solution. After this solution has been implemented, all agents go through the individual adaptation process, irrespective of whether they are part of the group or not. Depending on τ, successive periods start at either the auction or the decision-making stage.
Table 1 Variables of the model

Type                 Description                            Denoted by   Values
Exogenous variable   Complexity                             K            {3, 5}
                     Probability of individual adaptation   p            {0, 0.2, 0.4}
                     Number of auctions held                τ            {1, 20, 200}
Observed variable    Team performance                       C_t          ∈ [0, …, 1]
Other variables      Time steps                             t            ∈ [0, …, 200]
                     Temporal horizon                       T            200
                     Number of decisions                    N            12
                     Weights of utility                     α, β         0.5
                     Simulation runs                                     1,500
                     Number of subtasks                     M            3
Fig. 2 Sequence of events during simulation runs
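The sequence in Fig. 2 can be expressed as a small scheduling skeleton. The callbacks are placeholders (assumptions for illustration); the only substantive logic is that an auction is held at t = 1 and then every T/τ time steps.

```python
def run_simulation(T=200, tau=20, hold_auction=print, make_decisions=None, adapt_agents=None):
    """Skeleton of one simulation run: auction (every T/tau steps, starting at t = 1),
    then decision-making by group members, then individual adaptation of all agents."""
    interval = T // tau
    for t in range(1, T + 1):
        if (t - 1) % interval == 0:
            hold_auction(f"t={t}: auction")
        if make_decisions:
            make_decisions(t)        # members pick the utility-maximizing known solution
        if adapt_agents:
            adapt_agents(t)          # every agent learns/forgets with probability p

run_simulation(T=200, tau=20)        # prints 20 auction events: t = 1, 11, 21, ...
```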
3 Results and Discussion

Overall group performance, the observed dependent variable (see Table 1), is calculated according to Eq. 2. To present comparable results, we normalize group performance following Eq. 5: group performance is divided by the maximum attainable performance in each scenario, and the normalized performance at each time step is averaged across the 1,500 rounds of one simulation.

\tilde{\Phi}_t = \frac{1}{1{,}500} \sum_{r=1}^{1{,}500} \frac{\Phi_r(\mathbf{d}_t)}{\max(\Phi_r)}    (5)

where Φ_r denotes performance in simulation round r and max(Φ_r) is the maximum attainable performance in that round.
Figure 3 reports the mean performance at each time step for every scenario studied. To facilitate the description of the results, we denote τ = 1 as the initial self-organization group, τ = 20 as the moderate self-organization group, and τ = 200 as the high self-organization group. Figure 3a–f represent the performance of all three groups for a particular scenario of interdependencies (i.e., K) and individual adaptation (i.e., p).

Fig. 3 Results of the simulations. The reported measure is the mean performance of 1,500 simulation rounds (y-axis) for each time step (x-axis), calculated following Eq. 5. We report the results for two complexity scenarios K, three numbers of auctions τ, and three probabilities p. Panels: (a) K = 3, p = 0; (b) K = 5, p = 0; (c) K = 3, p = 0.2; (d) K = 5, p = 0.2; (e) K = 3, p = 0.4; (f) K = 5, p = 0.4
3.1 Results of the Simulation Experiments

When interdependencies can be allocated within the subtasks (i.e., K = 3, see Fig. 1), each agent's actions do not affect the other agents' contributions. In the benchmark scenario of our model, agents are not able to learn (i.e., p = 0). As shown in Fig. 3a, performance for each group in the benchmark scenario initially increases but then stays constant for the remaining time steps. Since agents do not learn, they only know the one solution that was endowed to them. The group is formed at the first time step by the agents that know the highest-performing solutions to their subtask. Groups will not change their chosen solution afterwards, since they cannot improve performance further. Results are the same for all groups considered, irrespective of the value of τ.

However, if complexity is increased, interdependencies are spread across the whole task (see Fig. 1 for K = 5). This causes an overall decrease in performance for all three groups, as seen when comparing Fig. 3a with Fig. 3b. Furthermore, there are also changes in the dynamics of performance. For a group with initial self-organization (i.e., τ = 1), the dynamics are unchanged, since performance initially jumps to a level that remains stable for all time steps. Nevertheless, groups that recurrently self-organize (i.e., τ = 20 and τ = 200) perform comparatively better than the initial self-organization group. Initially, the performance of all three groups increases as it did in the benchmark scenario. However, performance for groups that recurrently self-organize continues to increase before settling at a steady-state performance. This occurs because interdependencies allocated outside an agent's subtask push overall performance down. Since agents cannot learn, groups collectively adapt by self-organizing until they eventually reach a performance that cannot be improved given the initial solutions the agents were endowed with.

Results, in turn, change substantially when individual adaptation is integrated into the simulation. For a moderate probability of learning (p = 0.2), as represented in Fig. 3c and d, performance increases considerably for all three groups. For decomposed blocks (i.e., K = 3, Fig. 3c), instead of increasing immediately to a steady-state solution, group performance follows an increasing pattern until settling close to the maximum performance. However, there are significant differences in the speed at which performance grows. Figure 3c shows how recurrently self-organizing groups see their performance grow at a considerably faster pace than a group with initial self-organization only. As was the case in the scenarios without individual learning, there is a significant drop in performance for all groups when the complexity of the task is increased. This can be observed by comparing Fig. 3c and d. Differences can also be observed in terms of performance dynamics. In particular, the group with high self-organization performs the worst in this scenario. Figure 3d shows how it settles earlier in an underperforming steady-state solution because its performance initially grows faster than the other two groups' performance. In contrast, the steady-state performance of the group with moderate self-organization is the highest among the three groups considered. Figure 3d also shows how the group with initial self-organization reaches a steady-state performance similar to the performance of the group with moderate
self-organization. However, this occurs much more slowly. Thus, when groups face moderately complex tasks and when agents moderately learn, recurrent self-organization can be associated with higher overall performance.

It is worth noting that further increases in the probability of individual adaptation have different effects depending on the group studied. Figure 3e shows a very slight increase in the speed at which performance grows over time for groups that self-organize recurrently. This increase in the speed of performance growth is more evident for the group with initial self-organization, as shown by comparing the shapes of Fig. 3c and e. Nevertheless, all groups reach the solution associated with the maximum performance of the task. When considering higher complexity levels, we identify similar effects. Figure 3f shows that the performance of the group with moderate self-organization exhibits no significant differences in its growth speed or overall level when compared to Fig. 3e. Groups with high self-organization see their performance decrease slightly. The most evident change occurs for the groups with initial self-organization. Figure 3f shows how the speed of performance growth increases enough to closely match that of the group with moderate self-organization. Self-organization, thus, has no relevant effect on performance when tasks are moderately complex and individual adaptation is sufficiently high.
3.2 Discussion

Since better adaptation leads to higher performance [18] (except for the benchmark scenario), our results align with the insights of the dynamic capabilities framework. In particular, following the characterization of group dynamism given by [2], we find that successful adaptation at both the individual and the collective level is fundamental for improving performance. This aligns our results with the insights about adaptation at multiple levels and its relationship with overall performance provided by [3]. This implies that collective adaptation via recurrent self-organization can improve performance under certain circumstances. This becomes evident by looking at Fig. 3: groups that recurrently self-organize perform significantly better than the group that does not self-organize in four of the six cases. These results, thus, have implications regarding whether group reorganization can be seen as positive or negative for performance. Our results are closer to the insights given by [6] about recurrent self-organization as a mechanism for collective adaptation that can improve task performance. Specifically, this effect is found in rugged landscapes (i.e., in scenarios in which complexity is K > 0), as revealed by [6]. This allows us to reject the notion that complete stability in group composition is always a positive factor for performance [5, 8]. Nevertheless, it is worth noting that the results of [6] consider only one particular probability of individual adaptation. This limitation in their analysis could explain the differences regarding the effects of self-organization on performance (in particular, regarding the extreme cases of Fig. 3a and f).
However, it is important to mention that increasing collective adaptation can eventually be harmful to task performance. This can be linked to the exploration versus exploitation dilemma: Adaptation is about both finding new solutions to the task and building on the solutions already known [14]. Previous evidence suggests that balancing both aspects is the key to an increase in performance [5, 11, 16, 18, 20, 21]. Our results align with previous evidence on moderately complex environments: excessively increasing the frequency of self-organization can lead to decreases in performance (as seen in Fig. 3d and f). Moreover, if individual learning is sufficiently high, self-organization has no significant impact on performance (as seen in Fig. 3f). Having said this, our results can be seen as extensions of previous results on the exploration versus exploitation dilemma. If there is "too much" adaptation, either at the individual or the collective level, performance decreases [14]. While previous research usually considered adaptation at one particular level [5, 11, 16, 20, 21], we observe that this dilemma is relevant also when the two adaptation processes coincide.

Our results make a case for the study of alternative forms of adaptation, such as group self-organization, that consider heterogeneous agents. While homogeneity in agents' characteristics is a typical assumption made in economic modeling [1], the use of computerized techniques such as the NK-model allows for more realistic assumptions such as heterogeneity in the modeled agents [21]. Heterogeneity in agents is exploited in our model via self-organization, as groups identify agents with different characteristics and change their composition as a result. This has been shown to improve performance under certain circumstances. Self-organization, thus, can be understood as a way to use the agents' heterogeneity to the group's advantage and increase task performance. This aspect follows previous research that has identified heterogeneity in agents as a source for improving task performance in groups [2, 8, 13–15].
4 Conclusion

The improvement in computational techniques and the spread of research methods based on computerization allow us to develop a model of complex task solving in groups and to drop some very restrictive modeling assumptions [21]. The most crucial assumption dropped in this paper is that of homogeneity among the agents of the model. By making the agents of the model heterogeneous, we have been able to implement a model which differs from previous research about adaptation in complex task solving contexts [5, 10–12]. The main difference is that we consider group self-organization as a collective adaptation mechanism, following [2, 6]. In this sense, our results suggest that recurrent group self-organization can be associated with improved performance: By changing its composition over time, a group adapts successfully to a particular task. These changes in group composition emerge directly from allowing agents to be heterogeneous, as they differ in how well adapted they are to the task (i.e., in the solutions known to the particular subtask they
are assigned to). Groups, in turn, change to have the best-prepared agents in their ranks. Our results are particularly interesting as they extend previous insights about several aspects that have been studied in previous literature by combining them into a common framework. These aspects include individual adaptation [5], collective adaptation [11, 12], the effects of self-organization [6], the exploration versus exploitation dilemma [5, 11, 14, 16, 18, 20, 21], the use of agent-based modeling to account for heterogeneity in agents [8, 9, 13, 21], and group decision-making [5, 16]. While we believe that this model and the associated results serve as a first step towards a comprehensive analysis of the role of multi-level adaptation in complex task solving by groups, we have also identified some limitations of this paper that can be used as potential extensions of this work. These include (but are not limited to) the inclusion of endogenous decision-making about individual [5] and collective adaptation [6], the role of adaptation costs, the extension of heterogeneity in agents (by making their utility function and individual adaptation process heterogeneous among agents, for example), and the role of direct communication and coordination mechanisms between agents. Nevertheless, we consider this paper a starting point for future research that aims at better understanding the role of adaptation in complex task solving by groups.
References

1. Axtell, R.L.: What economic agents do: how cognition and interaction lead to emergence and complexity. Rev. Austrian Econ. 20(2–3), 105–122 (2007)
2. Creplet, F., Dupouet, O., Kern, F., Mehmanpazir, B., Munier, F.: Consultants and experts in management consulting firms. Res. Policy 30(9), 1517–1535 (2001)
3. Eisenhardt, K.M., Martin, J.A.: Dynamic capabilities: what are they? Strat. Manag. J. 21, 1105–1121 (2000)
4. Funke, J., Frensch, P.A.: Complex problem solving research in North America and Europe: an integrative review. Foreign Psychol. 5, 42–47 (1995)
5. Giannoccaro, I., Galesic, M., Francesco, G., Barkoczi, D., Carbone, G.: Search behavior of individuals working in teams: a behavioral study on complex landscapes. J. Bus. Res. 1–10 (2019)
6. Gomes, P.F., Reia, S.M., Rodrigues, F.A., Fontanari, J.F.: Mobility helps problem-solving systems to avoid groupthink. Phys. Rev. E 99(3), 1–10 (2019)
7. Hannachi, M., Fares, M., Coleno, F., Assens, C.: The "new agricultural collectivism": how cooperatives horizontal coordination drive multi-stakeholders self-organization. J. Co-operative Organ. Manag. 8(2), 100–111 (2020)
8. Hsu, S.C., Weng, K.W., Cui, Q., Rand, W.: Understanding the complexity of project team member selection through agent-based modeling. Int. J. Proj. Manag. 34(1), 82–93 (2016)
9. Leitner, S.: On the role of incentives in evolutionary approaches to organizational design (2021). http://arxiv.org/abs/2105.04514
10. Leitner, S., Wall, F.: Multiobjective decision making policies and coordination mechanisms in hierarchical organizations: results of an agent-based simulation. Sci. World J. 2014, 1–12 (2014)
11. Levinthal, D.A.: Adaptation on rugged landscapes. Manag. Sci. 43(7), 934–950 (1997)
12. Levinthal, D.A., Marino, A.: Three facets of organizational adaptation: selection, variety, and plasticity. Organ. Sci. 26(3), 743–755 (2015)
13. LiCalzi, M., Surucu, O.: The power of diversity over large solution spaces. Manag. Sci. 58(7), 1408–1421 (2012)
14. March, J.G.: Exploration and exploitation in organizational learning. Organ. Sci. 2(1), 71–87 (1991)
15. Page, S.E.: The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools and Societies. Princeton University Press, Princeton (2007)
16. Rivkin, J.W., Siggelkow, N.: Balancing search and stability: interdependencies among elements of organizational design. Manag. Sci. 49(3), 290–311 (2003)
17. Simon, H.A.: Models of Man, Social and Rational. Wiley, New York (1957)
18. Teece, D.J., Pisano, G., Shuen, A.: Dynamic capabilities and strategic management: organizing for innovation and growth. Strat. Manag. J. 18(7), 509–533 (1997)
19. Vickrey, W.: Counterspeculation, auctions, and competitive sealed tenders. J. Financ. 16(1), 8–37 (1961)
20. Wall, F.: Adaptation of the boundary system in growing firms: an agent-based computational study on the role of complexity and search strategy. Econ. Res.-Ekon. Istraz. 1–23 (2019)
21. Wall, F., Leitner, S.: Agent-based computational economics in management accounting research: opportunities and difficulties. J. Manag. Account. Res. (2020)
Exploring Regional Agglomeration Dynamics in Face of Climate-Driven Hazards: Insights from an Agent-Based Computational Economic Model

Alessandro Taberna, Tatiana Filatova, Andrea Roventini, and Francesco Lamperti

Abstract By 2050 about 80% of the world's population is expected to live in cities. Cities offer spatial economic advantages that create agglomeration forces and innovation that foster the concentration of economic activities, but for historic reasons they cluster along coasts and rivers that are prone to climate-driven flooding. To explore the trade-offs between agglomeration economies and the changing face of hazards we present an evolutionary economics model with heterogeneous agents. Without climate-induced shocks, the model demonstrates how the advantageous transport costs that the waterfront offers lead to a self-reinforcing and path-dependent agglomeration process in coastal areas. The likelihood and speed of such agglomeration strongly depend on the transport cost and the magnitude of climate-driven shocks. In particular, shocks of different size have a non-linear impact on output growth and the spatial distribution of economic activities.

Keywords Agglomeration · Path-dependency · Climate · Flood · Shock · Relocation · Migration · Agent-based model
A. Taberna (B) · T. Filatova
Faculty of Technology, Policy and Management, Delft University of Technology, Jaffalaan 5, 2628 BX, Delft, The Netherlands
e-mail: [email protected]

A. Roventini · F. Lamperti
Institute of Economics, Scuola Superiore Sant'Anna, piazza Martiri della Libertá 33, 56127 Pisa, Italy

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Czupryna and B. Kamiński (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_12

1 Introduction

Rapid urbanization and climate change exacerbate natural hazard risks worldwide. In the stable climate, which humanity has enjoyed for centuries, coastal and delta regions historically grew faster than landward areas, with all current megacities flourishing along the coast. The richness of natural amenities and resources coupled with transportation advantages created agglomeration forces that have enabled this boom [14]. Yet, the escalation of climate-induced hazards fundamentally reshape
the trade-offs that firms and households face when choosing a location [6]. Increasingly, managed retreat becomes plausible for all types of coasts even under low and medium sea level rise scenarios [5], raising a hot debate on how to make this a positive transformation.

Understanding the location and agglomeration of productive activities has been at the core of spatial economics for almost two hundred years [27]. The "new economic geography" [20] literature has proposed a coherent analytical framework grounded in general equilibrium analysis of the spatial distribution of economic activities. New economic geography linked international trade and economic geography, giving rise to models that produce emergent spatial structures without any assumed agglomeration economies [19]. These models traditionally assume a unique equilibrium and rational representative agents with perfect information. Yet, heterogeneity of technologies, resources and preferences, as well as the fundamental uncertainty necessitating dynamic expectations and adaptive behavior [1], challenge these assumptions. Agent-Based Models (ABMs) have risen as a method to accommodate heterogeneity, learning, interactions and out-of-equilibrium dynamics [26], also in environmental and climate change economics [2, 22] and economic geography [13, 23]. ABMs are increasingly versatile in modeling disaster scenarios, and flooding in particular [24]. However, an ABM of an economy shaped by the locations of economic activities and agglomeration forces and altered by climate-induced risks is missing.

To address this gap, we design a model to study the spatial distribution of economic agents, both firms and households, in the face of the costliest climate-induced hazard: flooding. Our goal is to explore how the complex trade-offs between agglomeration economies and a changing severity of location-specific flood hazards affect the economic performance and attractiveness of regions and steer their development. In particular, we aim to address two research questions: (1) How do agglomeration forces shape economic centers in coastal areas? (2) What are the effects of climate shocks of various severity on these agglomeration dynamics?

Following previous work on evolutionary macroeconomic ABMs [9, 11], we use R&D investment and a "Schumpeterian" process of creative (innovative) destruction as the engine of economic growth. Our model is characterized by two spatial regions, a safe Inland one and a hazard-prone Coastal one, and explores the economic dynamics in the two regions under different climate shocks. Our simulation results show that in the absence of floods, when the Coastal region holds a natural spatial advantage, such as being on a transportation route and hence paying a lower transportation cost to trade with the rest of the world (RoW), it experiences an inflow of economic activities from the Inland region. The likelihood and the speed of such agglomeration strongly depend on the extent of the location advantage. In particular, in line with empirical evidence, our model confirms that, because of trade stickiness, the concentration of economic activities decreases as transport costs increase [25]. Nonetheless, when climate shocks are introduced, they play an important role in the final distribution of economic activities between the regions as well as in the economic growth of the whole economy.
Fig. 1 A stylized representation of the model
2 Methodology

Previous attempts to use ABMs for modelling agglomeration dynamics in an out-of-equilibrium fashion highlighted the need to depart from the neoclassical economic framework [13, 23]. Hence, we adopted the evolutionary economic engine of a well-validated macroeconomic ABM, the "Keynes + Schumpeter" model [8–11, 21]. Furthermore, we added two differentiated spatial regions, migration actions and climate hazards. The two regions, namely Coastal and Inland, feature a two-sector economy with three classes of heterogeneous interacting agents. Specifically, the economy of region r consists of F_i^r heterogeneous boundedly-rational capital-good firms (CP Firm agents), F_j^r consumption-good firms (CS Firm agents) and L_h^r household agents (consumers/workers). Capital-good firms produce heterogeneous machines and invest in R&D to stochastically discover more productive technologies. Consumption-good firms combine labour and machineries bought from the capital sector to produce a final homogeneous consumer product. There are two local labour
markets, hence firms can only hire workers from their own region. In contrast, the goods market is global: firms from both sectors are able to sell in the other region and to export to the rest of the world economy, bearing a regional and an international iceberg transport cost, respectively. Furthermore, all agents are mobile and can migrate across the two regions. Migration is costly for firms, with a cost increasing with size, while it is costless for workers. A one-time climate shock hits the Coastal region at time t_s, with average magnitude Dc. We model climate damages heterogeneously at the microeconomic level, hitting workers' labor productivity and firms' capital stock and inventories. Households reduce consumption to undertake repair costs. Figure 1 provides a schematic representation of the model dynamics.
2.1 The Capital Good Sector

The structure of the capital-good sector in each country takes the basic form of the K+S model [9]. Each firm i is endowed with labor productivity coefficients (A_i^τ, B_i^τ). The former indicates the productivity of the machines produced by firm i, while the latter stands for the productivity with which firm i produces its machines. Capital-good firms determine their price by applying a fixed markup μ_1 to their unit cost. The unit cost is the ratio between the individual nominal wage and the productivity coefficient. Capital-good firms aim to improve both their productivity coefficients. To do so, they actively invest in R&D a fraction ν of their past sales. Furthermore, firms split their R&D between innovation (IN) and imitation (IM) according to the parameter ξ ∈ [0, 1] and follow a two-step procedure. In both cases, the first step is a draw from a Bernoulli distribution, θ_i^{in}(t) = 1 − e^{−ζ_1 IN_i(t)} for innovation and θ_i^{im}(t) = 1 − e^{−ζ_2 IM_i(t)} for imitation, which determines whether firm i gets access to the second step, with 0 ≤ ζ_{1,2} ≤ 1. Hence, the probability of a positive outcome depends on the amount of resources invested. If the innovation draw is successful, the firm discovers a new set of technologies, (A_i^{in}, B_i^{in}), according to A_i^{in}(t) = A_i(t)(1 + x_i^A(t)) and B_i^{in}(t) = B_i(t)(1 + x_i^B(t)). x^{A,B}(t) are independent draws from a Beta(α_1, β_1) over the support [x_1, x_2], with x_1 ∈ [−1, 0] and x_2 ∈ [0, 1]. The support of the Beta distribution determines the probability of "successful" over "failed" innovations, and hence shapes the landscape of technological opportunities. Furthermore, firms passing the imitation draw get access to the technology of one competitor (A_i^{im}, B_i^{im}). Notably, firms are more likely to imitate competitors located in the same region and with similar technologies: the higher the technological distance to a specific firm, computed with a Euclidean metric, the lower the probability of imitating its technology. Moreover, we augment the technological distance of firms located in different regions by a factor greater than zero. Once both processes are completed, all the firms succeeding in either imitation or innovation select the most efficient production technique they can master according to a payback rule. Finally, capital-good firms send a "brochure" containing the price and productivity of their machines to a random sample of potential new clients (NC_i) as well as to their historical customers (HC_i).¹
2.2 The Consumption Good Sector

Consumption-good firms combine labour and capital with constant returns to scale to produce a homogeneous good. In line with the K+S tradition [9], adaptive demand expectations (D_j^e = f[D_j(t−1), D_j(t−2), ..., D_j(t−h)]), desired inventories (N_j^d), and the actual stock of inventories (N_j) form the desired level of production (Q_j^d(t)). The latter is constrained by the firm's capital stock K_j, with a desired capital stock K_j^d required to produce Q_j^d. In case K_j^d(t) > K_j(t), the firm calls for a desired expansionary investment such that:

EI_j^d(t) = K_j^d(t) - K_j(t)    (1)

Furthermore, firms undertake replacement investment RI, scrapping machines with age above η > 0 and those that satisfy a payback rule. Firms then compare the "brochures" received from capital-good firms and order the machines with the best ratio between price and quality. Notably, consumption-good firms have to advance both their investments and the workers' wages. This implies that capital markets are imperfect. As a consequence, external funds are more expensive than internal ones and firms may be credit rationed. More specifically, consumption-good firms finance their investment first by using their stock of liquid assets (NW_j). When the latter does not fully cover investment costs, firms that are not credit-constrained can borrow the remaining part, paying an interest rate r up to a maximum debt/sales ratio (> 1).

Each firm is characterized by heterogeneous vintages of capital goods with different average productivity (A_j), which is reflected in its unit cost of production (c_j):

c_j(t) = \frac{w_j(t)}{A_j}

where w_j is the average wage paid by firm j. The prices in the consumption-good sector are computed by applying a mark-up (μ_{2,j}) on the unit cost:

p_j(t) = (1 + \mu_{2,j}) c_j(t)

The evolution of the firm's market share (f_j) determines the variation of its markup (μ_{2,j}):

\mu_{2,j}(t) = \mu_{2,j}(t-1)\left(1 + \nu \, \frac{f_j(t-1) - f_j(t-2)}{f_j(t-2)}\right) \quad \text{with } 0 \le \nu \le 1

Consumption-good firms compete in three markets, namely the Coastal (Co), the Inland (In) and the export (Exp) market. In a generic market m, a firm's competitiveness (E_j) depends on its price, which can account for inter-regional (τ_1) and international (τ_2) transport costs, as well as on the level of unfilled demand (l_j):
¹ For additional detail about the capital-good sector, see [9].
E_j^m(t) = -\omega_1 p_j^m(t)(1 + \tau_1 + \tau_2) - \omega_2 l_j^m(t) \quad \text{with } \omega_{1,2} > 0,\ m = Co, In, Exp    (2)

Of course, in the Coastal (E_j^{Co}) and Inland (E_j^{In}) markets, τ_2 = 0, while firms pay no transport cost to compete in the region where they are located. Regarding the export market, in line with the spatial economics literature that indicates ports as hubs for international trade [14, 15], we model the competitiveness (E_j^{Exp}) so that firms located in the Coastal region hold a competitive advantage in trade with the rest of the world, i.e. τ_1 = 0, while Inland firms bear it.

In each market (m), the average competitiveness (\bar{E}^m) is calculated by averaging the competitiveness of all firms in the corresponding region, weighted by their market shares in the previous time step:

\bar{E}^m(t) = \sum_{j=1}^{F_2} E_j^m(t)\, f_j^m(t-1) \quad \text{with } m = [Co, In, Exp]    (3)
The market shares (f_j) of firms in the three markets evolve according to a quasi-replicator dynamics:

f_j^m(t) = f_j^m(t-1)\left(1 + \chi \, \frac{E_j^m(t) - \bar{E}^m(t)}{\bar{E}^m(t)}\right) \quad \text{with } m = [Co, In, Exp]    (4)
2.3 Labour Market Consumption-good firms in the Coastal and Inland zones offer heterogeneous wages which depends on their productivity, as well as on regional productivity, inflation and unemployment: w j (t) = w j (t − 1) 1 + ψ1
AB j (t) AB r (t) U r (t) cpi r (t) + ψ3 r + ψ2 + ψ4 , AB j (t − 1) U (t − 1) cpi r (t − 1) AB r (t − 1)
(5) where r is the region where firm j is located, AB j is its individual productivity, AB r is the regional productivity, cpi r is the regional consumer price index and U r is the
Exploring Regional Agglomeration Dynamics in Face of Climate-Driven Hazards …
151
local unemployment rate. Furthermore, capital-good firms follow the wage dynamics of top-paying consumption firms, as in [10, 11]. Interactions in the local labor markets are decentralized. This process allows to take into account unemployment as a structural disequilibrium phenomenon. As we assume no commuting, households can only work for the firms in the same region where they live. Thus, on the one hand, the labor supply L S,r of region r at time t, is equal to the number of households living in that region. On the other hand, the aggregate labour demand L D,r is given by the sum of individual firms labour demand: F1r F2r D,r L df with f = [i, j], (6) L (t) = i=1 j=1
where F1r and F2r are the populations of capital- and consumption-good firms located in region r . The labour demand of capital-good firm i (L id ) is equal to: Qor (t) L id = B τi(t) where Qoi is the quantity ordered to the firm. Similarly, the labour i
Qd (t)
demand of consumption-good firm j (L dj ) is computed as: L dj = A j j(t) where Qdi is its desired production. The labour market matching mechanism operates as follow: 1. If L df (t) > n f (t), where n f (t) is the current labour force of a generic firm f , the firm posts m vacancies on the labour market, with m = L df (t) − n f (t). Conversely, if L df (t) < n f the firm fires m employees. 2. Unemployed households are boundedly-rational and have imperfect information. They are aware of a fraction ρ ∈ [0, 1] of all vacancies posted by the firms in their home region. 3. Unemployed households select the vacancy with highest offered wage in their sub-sample and they are hired by the firm. The process is completed when either all the households are employed or the firms have hired all the workers they need. Note that there is no market clearing and and involuntary unemployment as well as labor rationing are emergent properties generated by the model.
2.4 Migration Households and firms can move to the other region. To capture heterogeneous location preferences and imperfect information about regional variables such as wage levels, we model migration as a probabilistic two-step procedure. In the first step, an agent compares several indicators between the two regions, and she/he does consider to migrate only if the region where it is not currently located displays better economic conditions. The probability to migrate depends on a switching test (see [3, 4, 7]) grounded on economic variables. Each household h compares wages and levels of unemployment in two regions and its probability to migrate (Pr ) is equal
152
A. Taberna et al.
Pr_h(t) = 1 − e^{W_d(t)},  if W_d(t) < 0 and U_d(t) < 0;  Pr_h(t) = 0, otherwise.   (7)
W_d is the wage distance, which captures the average salary difference between the two regions: W_d(t) = (W^r(t − 1) − W^*(t − 1)) / W^r(t − 1), where r is the region where the agent is located and * is the other one. Similarly, the unemployment distance U_d reads: U_d(t) = (U^*(t − 1) − U^r(t − 1)) / U^r(t − 1). The mobility choices of firms depend on the local regional demand for their goods, in line with the New Economic Geography models where firms move towards bigger and more profitable markets [20]. More specifically, a generic firm f calculates the probability to migrate according to:
Pr_f(t) = 1 − e^{ω_1 Dd_f(t) + ω_2 DAd_f(t)},  if Dd_f(t) < 0 and DAd_f(t) < 0;  Pr_f(t) = 0, otherwise,   (8)
where ω_1 + ω_2 ≤ 1. Dd_f is the demand distance of firm f between the two regions: Dd_f(t) = (D_f^r(t − 1) − D_f^*(t − 1)) / D_f^r(t − 1). Firms also consider the dynamics of their sales through the "demand attractiveness" DAd: DAd_f(t) = (DAd_f^r(t − 1) − DAd_f^*(t − 1)) / DAd_f^r(t − 1), where DAd_f^{r,*}(t − 1) = log(s_f^{r,*}(t − 1)) − log(s_f^{r,*}(t − 2)) and s_f are individual firm sales. The economic agents that consider whether to migrate (Pr > 0) perform a draw from a Bernoulli distribution. If the draw is successful, the agent migrates to the other region. Households that pass both steps leave their job (if employed) and move to the other region as unemployed. Migrant firms fire all their employees, paying a fixed cost that is equal to the sum of their quarterly wages.
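To make the two-step procedure explicit, here is a minimal Python sketch of the household switching test of Eq. (7) followed by the Bernoulli draw (firm migration works analogously with Dd and DAd); the variable names are illustrative.

```python
import math
import random

def household_migration_prob(W_home, W_other, U_home, U_other):
    """Eq. (7): migrate with probability 1 - exp(Wd) only when both the wage
    distance Wd and the unemployment distance Ud are negative, i.e. the other
    region pays more on average and has lower unemployment."""
    Wd = (W_home - W_other) / W_home
    Ud = (U_other - U_home) / U_home
    return 1.0 - math.exp(Wd) if (Wd < 0 and Ud < 0) else 0.0

def migrates(prob, rng=random):
    """Second step: Bernoulli draw with the probability from the first step."""
    return prob > 0 and rng.random() < prob

# toy usage: the other region pays 10% more and has lower unemployment
p = household_migration_prob(W_home=1.0, W_other=1.1, U_home=0.10, U_other=0.08)
print(round(p, 3), migrates(p))   # ~0.095, so roughly one draw in ten migrates
```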
2.5 Climate

At a predetermined time step t_s, a single climate shock can hit the Coastal region. The hazard is rather stylized, yet given the type of consequences it has on the economy it can be interpreted as a flood. The shock is heterogeneous among the Coastal region agents. In particular, each agent draws an individual damage coefficient Dc from a Beta(α_2, β_2) distribution. The flood affects Coastal firms (indexed here generically as cf) in the following ways:
• A one-period labour productivity loss: AB_cf(t_s) = AB_cf(t_s − 1)(1 − Dc_cf).
• Each vintage θ_τ,cf(t_s) of the capital stock goes through a draw from a Bernoulli distribution with probability Dc_cf; whenever the draw is successful, the vintage is destroyed.
• A permanent destruction of a fraction of the inventories: INV_cf(t_s) = INV_cf(t_s − 1)(1 − Dc_cf).
Also, for one step, each Coastal household (ch) decreases its consumption to cover repair costs, in the form C_ch(t_s) = C(t_s)(1 − E(Dc)_ch).
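A minimal sketch of how such a flood shock could be applied to a single Coastal firm. The Beta parameters and attribute names are placeholders, and reading the Bernoulli probability for vintage destruction as the firm's own damage coefficient is an assumption based on the description above.

```python
import random

def apply_flood_shock(firm, alpha2=2.0, beta2=5.0):
    """Draw an individual damage coefficient Dc ~ Beta(alpha2, beta2) and apply
    the three effects of Sect. 2.5 to a Coastal firm (a plain dict here, with
    illustrative keys)."""
    Dc = random.betavariate(alpha2, beta2)
    firm['productivity'] *= (1.0 - Dc)        # one-period labour productivity loss
    # each capital vintage is destroyed with probability Dc (assumed reading)
    firm['vintages'] = [v for v in firm['vintages'] if random.random() >= Dc]
    firm['inventories'] *= (1.0 - Dc)         # permanent inventory destruction
    return Dc
```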
2.6 Timeline of Events

In any given time period t, the following actions take place in sequential order:
1. Households and firms may consider migrating across regions.
2. Firms in the capital-good sector perform R&D.
3. Consumption-good firms set their desired production, wages and, if necessary, order new machines.
4. A decentralized labour market opens in each region.
5. An imperfectly competitive consumption-good market opens.
6. Entry and exit occur.
7. Machines ordered are delivered.
8. There is a probability of a climate shock in the Coastal region.
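The timeline maps directly onto the model's main loop. The following Python sketch is only a schematic outline; the method names stand in for the corresponding model routines and are not the authors' actual interface.

```python
def step(model, t, shock_step):
    """One simulated quarter, following the timeline of events above."""
    model.migration_decisions()            # 1. households and firms may migrate
    model.capital_good_rnd()               # 2. capital-good firms perform R&D
    model.plan_production_wages_orders()   # 3. desired production, wages, machine orders
    for region in model.regions:
        region.labour_market()             # 4. decentralized labour market per region
    model.consumption_good_market()        # 5. imperfectly competitive goods market
    model.entry_exit()                     # 6. entry and exit
    model.deliver_machines()               # 7. ordered machines are delivered
    if t == shock_step:
        model.maybe_flood_coastal_region() # 8. climate shock may hit the Coast
```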
3 Results

3.1 Agglomeration Dynamics

At the beginning of each experiment, economic activities and population are evenly distributed between the two regions and each agent begins with exactly the same initial conditions. Therefore, the only difference between the Coastal and Inland regions is the regional transport cost (τ_1) that Inland firms have to consider when calculating export competitiveness (Eq. 2). We start by describing the dynamics of our economy without climate hazards. Simulation results show that the model is characterized by a self-reinforcing, path-dependent agglomeration process triggered by endogenous technical change, namely the discovery of newer and more productive technologies by capital-good firms' R&D investments. Because of transport costs and physical distance, consumption-good firms located in the same region where the innovation took place are more likely to absorb it. Furthermore, since salaries are indexed to both individual and regional productivity, the region that innovates more will also have the higher average salary. The latter is an attractor of household migration, which ultimately results in a shift in regional consumption, making the environment less favorable for the firms located in the region with less population. Hence, even more firms will decide to migrate, altering job opportunities and wage levels even further. This process continues until all economic activities and population are concentrated in one region. As in new economic geography models [20], labour mobility plays the central role in the agglomeration process. In line with the empirical evidence [12], our evolutionary economic model endogenously determines the direction and speed of the labour mobility triggered by the process of R&D investments and a relative increase in wages in the most technologically advanced core region versus the periphery. Moreover, since technological change is stochastic, the agglomeration process will materialize in the region which is more "lucky" in the discovery
Fig. 2 Population and GDP in Stat. Eq. I (left) and Stat. Eq. II (right). Note that the Coastal region is used as reference
of newer and more productive technologies. Thus, as is typical in complex systems, the model produces non-ergodic behavior characterized by two statistical equilibria: a complete agglomeration of economic activities and population in either the Coastal (Stat. Eq. 1) or the Inland (Stat. Eq. 2) region (Fig. 2). Importantly, the likelihood and speed of the equilibria depend on model calibration. As is common in spatial economics, a parameter that plays a major role in the final results is the iceberg transport cost. The regional transport cost affects model dynamics through market competitiveness (Eq. 2). In particular, since competitiveness is inversely proportional to the value of the regional transport cost, the higher the transport cost the more difficult it is for firms to be competitive, and hence gain market share, outside the region where they are located. This has two main implications. The first is on the speed of the process. To consider migration, firms need to have a positive and increasing demand distance (Eq. 8). Yet, the regional transport cost functions as a barrier to inter-regional trade, making it harder for firms to sell outside their region. The second effect is on the likelihood of the two statistical equilibria. As the regional transport cost also measures the competitive advantage that Coastal firms have in trade with the rest of the world, the higher this cost, the harder it is for Inland firms to be competitive on exports. Furthermore, the lower the competitiveness, the lower the share of export demand allocated to
Table 1 The effects of different transport costs on the speed and the likelihood of the final statistical equilibrium

τ1   | Fraction of MC runs that reached a final state before step 600 | Stat. Eq. 1: Agglomeration Coastal region (%) | Stat. Eq. 2: Agglomeration Inland region (%)
0    | 1    | 51 | 49
0.01 | 0.71 | 65 | 35
0.02 | 0.45 | 73 | 27
0.03 | 0.33 | 76 | 24
0.04 | 0.27 | 81 | 19
0.05 | 0.22 | 84 | 16
the Inland region. In particular, lower demand means less investment in R&D and slower technical change. To analyze the effect of transport cost on agglomeration and to remove the effects of stochastic components, we implement a Monte Carlo (MC) exercise of size 500 on the seed of the pseudo-random number generator. We will use the same protocol for all the simulation results. In this exercise, we measure the speed by counting the fraction of MC simulations that have reached one of the final equilibria after 600 steps (Table 1). Since a step can be interpreted as a quarter, the time span of the simulation is 150 years. However, the first 20 years serve as a transition phase. Furthermore, the likelihood of the final equilibria is calculated only among the sub-sample of MC runs that have reached a final equilibrium. We consider only the latter since, if we ran the model for an infinite number of steps, one of the two final equilibria would always emerge. With zero transport cost there are no idiosyncratic differences between the two regions. As expected, since there are no trade barriers, firms easily penetrate outside their regional market and speed up the agglomeration process, with all the runs agglomerated in either region before step 600. Moreover, with no transport cost the Coastal region has no competitive advantage in export trade, and hence the two equilibria are equally likely: agglomeration is completely stochastic (Table 1). Notably, as the transport cost increases, trade between the two regions decreases, resulting in a slower agglomeration process (only 0.22 of the runs converge with τ_1 = 0.05). Further, as anticipated, among these smaller samples the likelihood of Stat. Eq. 1 constantly increases. This is because the competitive advantage in exports increases as the transport cost does (Table 1). However, exports are only a small fraction of the internal demand. Hence, the gap in resources is not sufficient to completely remove the possibility of Stat. Eq. 2.
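The Monte Carlo protocol reduces to simple bookkeeping over the 500 seeds; a sketch, with run_model standing in for a full simulation run that reports whether and where the economy fully agglomerated by step 600.

```python
def mc_summary(run_model, n_runs=500, max_steps=600):
    """run_model(seed, max_steps) -> 'coastal', 'inland' or None (no full
    agglomeration before max_steps). Returns the speed measure and the
    likelihood of each equilibrium among the converged runs only."""
    outcomes = [run_model(seed, max_steps) for seed in range(n_runs)]
    converged = [o for o in outcomes if o is not None]
    speed = len(converged) / n_runs                 # fraction converged by step 600
    coastal = converged.count('coastal') / len(converged) if converged else float('nan')
    inland = 1.0 - coastal if converged else float('nan')
    return {'speed': speed, 'coastal': coastal, 'inland': inland}
```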
3.2 Agglomeration Dynamics and Climate Hazard

As a second exercise, we analyze the impact of a single climate shock on the agglomeration process. In this experiment, the regional transport cost (τ_1) is equal to 0.01 and the climate hazard hitting the Coastal region is placed at the end of the transition phase (80th step). Furthermore, we perform a set of 10 experiments, where the damage coefficient (Dc) ranges from 0 to 0.5 with intervals of 0.05. Our goal is to understand the response of our economy to shocks of different sizes. Despite being modelled in a stylized manner, such shocks deliver important insights on the feedbacks between climate, the economy and the agglomeration forces. In particular, as for the transport cost, we look at the impact of climate dynamics on both the speed and the likelihood of the equilibria. In addition, we also consider the flood effect on the whole economy (defined as the sum of the Coastal and Inland regions). The results of this set of experiments are summarized in Fig. 3. A first takeaway is that the lower half of the shocks (0–0.25) has no particular influence on the fraction of MC runs that reach the final state before step 600. Conversely, the latter increases rapidly as the average shock exceeds 0.25 (Fig. 3, left panel). The reason behind this behavior is that the hazard affects regional productivity both directly and indirectly: directly, through the one-period productivity cut equal to the average size of the shock; indirectly, because the partial destruction of capital stocks forces firms to buy newer machines with different productivity coefficients. In this sense, the shock can act as the trigger for the self-reinforcing and path-dependent agglomeration process that characterizes the model, usually generated by endogenous technical change and innovation. However, a mild shock does not systematically produce a regional productivity gap sufficient to start the agglomeration. Conversely, this effect becomes more evident as the size of Dc passes 0.25, with almost the whole MC sample (0.92) reaching either equilibrium at the highest shock (Dc = 0.5). A second important result is the presence of non-linearity between the size of the shock (Dc) and the likelihood of the statistical equilibria as well as global economic performance. In particular, the model displays both a higher probability of agglomeration in the Coastal region and higher overall GDP growth in the presence of a small shock than without a shock. However, as the damage coefficient increases, the probability of agglomeration in the Coastal region decreases steeply and the positive effects on overall economic growth fade out (Fig. 3, central and right panels). The reason behind such non-linearity is the complex interplay between two forces caused by the natural hazard, defined here as the "disruptive effect" and the "creative destruction effect" [16]. On the one hand, the latter refers to the positive economic effects that can follow a natural disaster, which, despite being initially counter-intuitive, are not uncommon in empirical studies [17, 18]. In our model they are generated by the "forced" investment that firms have to undertake following the climate shock. Firms have to replace the destroyed capital with newer and more productive technologies, anticipating future investment. This leap forward boosts Coastal region productivity and hence the aggregate economy.
On the other hand, the former reflects the negative effects that the shock causes to the economy, such as the temporary drop in productivity and consumption. Moreover,
Fig. 3 Response of speed (left), likelihood between equilibria (center) and global GDP growth (right) to shocks of different magnitude. Results are from an MC of size 500
firms are resource-constrained and might not be able to replace their capital stock entirely. Notably, as long as the shock is mild, the "creative destruction" prevails. Since only a small fraction of the capital stock and the market is affected, firms can afford to buy brand new machines, increasing productivity, wages and the likelihood of agglomeration in the Coastal region (Fig. 3, central panel). Moreover, such a technological jump has positive "hysteretic effects" on GDP, which displays a statistically significantly higher growth rate when Dc < 0.15 (Fig. 3, right panel). However, as the Dc coefficient goes up, so does the fraction of capital destroyed, and firms are not able to buy it back entirely because doing so is beyond their financial capabilities. Thus, they are forced to reduce output and fire the surplus of workers. Moreover, they are not able to fulfill the whole demand, undermining their long-term competitiveness. The more the firms
Fig. 4 Population dynamics in response to shocks of different magnitude. Results are averages over an MC of size 500
are constrained in production, the higher the increase in the unemployment rate in the Coastal region. This increase in the unemployment rate, coupled with the temporary decrease in productivity and wages, triggers household migration towards the Inland region. As households redistribute between the two regions, so does consumption. Therefore, the volume of out-migration in the aftermath of the natural hazard is crucial in determining future regional attractiveness. As shown in Fig. 4, when more than 5% of the total household population leaves the Coastal region (Dc ≥ 0.25), the demand distribution between the two regions reaches a tipping point that makes the Inland region more attractive for further firm migration. Importantly, for very high values (Dc ≥ 0.45), the shock not only substantially affects the distribution of economic activities, but also compromises the development of the whole economy (Fig. 3, right panel). In particular, the economy displays a negative "hysteresis" characterized by a statistically significant lower GDP growth.
4 Conclusion

The number of people and assets exposed to climate hazards is increasing as a consequence of both urbanization and a changing climate. In this work, we have investigated the macroeconomic and spatial consequences of heterogeneous climate shocks in a theoretical agent-based computational economic framework. The model is characterized by two regions and three classes of mobile agents, namely capital-good firms, consumption-good firms and households, that interact in goods and labour markets. We have experimented with two types of scenarios. First, we muted climate shocks and showed the ability of the model to reproduce the self-reinforcing, path-dependent agglomeration process typical of economic geography models. Importantly, individual investment choices and technical change trigger such a process, reinforcing previous literature findings about the correlation between productivity and agglomeration forces. Furthermore, MC simulation results displayed non-ergodic properties, yet a more likely concentration of economic activities in coastal areas because of their competitive advantage in trade with the rest of the world. We have also investigated the role of transport costs in these dynamics. In the second scenario, we introduced heterogeneous climate shocks hitting the Coastal region. We found a non-linear response of both the spatial distribution of economic activities and global macroeconomic indicators to the size of the shock. In particular, a small shock significantly increased the probability of agglomeration in the Coastal region as well as the output of the whole economy. Conversely, big shocks boost agglomeration in the safe Inland region, but also generate negative "hysteretic" effects on global economic growth. Such non-linearity depends on the complex interplay of two forces that we defined as the "creative destruction effect" and the "disruptive effect". The model showed encouraging first results on the trade-off between natural disasters and agglomeration economies. Moreover, it advances the economic geography literature by exploring the spatial distribution of economic activities in an out-of-equilibrium fashion. Further research could include, but is not limited to, calibration with empirical data, a more realistic representation of natural hazards, the introduction of additional industries, and multi-level climate change adaptation actions to reduce harm from the shocks.
References 1. Arthur, W.B.: Inductive reasoning and bounded rationality. Technical report (1994) 2. Balint, T., Lamperti, F., Mandel, A., Napoletano, M., Roventini, A., Sapio, A.: Complexity and the economics of climate change: a survey and a look forward. Ecol. Econ. 138, 252–265 (2017) 3. Bosco, B., Delli Gatti, D.: Heterogeneity in space: an agent-based economic geography model. Technical report 4. Caiani, A., Russo, A., Palestrini, A., Gallegati, M.: Economics with Heterogeneous Interacting Agents: A Practical Guide to Agent-Based Modeling
5. Carey, J.: Managed retreat increasingly seen as necessary in response to climate change’s fury. Proc. Natl. Acad. Sci. USA 117(24), 13182–13185 (2020, 6) 6. Coronese, M., Lamperti, F., Keller, K., Chiaromonte, F., Roventini, A.: Evidence for sharp increase in the economic damages of extreme natural disasters. Proc. Natl. Acad. Sci. USA 116(43), 21450–21455 (2019, 10) 7. Delli Gatti, D., Gallegati, M., Greenwald, B., Russo, A., Stiglitz, J.E.: The financial accelerator in an evolving credit network. J. Econ. Dyn. Control 34(9), 1627–1650 (2010) 8. Dosi, G., Fagiolo, G., Napoletano, M., Roventini, A.: Income distribution, credit and fiscal policies in an agent-based Keynesian model. J. Econ. Dyn. Control 37(8), 1598–1625 (2013, 8) 9. Dosi, G., Fagiolo, G., Roventini, A.: Schumpeter meeting Keynes: a policy-friendly model of endogenous growth and business cycles. J. Econ. Dyn. Control 34(9), 1748–1767 (2010, 10) 10. Dosi, G., Pereira, M.C., Roventini, A., Virgillito, M.E.: Causes and consequences of hysteresis: Aggregate demand, productivity, and employment. Ind. Corp. Change 27(6), 1015–1044 (2018, 12) 11. Dosi, G., Pereira, M.C., Roventini, A., Virgillito, M.E.: The effects of labour market reforms upon unemployment and income inequalities: an agent-based model. Socio-Econ. Rev. 16(4), 687–720 (2018, 10) 12. Feldman, M.P., Kogler, D.F.: Stylized Facts in the Geography of Innovation, 1st edn., vol. 1. Elsevier BV, (2010) 13. Fowler, C.S.: Taking geographical economics out of equilibrium: implications for theory and policy. J. Econ. Geogr. 7(3), 265–284 (2007) 14. Fujita, M., Mori, T.: The role of ports in the making of major cities: self-agglomeration and hub-effect. J. Dev. Econ. 49(1), 93–120 (1996, 4) 15. Glaeser, E.: Agglomeration economics (2010) 16. Hallegatte, S., Dumas, P.: Can natural disasters have positive consequences? Investigating the role of embodied technical change. Ecol. Econ. 68(3), 777–786 (2009, 1) 17. Hallegatte, S., Przyluski, V.: The economics of natural disasters concepts and methods. Technical report (2010) 18. Klomp, J., Valckx, K.: Natural disasters and economic growth: a meta-analysis. Glob. Environ. Change 26(1), 183–195 (2014, 5) 19. Krugman, P.: Geography and Trade, vol. 1. MIT Press Books (1992) 20. Krugman, P.: What’s new about the new economic geography? Oxf. Rev. Econ. Policy 14(2), 7–17 (1998) 21. Lamperti, F., Dosi, G., Napoletano, M., Roventini, A., Sapio, A.: Faraway, so close: coupled climate and economic dynamics in an agent-based integrated assessment model. Ecol. Econ. 150, 315–339 (2018) 22. Lamperti, F., Mandel, A., Napoletano, M., Sapio, A, Roventini, A., Balint, T., Khorenzhenko, I.: Towards agent-based integrated assessment models: examples, challenges, and future developments. Regional Environ. Change 19(3), 747–762 (2019, 3) 23. Spencer, G.M.: Creative economies of scale: An agent-based model of creativity and agglomeration. J. Econ. Geogr. 12(1), 247–271 (2012) 24. Taberna, A., Filatova, T., Roy, D., Noll, B.: Tracing resilience, social dynamics and behavioral change: a review of agent-based flood risk models. Socio-Environ. Syst. Model. 2, 17938 (2020, 12) 25. Tabuchi, T.: Historical trends of agglomeration to the capital region and new economic geography. Regional Sci. Urban Econ. 44(1), 50–59 (2014, 1) 26. Tesfatsion, L., Judd, L.K.: Handbook of Computational Economics: Agent-based Computational Economics. Elsevier, New York (2006) 27. von Thünen, J.: The Isolated State, English Pergamon Press, London (1826)
Dynamics of Wealth Inequality in Simple Artificial Societies John C. Stevenson
Abstract A simple generative model of a foraging society generates significant wealth inequalities from identical agents on an equal opportunity landscape. These inequalities arise in both equilibrium and non-equilibrium regimes, with some societies essentially never reaching equilibrium. Reproduction costs mitigate inequality beyond their effect on the intrinsic growth rate. The highest levels of inequality are found during non-equilibrium regimes. Inequality in dynamic regimes is driven by factors different from those driving steady state inequality. Evolutionary pressures drive the intrinsic growth rate as high as possible, leading to a tragedy of the commons.
Keywords Artificial society · Wealth inequality · Population-driven dynamics · Natural selection
1 Introduction

Current studies on wealth inequality use many different approaches: stationary distributions [1], geometric Brownian motion [2], models calibrated to actual economies [3], minimal models of a system [4], to cite just a few. Common to these modelling approaches are assumptions about the dynamics of the process, for example geometric Brownian motion (GBM) and random exchanges of assets [5]. Since in GBM models the wealth distribution never reaches equilibrium and ends up concentrating all the wealth in a diminishing cohort [4], mean reversion or wealth redistribution terms must be added [3]. Rather than assume processes and then fit to empirical data, this research builds a minimal generative model for a system that makes no assumptions about the underlying behaviors or processes other than the population-driven struggle for existence [6, 7]. The purpose is to investigate whether wealth inequality will emerge in societies with equal agents and equal opportunities, and if so, what drives these inequalities.
While this aggregation of simple individuals may barely qualify as a society or an economy, the agents interact with each other through the competition for resources (to survive) and space (to reproduce). From these interactions, complex behaviors of population-driven ecologies [8] and unequal wealth distributions emerge. These wealth distributions show inequalities that are dependent on the intrinsic growth rate of the population, the cost of reproduction, and whether the economy has reached equilibrium. Furthermore, by allowing natural selection to act at the individual level [9], the population responds with a classic tragedy of the commons [10]. Epstein and Axtell's model [11] is simplified by specifying identical agents on an equal opportunity (flat) landscape.1 As in the study of bacteria [7], the population trajectory begins with a single agent and, in these cases, is dependent on two growth characteristics, infertility and birth cost. Population trajectories from two constant parameter scenarios are used in the discussions of societies with growth in the stable regimes: one with "Low Fertility" and the other with significant "Birth Cost". Two other trajectories with evolving parameter scenarios are discussed: one in a stable growth regime, "Evolved Stable", and one in a chaotic growth regime, "Evolved Chaotic". Specific details for these four scenarios are provided in Table 1, and detailed descriptions of the parameters and processes of the simple artificial society model are given in Appendix A. This simplified, population-driven model admits comparison with equation-based continuum modeling of single species populations, developed in the fields of mathematical biology and ecology [8, 14]. These comparisons validate the dynamics of the model, allow calculation of the intrinsic growth rate based on the family of Verhulst Processes [14, 15], and define the various population level regimes of stable, oscillatory, and chaotic. Details of these comparisons and calculations are given in Appendix B.

Table 1 Artificial Society Scenarios

Scenario       | f  | bc | r            | K          | mA          | tW           | G
Low fertility  | 85 | 0  | 0.018 ± 0.4% | 837 ± 0.9% | 473 ± 9.7%  | 4767 ± 4.7%  | 0.53 ± 2.8%
Birth cost     | 10 | 40 | 0.032 ± 0.6% | 824 ± 0.4% | 2264 ± 3.2% | 16254 ± 2.2% | 0.33 ± 2.9%
Evolved stable | 10 | 0  | 0.170 ± 0.5% | 859 ± 3.9% | 12.7 ± 4.2% | 504 ± 14%    | 0.688 ± 3.9%
Evolved chaos  | 1  | 0  | 1.542 ± 14%  | 916 ± 53%  | 1.41 ± 19%  | 480 ± 79%    | 0.73 ± 14%

Scenario simulation parameters: infertility f, birth cost bc, the resultant intrinsic growth rate r, carrying capacity K, mean age mA, total wealth tW, and Gini Coefficient G. Each scenario was run 100 times with different random seeds. Statistics were collected after the simulations had reached steady state (such as it is for the chaotic regime).
1. Epstein and Axtell [11, pp. 32–37, 122] detailed wealth inequalities and identified, but did not pursue, potential investigations. Others [12, 13] considered more complex, evolving configurations without first addressing the underlying sensitivities and dynamics.
2 Wealth Inequality Dynamics

Additional measures were used to develop distributions of the individuals' ages, surplus resources (wealth), and the deaths per cycle. These additional measurements enable a determination of steady state (equilibrium) and dynamic (non-equilibrium) conditions, allow detailed histories of individual and total wealth over time,2 and provide insight both into the relationship of mean age and mean death rate to the implied intrinsic growth rate and into the nature of the society's wealth.
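Since the inequality results below are reported as Gini coefficients computed over fully sampled, equally sized populations, a standard discrete Gini calculation suffices. A minimal sketch (a generic implementation, not the author's code):

```python
import numpy as np

def gini(wealth):
    """Discrete Gini coefficient of a 1-D array of non-negative wealth values,
    using the rank-weighted closed form of the mean absolute difference."""
    w = np.sort(np.asarray(wealth, dtype=float))
    n = w.size
    if n == 0 or w.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * w) / (n * np.sum(w)) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))    # 0.0  (perfect equality)
print(gini([0, 0, 0, 10]))   # 0.75 (one agent holds everything)
```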
2.1 Dynamic Relaxation Times

Figure 1a shows the elite (top 10%) and overall mean age, and Fig. 1b shows the population level and Gini Coefficient; both figures are over time and are for the two constant parameter scenarios. While these populations reach and sustain their carrying capacity levels quite quickly, achieving equilibrium, as measured by mean age and Gini Coefficient, takes much longer (for Low Fertility over 35,000 cycles, an impractical length of time). The initial agents (founders), born into an underpopulated and rich landscape, have a tremendous advantage in building up their personal wealth before the population reaches the actual carrying capacity. Once that carrying capacity has been reached, equilibrium is not achieved until these founding agents have given up their surpluses and expired. Figure 2 shows the wealth histories for the early founders of these two societies. The relationships of inequality and mean age to the growth parameters are substantially different between a society at steady state versus one still in transition.
2.2 Equilibrium Inequality and Sensitivities

While Low Fertility highlights the lack of equilibrium, Birth Cost shows that even societies that do attain equilibrium in a reasonable time have significantly unequal wealth distributions. Figure 3a shows, at equilibrium, the relationship of total wealth to mean age (a proxy for intrinsic growth rate). Surprisingly, even with increasing birth costs, which are sunk costs, the total wealth compares favorably. Figure 3b shows decreasing inequality (Gini Coefficient) with increasing mean age. Two very different populations emerge with similar mean ages but significantly different inequality
2. Comparisons of inequality distributions measured with a single ratio have both mathematical [16, 17] and practical difficulties [18, 19]. Equally sized populations are fully sampled to address these difficulties. In addition, ratios only provide a relative measure of wealth, so total wealth is included to address this issue.
Fig. 1 Relaxation times. a Elite (top 10%) and overall mean ages and total wealth for the two constant parameter scenarios. b Population level and Gini Coefficients for the same scenarios
Fig. 2 Individual Wealth histories a Wealth history for the first thirty agents of the Low Fertility scenario. b Wealth history for the first twelve agents of the Birth Cost scenario
measures. (This effect can also be seen in Table 1.) Also, as mean age decreases to and below 10, the effects of chaotic population trajectories appear.
2.3 Natural Selection of Growth Parameters By applying natural selection pressures to the infertility and birth cost parameters, the artificial society transitions from a complex system to a complex adaptive sys-
[Figure 3 plots total wealth (a) and Gini coefficient (b) against the mean age of the population, with series for increasing puberty, infertility and birth cost]
Fig. 3 Total wealth and inequality at steady state. a The relationship of the population's total wealth to mean age. b The relationship of the Gini coefficient to mean age. Note the strikingly different inequality for the same mean ages

[Figure 4 panels: a Number of agents with indicated parameter values; b Total wealth and Gini Coefficient versus time (generations)]
Fig. 4 Natural selection of reproductive parameters a The selection of infertility and birth cost parameters from a uniform initial distribution limited to the stable population level regimes. b The trajectories of the mean ages and total wealth of the society undergoing this natural selection
tem [9]. Since the selection is occurring on the individual level, it’s classified as a within-group selection. Figure 4 shows that from the initial uniform distribution of infertility and birth cost parameters across the broad ranges allowed, evolution quickly and forcefully selects for the highest possible intrinsic growth rate. This selection results in the lowest mean ages, the lowest total wealth and the highest wealth inequality of all the scenarios at steady state. Even with the range of these parameters restricted to the regimes of stable population levels, the evolution of the
agents' reproductive parameters firmly demonstrates a tragedy of the commons so often associated with within-group selection. Table 1 provides the total wealth and wealth inequality measures for all the scenarios for comparison, which highlights the carnage of the within-group evolution. The generation of endogenous between-group selection, from which cooperation can emerge and the tragedy of the commons can be avoided, is the next challenge for these simple artificial societies [9, 10, 20, 21].
3 Conclusions

Simple societies with equal opportunity environments and equally capable individuals generate complex wealth distributions whose inequalities are dependent on the intrinsic growth rate of the population, the cost of reproduction, and whether the society has reached equilibrium. Some societies never achieve equilibrium in a reasonable time. Determination of the relaxation times of wealth distributions is of interest in modern and complexity economics [22, 23]. The drivers of these inequalities are shown to be much different in the steady state phase than during non-equilibrium transitions. The degree of inequality was shown to be lower, and the total wealth higher, with slower intrinsic growth rates (lower death rates) at steady state. The inclusion of birth costs made additional contributions to reduced inequality beyond their effect of reducing intrinsic growth, while actually increasing the total wealth of the population even though these resources were consumed. It is clear that under most configurations, significant inequalities persist even though agents' capabilities and resource opportunities are equal. After a population has achieved a true steady state, increasing birth cost and infertility decrease the implied, intrinsic growth rate. As the intrinsic growth rate decreases, mean ages and total wealth increase while mean deaths per cycle and inequality decrease. These relationships suggest that the larger inequalities at steady state are characteristic of short-lived populations driven by high death rates. Increasing birth costs reduce inequality at a much greater rate than increasing infertility. And it is perhaps counter-intuitive that these increasing birth costs do not reduce the total wealth of the population though they represent a sunk cost of wealth. Surprisingly, the Birth Cost simulation with the most equal society also has the largest total wealth. The implications for policy would be significant if these birth cost effects are found in natural societies. The highest levels of inequality are found in non-equilibrium periods during and after the initial growth phase, when a small number of agents (founders) reproduce slowly into a rich, under-populated landscape. These founders store such significant resources before the population reaches its carrying capacity that even after the carrying capacity has been reached, the residual surplus resources decay quite slowly, preserving high inequality and preventing equilibrium for time periods orders of magnitude greater than the scale of the initial growth phase. Though these results are for a minimal model of a system, the parallels to the fortunes built at the beginnings of technological and social revolutions are inescapable.
For these simple foraging societies, natural selection drives their reproductive parameters to the values that maximize the individuals' reproductive rates (intrinsic growth rates). While this selection pressure at the individual level successfully maximizes the society's intrinsic growth rate, it has a devastating effect on the society's total wealth and stability, while generating the highest wealth inequalities and lowest mean ages seen in any of these scenarios at steady state. Truly a tragedy of the commons, and one not unfamiliar to human societies. It is surprising and encouraging to see that even from such simple artificial societies with equal agents given equal opportunities, wealth inequalities emerge in dynamic (founders effects), equilibrium (including the out-sized impact of birth costs), and evolving (tragedy of the commons) conditions.
Appendix A—Computational Model and Process

A Model Parameters

Table 2 provides the definitions of the agents' and landscape's parameters for this simple model. Vision and movement are along rows and columns only. The two-dimensional landscape wraps around the edges (often likened to a torus). Agents are selected for action in random order each cycle. The selected agent moves to the closest visible cell with the most resources, with ties resolved randomly. After movement, the agent harvests and consumes (metabolizes) the required resources. At this point, if the agent's resources are depleted, the agent is removed from the landscape. Otherwise, an agent of sufficient age (puberty) then considers reproduction, requiring sufficient resources (birth cost), a lucky roll of the fertility die (infertility), and an empty von Neumann neighboring cell. (The von Neumann neighborhood consists of only the four neighboring spaces one step away by row or column.) If a birth occurs in a configuration with zero puberty, the newborn is added to the list of agents to be processed in the current cycle. Otherwise (puberty > 0), the newborn is placed in the empty cell and remains inert until the next action cycle. With this approach for the action cycle, no endowments are required either for new births or for the agent(s) at start-up. Once all the agents have cycled through, the landscape replenishes at the growth rate and the cycle ends. One metabolism rate (m), uniform across a given population, is modelled and takes the value that consumes, per cycle, 25% less than the maximum capacity per cell. This value allows the agent to gather more resources than those required by its per-cycle metabolism and defines a surplus society, in contrast to a subsistence society where the maximum resources in a cell are equal to the metabolism of the agent. And, again for simplicity, the vision and movement characteristics are set to equal values of distance.
Table 2 Agent and landscape parameters of the simple economic model

Agent characteristic | Notation | Value | Units | Purpose
Vision | v | 6 | Cells | Vision of resources on landscape
Movement | – | 6 | Cells per cycle | Movement about landscape
Metabolism | m | 3 | Resources per cycle | Consumption of resources
Birth cost | bc | 0, 40 | Resources | Sunk cost for reproduction
Infertility | f | 10, 85 | 1/probability | Likelihood of birth
Puberty | p | 1 | Cycles | Age to start reproduction
Surplus | S | 0+ | Resources | Storage of resources across cycles

Landscape characteristic | Notation | Value | Units
Rows | – | 50 | Cells
Columns | – | 50 | Cells
Max capacity | R | 4 | Resources per cell
Growth | g | 1 | Resources per cycle per cell
Initial | R0 | 4 | Resources, all cells
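For concreteness, the per-cycle agent routine of Appendix A can be sketched as follows; the grid helpers (best_visible_cell, harvest, empty_von_neumann_cell, spawn) are assumed to exist and are named for illustration only.

```python
import random

def agent_cycle(agent, grid, params):
    """One action cycle for a single selected agent: move, harvest, metabolize,
    possibly die, possibly reproduce."""
    # move to the closest visible cell with the most resources (ties random)
    agent.pos = grid.best_visible_cell(agent.pos, vision=params.vision)
    # harvest and metabolize; death if resources are depleted
    agent.stores += grid.harvest(agent.pos)
    agent.stores -= params.metabolism
    if agent.stores < 0:
        grid.remove(agent)
        return None
    # reproduction: old enough, enough stores, lucky fertility roll, empty cell
    if (agent.age >= params.puberty
            and agent.stores >= params.birth_cost
            and random.randrange(params.infertility) == 0):
        cell = grid.empty_von_neumann_cell(agent.pos)
        if cell is not None:
            agent.stores -= params.birth_cost     # sunk cost of reproduction
            return grid.spawn(cell)               # the newborn agent
    return None
```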
B Conservation of Resources

The calculation of conservation of energy (resources) confirms the validity of the simulation, provides a precise description of the computational process, and provides independent measurements of the internal resource flows. Figure 5 details the control volumes used for this analysis of the conservation of resources. The source is growth of resources in the landscape and the sinks are agent metabolism, death, and birth costs (if any). The landscape resource conservation equation can be written as:

E_L(t) = Σ_{c=1}^{N_c} g_c(t − 1) − F(t)   (1)
where E_L is the change in total resources of the landscape from the end of the previous cycle t − 1 to the current cycle t, F(t) are the resources foraged by the agents, N_c is the number of cells in the landscape, and g_c(t − 1) is the resource added to cell c at the end of the previous cycle. g_c(t) is given as:

g_c(t) = g, if r[c, t] + g ≤ R;   g_c(t) = R − r[c, t], if r[c, t] + g > R   (2)
Fig. 5 Control volume schematic
where R is the maximum resources in a landscape cell, and g is the growth rate of resources in a landscape cell. The resources F(t) foraged from the landscape by the agents are defined as:

F(t) = Σ_{a=1}^{A(t−1)} r[c(a), t] + δ_{p0} Σ_{a=1}^{B(t)} r[c(a), t]   (3)
where a is the agent index, A(t − 1) is the list of agents alive at the end of the previous cycle, B(t) is the list of new agents generated in this cycle, δ_{p0} is one if the puberty parameter equals 0 and 0 otherwise, c(a) is the cell location of an agent indexed as a, and r[c(a), t] are the resources in the cell occupied by a that are foraged by a at the current time. The conservation equation for the change in resources of the population E_P(t) from the previous to the current cycle can now be written as:

E_P(t) = F(t) − Σ_{a=1}^{A(t)} m − Σ_{a=1}^{D(t)} (S_a(t) + m) − Σ_{a=1}^{B(t)} [bc − δ_{p0} m]   (4)
where A(t) is the list of agents alive, m is the (constant) metabolism, bc is the (constant) birth cost, D(t) is the list of agents that died in this cycle and Sa (t) is the surplus resources of those agents a on list D which have died (Sa (t) < 0) in this cycle so that Sa (t) + m are the (positive) resources lost upon its death.
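Equations (1)–(4) amount to a per-cycle bookkeeping identity, so the conservation check can be written as two assertions over tallies collected during a cycle; a sketch with illustrative variable names:

```python
def check_conservation(grown, foraged, landscape_delta,
                       n_alive, death_losses, n_births,
                       m, bc, puberty_is_zero, population_delta, tol=1e-9):
    """grown: resources added to the landscape at the end of the previous cycle;
    foraged: resources harvested by agents this cycle (Eq. 3);
    death_losses: sum of (S_a + m) over the agents that died this cycle."""
    delta_p0 = 1 if puberty_is_zero else 0
    # Eq. (1): change in the landscape's resources
    assert abs(landscape_delta - (grown - foraged)) < tol
    # Eq. (4): change in the population's resources
    expected = foraged - n_alive * m - death_losses - n_births * (bc - delta_p0 * m)
    assert abs(population_delta - expected) < tol
```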
Appendix B—Single Species Models from Mathematical Biology

A continuous homogeneous model of a single species population N(t) was proposed by Verhulst in 1838 [15]:

dN(t)/dt = r N (1 − N/K)   (5)

where K is the steady state carrying capacity, t is time, and r is the intrinsic rate of growth. This model represents self-limiting, logistic growth of a population and is used to estimate the intrinsic growth rates r of the population trajectories generated by this simple model. A discrete form of the Verhulst process incorporating an explicit time delay τ in the self-limiting term was proposed by Hutchinson [24] to account for delays seen in animal populations. The resulting discrete-delayed logistic equation, often referred to as the Hutchinson-Wright equation [8], is then

N(t + 1) = [1 + r − r N(t − τ)/K] N(t)   (6)

With τ = 3, this model captures the steady state, oscillating, and chaotic population trajectories seen in the simple model with similar intrinsic growth rates. This delay term represents the number of generations a landscape cell
[Figure 6: population (agents) versus time (cycles); panel b's legend marks intrinsic growth rates of 0.025, 0.47, 2 and 2.7]
Fig. 6 a Implied growth parameters for representative oscillating/chaotic, simple subsistence societies. f5p1bc0m4 is a society with f of 5, puberty of 1, bc of 0, and m of 4. f10p1bc0m4 has a f of 10 and the remaining parameters the same as before. b Representative critical regimes of the discrete Verhulst process for simple surplus societies
needs to restore its resources to the metabolic requirement of the agents. Figure 6a provides sample population trajectories for the simple model with implied intrinsic growth rates, and Fig. 6b displays discrete Verhulst process trajectories at critical intrinsic growth rates. These figures demonstrate the relationship of the various growth regimes to their respective intrinsic growth rates.
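The discrete delayed logistic map of Eq. (6) is straightforward to iterate, which is how trajectories such as those in Fig. 6b can be reproduced; a minimal sketch (the values of K and r below are placeholders):

```python
def hutchinson_wright(r, K, tau=3, n0=1.0, steps=600):
    """Iterate N(t+1) = N(t) * (1 + r - r*N(t-tau)/K) from a single agent;
    small r gives smooth logistic growth, larger r gives oscillations and
    eventually chaotic trajectories."""
    N = [n0] * (tau + 1)                       # constant history for the delay
    for t in range(tau, tau + steps):
        N.append(N[t] * (1.0 + r - r * N[t - tau] / K))
    return N[tau:]

# slow-growth example: the trajectory saturates close to the carrying capacity
print(hutchinson_wright(r=0.025, K=800)[-1])
```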
References 1. Benhabib, J., Bisin, A., Zhu, S.: The distribution of wealth and fiscal policy in economies with finitely lived agents. Econometrica 79(1), 123–157 (2011) 2. Benisty, H.: Simple Wealth distribution model causing inequality-induced crisis without external shocks. Phys. Rev. E 95, 052307 (2017) 3. Berman, Y., Peters, O., Adamou, A.: Wealth inequality and the ergodic hypothesis: evidence from the United States, Working Paper (2020). https://ssrn.com/abstract=2794830 4. Adamou A., Peters O.: Dynamics of inequality. Significance 13(3), 32–35 (2016); The Royal Statistical Society 5. Bouchaud, J.P., Mezard, M.: Wealth condensation in a simple model of economy (2000). arXiv:cond-matter/0002374v1 6. Roughgarden, J., Bergmen, A., Hafir, S., Taylor, C.: Adaptive computation in ecology and evolution: a guide for future research. Adaptive Individuals in Evolving Populations, SFI Studies in the Sciences of Complexity, vol. XXVI, Addison-Wesley (1996) 7. Gause, G.F.: The Struggle for Existence. Williams and Wilkins (1934) 8. Kot, M.: Elements of Mathematical Ecology. Cambridge University Press, Cambridge (2001) 9. Wilson, D.S.: Two Meanings of Complex Adaptive Systems, Complexity and Evolution, pp. 31–46. MIT Press (2016) 10. Ostrom, E.: Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, Cambridge (1990) 11. Epstein, J.M., Axtell, R.: Growing Artificial Societies: Social Science from the Bottom Up. Brookings Institution Press (1996) 12. Rahman, A., Setayeshi, S., Zafargandi, S.: Wealth adjustment in an artificial society. Iranian J. Electron. Comput. Eng. 81 (2009) 13. Horres, D., Gore, R.: Exploring the similarities between Economic Theory and Artificial Societies, Semantic Scholar Corpus ID: 51684404 (2008) 14. Murray, J.D.: Mathematical Biology. Springer, Berlin (2002) 15. Verhulst, P.F.: Notice sur la loi que la population poursuit dans son accroissement. Corr. Math. et Phys. 10, 113–121 (1838) 16. Fontanari, A., Taleb, N.N., Cirillo, P.: Gini Estimation Under Infinite Variance, Physica A. Statistical Mechanics and Its Applications (2018). https://doi.org/10.1016/j.physa.2018.02. 102 17. Cowel, F.A.: Theil, Inequality, and the Structure of Income Distribution, DARP 67, May 2003 (2003) 18. Weisbrod, J.: Growth, poverty, and inequality dynamics, peter lang AG (2020). www.jstor.org/stable/j.ctv9hjb04.8 19. Sitthiyot, T., Holasut, K.: A simple method for measuring inequality. Palgrave Commun. 6, 112 (2020). https://doi.org/10.1057/s41599-020-0484-6 20. Pepper, J.W., Smuts, B.B.: The evolution of cooperation in an ecological context: an agentbased model. Dynamics in Human and Primate Societies. Oxford University Press, Oxford (1999) 21. Tverskoi, D., Senthilnathan, A., Gavrilets, S.: The dynamics of cooperation, power and inequality in a group-structured society (2021). Accessed 8 Feb. 2021. https://doi.org/10.31235/osf. io/24svr
22. Rosser, J.B., Jr.: On the complexities of complex economic dynamics. J. Econ. Perspect. 13(4) (1999) 23. Wilson, D.S., Kirman, A.: Complexity and Evolution: Toward a New Synthesis for Economics. MIT Press, London, England (2016) 24. Hutchinson, G.E.: Circular causal systems in ecology. Ann. New York Acad. Sci. 50, 221–248 (1948). https://doi.org/10.1111/j.1749-6632
Social Simulation and Games
Cruising Drivers’ Response to Changes in Parking Prices in a Serious Game Sharon Geva and Eran Ben-Elia
Abstract Scarcity of on-street parking in city centers is a known factor motivating drivers to drive slowly ("to cruise") while searching for an available parking place, and is associated with negative externalities, e.g., congestion, accidents, fuel waste and air pollution. Setting the correct prices has been suggested as a way to bring cruising down to a sustainable level. However, current research methods based on surveys and simulations fail to provide a full understanding of drivers' cruising preferences and their behavioral response to price changes. We used the PARKGAME serious game, which provides a real-world abstraction of the dynamic cruising experience. Eighty-three players participated in an experiment under two pricing scenarios. Pricing was spatially designed as "price rings", with prices decreasing with distance from the desired destination point. Based on the data, we analyzed search time, parking distance, parking location choice and spatial searching patterns. We show that such a pricing policy may substantially reduce the cruising problem, motivating drivers to park earlier, further away from the destination or in the lot, especially when occupancy levels are extremely high. We further discuss the policy implications of these findings.
Keywords Serious games · Cruising · Driver behavior · Parking search
1 Introduction

The scarcity of on-street parking places in city centers motivates drivers to drive slowly (i.e., cruising) while they are searching for a vacant on-street parking place. Cruising is also associated with negative externalities such as air pollution, road accidents and fuel waste, and exacerbates congestion [1, 2]. The cruising phenomenon occurs due to underpriced on-street parking places relative to the parking lot [2]. Price differences spur some drivers to waste time
on a risky search for an on-street parking place rather than park immediately in a certain (but usually more expensive) one in the lot. Thus, establishing a cruising reduction policy is important. However, to accomplish this, a better understanding of drivers' parking search preferences and behavior is required. Studies that sought to understand drivers' preferences have mainly used either static surveys based on stated preferences (SP) and revealed preferences (RP) or agent-based simulation models (ABM) that do not directly involve real drivers. As further elaborated in Sect. 2, although these methods have provided relevant insights into drivers' preferences, major gaps remain and several disadvantages have emerged, particularly regarding the dynamic and spatially explicit parking choice environment cruising drivers face [3, 4]. Our goal, therefore, is to study drivers' behavior using a different approach that overcomes the aforementioned issues. One such method is that of Serious Games (SG). We used the PARKGAME SG to test and model participants' reactions and choices under unique on-street price distributions. Based on the experimental data, we present the analysis of search time and parking distance. The current method provides a fuller understanding of drivers' behavior and preferences in the urban parking context, and helps to formulate parking pricing policies capable of efficiently reducing unnecessary cruising. The rest of the paper is organized as follows: Sect. 2 reviews the state of the art related to the cruising problem, the methodological gaps and the potential of SGs; Sect. 3 presents the methodology of our study; Sect. 4 presents the results of the experiments; Sect. 5 presents a discussion of the results, conclusions and future research directions.
2 Literature Review

2.1 The Cruising Problem

Drivers who want to park in city centers usually choose between two options: on-street (by the curb) or off-street (in a lot) [5]. Parking in the lot is usually immediate but often more expensive than an on-street parking place, which is usually cheaper but may require a longer search time (i.e., cruising). According to Shoup [2], the lack of on-street parking places motivates drivers to cruise (i.e. drive slower) in order to find a vacant place; this contributes to increasing traffic congestion and is also associated with further externalities, e.g. road accidents, fuel waste, air pollution and damage to the pedestrian environment [1, 2]. To reduce car use and parking demand in central areas, developing a sustainable parking policy is mandatory. Such a policy is highly dependent on understanding drivers' behavioral response to parking choice attributes, particularly the parking cost [6]. Cruising begins when parking occupancy rises and on-street parking possibilities decrease. Inci et al. [7] assert that when on-street parking occupancy exceeds 85%, drivers fail to park where they want to, which encourages cruising and
contributes to traffic congestion. San Francisco's SFpark pilot was conducted to understand whether a change in parking prices would lead to a decrease in cruising, by raising or lowering on-street parking prices when occupancy was higher than 80% or lower than 60%, respectively [8]. It was estimated that this pricing policy resulted in a 50% decrease in cruising; search time decreased by 15% and search distance by 12% [9]. Implementing such a policy depends on extensive deployment of expensive monitoring technology, but the impressive results suggest that even without it similar results may well be achieved. Yan et al. [6] found that parking pricing was the most effective component in changing driver behavior in a university campus experiment, where drivers could buy an annual subscription for parking in a remote and cheaper lot or a more expensive one nearby. However, as only university lots were investigated, to understand whether pricing can reduce cruising it is worthwhile to examine the potential implementation of such a pricing policy in city centers, where parking demand is particularly high.
2.2 Estimating Drivers' Behavior

Cruising is related to drivers' parking preferences. Parking search behavior depends on different attributes, such as available facilities, occupancy rate, prices and expected dwelling time [10]. When a driver does not find a vacant place near the destination, she has a few options to choose from: continue cruising around the destination; cruise further away, where the demand is lower; or park in a parking lot. Either way, there will be a time cost [1]. Studies conducted between 1927 and 2001 found that the average search time ranged from 3.5 to 14 min [2]. Similarly, Hampshire et al. [11] analyzed CCTV and GPS records and found that drivers were willing to cruise for up to 10 min. According to Benenson et al. [12] and Levy et al. [13], drivers tend to search for parking within a radius of 100–400 m from the destination, and the maximum length of a cruising route is about 3 km [11]. These cruising characteristics elucidate the importance of understanding drivers' behavior better in order to deal with this phenomenon. Studies often use SP, RP and ABM to estimate drivers' preferences and responses to parking policies. When using SP, there is a risk of hypothetical bias, where respondents overestimate or underestimate due to the lack of concreteness in their decisions (Brownstone and Small [14]). In choice experiments, there is a known gap between making a decision based on a description and one based on experience. The problem with description-based decisions is that participants tend to overestimate the possibility that rare scenarios will happen. In contrast, when participants experience a rare event among several, they tend to underestimate such cases [15]. Another disadvantage of research based only on observations is that background factors such as weather, noise, and smells cannot be isolated [16], and similar scenarios cannot be reproduced and compared [17]. Brooke et al. [3] note the advantages of SP surveys, such as the ability to focus on certain characteristics or understand what is important to drivers. Similarly, RP surveys
are relatively short ( 0.05)
73 (p < 0.1)
67 (p > 0.05)
Ring 3 | 188 (p < 0.001) | 78 (p > 0.05) | 108 (p > 0.05) | 97 (p > 0.05)
Ring 4 | 149 (p < 0.001) | 108 (p > 0.05) | 172 (p < 0.001) | 111 (p < 0.01)
Lot | 249 (p < 0.001) | 196 (p < 0.01) | 179 (p < 0.001) | 180 (p < 0.05)
Log-likelihood | −3158.2
28 (p < 0.05)
to the lot without arriving too late to the meeting. Thus, while some participants succeeded in parking at those rings within a short time, others parked there on the way back to the lot after a much longer time. The remaining insignificant variables are in the OH condition (rings 2-3-4 in scenario 1 and 2-3 in scenario 2), which is probably due to participants finding a vacant place much quicker in this condition. Evidently, search times are significantly different in the various rings: for participants who parked farther from the destination, search times were significantly shorter than for those parking close to the destination, in all conditions. Based on this result, we can assert that the occupancy level had little effect on search time in the first ring, while in the other rings, OH in both scenarios contributed to a reduction in search times (as the probability of finding a vacant place increased).

The price effect is elucidated by the between-scenario differences in search times for participants who eventually parked in the lot. In scenario 2, lot search time was significantly shorter than in scenario 1. Considering the meeting time (190 s from the start of the game), higher prices in scenario 2 clearly influenced participants' behavior, i.e., cruising for less time and eventually parking at the lot without being late for the meeting. The participants apparently understood that if they continued searching and arrived too late for the meeting, they would pay a hefty fine, which likely encouraged them to park in the lot in advance of the meeting. In contrast, when the on-street prices were lower (scenario 1), participants were willing to cruise for longer, with some hoping to succeed in parking in a cheaper on-street vacant place. Lower prices motivated them to park at the lot only after giving up on finding on-street parking, at the cost of a much longer search time compared to scenario 2.

While the analysis above highlights how price and occupancy influence the participants' choices in terms of search time, the use of spatial tools allows us to understand whether particular spatial patterns were created. Using the Getis-Ord tool, "hot" and "cold" spots of spatial phenomena can be obtained [24]. Figure 2 shows the products of this tool for identifying long and short search times. As can be seen, in scenario 1, short search times are concentrated on the starting point and destination axis, especially near the starting point. Long search times are concentrated around the destination, in the most expensive ring. Evidently, under S1OH, the short search time area expanded slightly east and west. In contrast, in scenario 2, the difference in both short and long search times is clearly noticeable. Although the short search times remain between the starting point and the destination, they are more scattered to the east and west. The long search time zone moves away from the destination area, westward in S2OE and northwards in S2OH. This spatial comparison makes it possible to notice the effect of the high prices in scenario 2 on search patterns, which cause the participants to park relatively quickly over a wider area while spending a relatively longer time searching farther from the destination.
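For reference, the hot/cold spot tool [24] is typically based on the Getis-Ord statistic; the following standard formula is added here only for clarity and is not quoted from the paper:

$$
G_i^* = \frac{\sum_{j=1}^{n} w_{ij} x_j - \bar{X} \sum_{j=1}^{n} w_{ij}}
{S \sqrt{\dfrac{\,n \sum_{j=1}^{n} w_{ij}^2 - \left(\sum_{j=1}^{n} w_{ij}\right)^2}{n-1}}},
\qquad
\bar{X} = \frac{1}{n}\sum_{j=1}^{n} x_j,
\quad
S = \sqrt{\frac{1}{n}\sum_{j=1}^{n} x_j^2 - \bar{X}^2},
$$

where $x_j$ is the search time observed at location $j$ and $w_{ij}$ is the spatial weight between locations $i$ and $j$; large positive values of $G_i^*$ (expressed as z-scores) indicate hot spots and large negative values indicate cold spots.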
Fig. 2 The Getis-Ord products, hot and cold spot search time areas in scenario 1 and 2 by extreme occupancy and high occupancy: S1OE: top right; S1OH: top left; S2OE: bottom right; S2OH: bottom left
4.2 On-Street Distance from the Destination

We fitted a GLMM to estimate how distance depends on the price and occupancy levels, and their interaction (Table 2). The estimates of S1OE, S2OE and S1OH were significant, while S2OH was not. Accordingly, we can assert that when the price level is lower, participants park closer to the destination, while if the occupancy level is lower, the probability of finding a vacant place increases, allowing them to park somewhat farther away. In contrast, the estimated distance in S2OH was not significant, which can be attributed to the distribution of on-street parking places by prices in this condition, which is practically uniform (χ2 = 0.09, df = 3, p > 0.05). The results of the model indicate a strong price effect on the distance from the destination, and highlight the higher possibility of finding a vacant place in the high occupancy condition. Figure 3 shows the Standard Deviation Ellipse, which summarizes the spatial characteristics of geographic features (i.e. the selected parking locations in each condition in both scenarios) [24].

Table 2 Estimated on-street parking distance according to the GLMM results, divided by the occupancy level and scenario
Variable          Coefficient (mean distance estimate)    Sig
S1OE              163.04                                  p < 0.001
S1OH              189.08                                  p < 0.001
S2OE              185.78                                  p < 0.05
S2OH              228.18                                  p > 0.05
Log-likelihood    −2435.6
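As a reading aid for Tables 1 and 2: the paper does not spell out the exact GLMM specification, so the following is only a plausible sketch of the kind of model behind these coefficients, not the authors' stated formulation. With $y_{ij}$ the search time (Table 1) or parking distance (Table 2) of participant $j$ in game round $i$, condition-specific fixed effects and a per-participant random intercept would give

$$
g\!\left(\mathbb{E}[y_{ij}]\right) = \beta_{c(i)} + u_j, \qquad u_j \sim \mathcal{N}(0,\sigma_u^2),
$$

where $c(i)$ indexes the price–occupancy condition (and, for Table 1, the ring or lot), $g$ is the link function, and the reported coefficients correspond to the $\beta_{c(i)}$ estimates.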
Fig. 3 The Standard Deviation Ellipse products of the extreme (red) and high (black) occupancies by scenario. Solid lines represent scenario 1, and dashed lines scenario 2
It can also be seen visually that, in each scenario, in OH participants parked farther from the destination compared to OE. In addition, this movement is oriented mainly towards the starting point. These remote parking locations are clearly seen in scenario 2, especially in the OH condition. Parking locations shifted from the destination towards the starting point, indicating that such a pricing policy may well encourage drivers to start searching for a vacant place farther away from the destination. According to the search time GLMM results (Table 1), the overall search time could be significantly shortened if drivers indeed searched farther away.

In order to understand whether gender had some influence on the participants' choices, we used Mann-Whitney tests to compare the results of males and females. We tested the differences for search time and distance from the destination in each condition (i.e. each occupancy separately), and no significant differences were found in any of the conditions.
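For reference, the Standard Deviation Ellipse shown in Fig. 3 can be characterised (in its standard form, not quoted from [24]) via the covariance matrix of the parking coordinates $(x_k, y_k)$:

$$
\Sigma = \frac{1}{n}\sum_{k=1}^{n}
\begin{pmatrix}
(x_k-\bar{x})^2 & (x_k-\bar{x})(y_k-\bar{y})\\
(x_k-\bar{x})(y_k-\bar{y}) & (y_k-\bar{y})^2
\end{pmatrix},
$$

with the ellipse centred on the mean centre $(\bar{x}, \bar{y})$, its axes aligned with the eigenvectors of $\Sigma$, and the semi-axis lengths proportional to the square roots of the corresponding eigenvalues.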
5 Discussion

The current research examined how prices and occupancy levels affect drivers' behavior using a game-based approach. Following a set of training games, participants navigating a grid-like network succeeded in learning: (1) where to search for a vacant on-street parking place; (2) how much time to spend on cruising; (3) where to park the car. Based on the data obtained from these games, we analyzed how price and occupancy influenced search time and on-street parking distance from the destination. We can assert that while occupancy influences whether drivers succeeded
in parking on-street—something extremely difficult under the extreme level compared to the high occupancy—the price level had significant effects on lot and on-street search time, and on the distance of on-street parking from the destination, as shown in the GLMM models.

As mentioned, the analysis of participants' behavior during the experiment reveals important insights about drivers' cruising behavior and changes in parking preferences. Generally, we showed that if on-street prices rise and become similar to those of the lot, drivers will park the car in the lot within a substantially shorter search time. In this case, the total search time will also decrease. Accordingly, we can assert that increasing on-street parking prices to the lot's price will reduce cruising as well. This result is consistent with the literature [2], according to which equating on-street and lot prices can diminish cruising. More specifically, the pricing policy we established in the current research has some practical implications. If the prices of the area nearest to the area of highest demand are similar to those of the lot, drivers will be willing to park the car farther away and more quickly, where on-street prices are lower, or within a shorter search time in the lot. The revealed behaviors reflect the sensitivity of drivers to prices and are compatible with the findings of Yan et al. [6], who assert that price changes can cause drivers to change their parking behavior. In such situations, the total search time decreases, particularly when drivers park near the point where they start to search for a vacant place, as the participants did in scenario 2 (Fig. 3). This assertion corresponds to Millard-Ball et al. [35], who found that on-street parking search time decreased when the parking place was near the driver's starting search point. This finding contrasts with the situation of underpriced on-street parking, as in scenario 1, where lower prices lead to longer cruises around the destination. Drivers displayed a clear preference to search for a vacant on-street place (and park) near the destination when the price there was significantly lower than the lot (as in scenario 1)—behavior consistent with the literature about drivers' sensitivity to prices [1].

As for the effect of occupancy, we note that the rates of lot and on-street parkers in both scenarios were similar in both occupancy conditions. This result points out that drivers prefer to park on-street as long as the prices are lower than the lot. However, as noted above, participants' choices in the experiment clearly elucidate drivers' preferences—park faster in the lot or farther away if on-street prices are high, or cruise for a long time and close to the destination if on-street prices are low. In this sense, we can assert that to demotivate cruising when on-street occupancy is very high, higher on-street prices are sufficient to achieve a sustainable parking policy in inner city areas. Note that we investigated high and extreme levels of occupancy, which reflect the scarcity of on-street parking and the cruising problem; otherwise, parking on-street is relatively easy and straightforward. Based on the above, we can assert that our hypotheses were accepted, i.e. in the second scenario search time decreased (H1) and the distance from the destination increased (H2).
5.1 Limitations

The current research has some limitations which future research can consider. First, the homogeneity of the research area (i.e., the grid on which the experiment was performed). This is a simplification of reality which made it possible to separate navigation (i.e., route choice) from cruising (parking choice). Future research can consider heterogeneous layouts where the gamified network mimics real streets. It is possible that where the network layout requires more careful routing (e.g., one-way streets or turn prohibitions), the observed behaviors will be more accentuated (e.g., less time spent cruising and more parking in the lot) compared to those observed in our experiment. Therefore, our results can be considered a lower benchmark for expected driver behavior. Fulman and Benenson [32] showed, using an ABM, that the spatial distribution of land use can influence cruising, with centralization of land use resulting in more acute cruising phenomena. Second, we tested cruising considering only one parking lot located at the destination point. In most cases, lots are spread out in different locations, and drivers can choose in which to park. Fulman et al. [32] showed that the distance between lot and destination can affect cruising behavior—increasing the search path (SDE). Future research could consider including lots at different rings, reflecting different prices. Third, the results of our research regarding sensitivity to pricing and the interaction with occupancy can also be incorporated in the future in an ABM; that is, the elasticities to price and occupancy estimated from our models can be applied. Fourth, the game produced substantial amounts of data that could be used to estimate a dynamic discrete choice model.
5.2 Closing Remarks

The current research examined drivers' behavior regarding parking preferences, in order to reduce cruising in high-demand areas, using the PARKGAME SG as a realistic compromise between static SP/RP surveys and abstract ABMs. Our results suggest two possible routes to reducing the cruising phenomenon: (1) motivate drivers to park at the lot earlier; (2) motivate drivers to park on-street but farther away from high-demand areas, which will also happen earlier in order to avoid lateness. To encourage drivers to park earlier in the lot, increasing on-street prices to levels close to those of an adjacent lot is helpful, and applying a ring pricing policy with prices decreasing with distance is recommended. Naturally, in this research we avoided discussing "the elephant in the room"—namely whether drivers should be dissuaded from driving altogether by using sustainable transport modes such as public transport (PT). Notwithstanding, PT as another option to avoid cruising could be included in a future SG, and the conditions under which drivers would choose to quit altogether instead of attempting to park could be investigated in conjunction with pricing policies and occupancy conditions.
References

1. Inci, E.: A review of the economics of parking. Econ. Transp. 4(1–2), 50–63 (2015). https://doi.org/10.1016/j.ecotra.2014.11.001
2. Shoup, D.C.: Cruising for parking. Transp. Policy 13(6), 479–486 (2006). https://doi.org/10.1016/j.tranpol.2006.05.005
3. Brooke, S., Ison, S., Quddus, M.: On-street parking search: review and future research direction. Transp. Res. Rec. 2469, 65–75 (2014). https://doi.org/10.3141/2469-08
4. Lehner, S., Peer, S.: The price elasticity of parking: a meta-analysis. Transp. Res. Part A Policy Pract. 121, 177–191 (2019). https://doi.org/10.1016/j.tra.2019.01.014
5. Inci, E., Lindsey, R.: Garage and curbside parking competition with search congestion. Reg. Sci. Urban Econ. 54, 49–59 (2015). https://doi.org/10.1016/j.regsciurbeco.2015.07.003
6. Yan, X., Levine, J., Marans, R.: The effectiveness of parking policies to reduce parking demand pressure and car use. Transp. Policy 73, 41–50 (2018). https://doi.org/10.1016/j.tranpol.2018.10.009
7. Inci, E., van Ommeren, J., Kobus, M.: The external cruising costs of parking. J. Econ. Geogr. 17(6), 1301–1323 (2017). https://doi.org/10.1093/jeg/lbx004
8. Millard-Ball, A., Weinberger, R.R., Hampshire, R.C.: Is the curb 80% full or 20% empty? Assessing the impacts of San Francisco's parking pricing experiment. Transp. Res. Part A 63, 76–92 (2014). https://doi.org/10.1016/j.tra.2014.02.016
9. Alemi, F., Rodier, C., Drake, C.: Cruising and on-street parking pricing: a difference-in-difference analysis of measured parking search time and distance in San Francisco. Transp. Res. Part A Policy Pract. 111, 187–198 (2018). https://doi.org/10.1016/j.tra.2018.03.007
10. Polak, J., Axhausen, K.W.: Parking search behaviour: a review of current research and future prospects. Transport Studies Unit, Working Paper 540 (1990)
11. Hampshire, R.C., Jordon, D., Akinbola, O., Richardson, K., Weinberger, R., Millard-Ball, A., Karlin-Resnik, J.: Analysis of parking search behavior with video from naturalistic driving. Transp. Res. Rec. 2543, 152–158 (2016). https://doi.org/10.3141/2543-18
12. Benenson, I., Martens, K., Birfir, S.: PARKAGENT: an agent-based model of parking in the city. Comput. Environ. Urban Syst. 32(6), 431–439 (2008). https://doi.org/10.1016/j.compenvurbsys.2008.09.011
13. Levy, N., Martens, K., Benenson, I.: Exploring cruising using agent-based and analytical models of parking. Transp. A Transport Sci. 9(9), 773–797 (2013). https://doi.org/10.1080/18128602.2012.664575
14. Brownstone, D., Small, K.A.: Valuing time and reliability: assessing the evidence from road pricing demonstrations. Transp. Res. Part A Policy Pract. 39(4), 279–293 (2005). https://doi.org/10.1016/j.tra.2004.11.001
15. Hertwig, R., Erev, I.: The description–experience gap in risky choice. Trends Cogn. Sci. 13(12), 517–523 (2009). https://doi.org/10.1016/j.tics.2009.09.004
16. Mol, J.M.: Goggles in the lab: economic experiments in immersive virtual environments. J. Behav. Exp. Econ. 79, 155–164 (2019). https://doi.org/10.1016/j.socec.2019.02.007
17. Fox, J., Bailenson, J.N.: Virtual self-modeling: the effects of vicarious reinforcement and identification on exercise behaviors. Media Psychol. 12(1), 1–25 (2009). https://doi.org/10.1080/15213260802669474
18. Chaniotakis, E., Pel, A.J.: Drivers' parking location choice under uncertain parking availability and search times: a stated preference experiment. Transp. Res. Part A Policy Pract. 82, 228–239 (2015). https://doi.org/10.1016/j.tra.2015.10.004
19. Golias, J., Yannis, G., Harvatis, M.: Off-street parking choice sensitivity. Transp. Plan. Technol. 25(4), 333–348 (2002). https://doi.org/10.1080/0308106022000019620
20. Simićević, J., Vukanović, S., Milosavljević, N.: The effect of parking charges and time limit to car usage and parking behaviour. Transp. Policy 30, 125–131 (2013). https://doi.org/10.1016/j.tranpol.2013.09.007
21. Ma, X., Sun, X., He, Y., Chen, Y.: Parking choice behavior investigation: a case study at Beijing Lama Temple. Proc. Soc. Behav. Sci. 96, 2635–2642 (2013). https://doi.org/10.1016/j.sbspro.2013.08.294
22. Klein, I., Ben-Elia, E.: Emergence of cooperation in congested road networks using ICT and future and emerging technologies: a game-based review. Transp. Res. Part C 72, 10–28 (2016). https://doi.org/10.1016/j.trc.2016.09.005
23. Salen Tekinbaş, K., Zimmerman, E.: Rules of Play: Game Design Fundamentals. MIT Press (2004)
24. Mitchell, A.: The ESRI Guide to GIS Analysis, vol. 2. Esri Press, Redlands, CA (2005)
25. Erev, I., Ert, E., Roth, A.E.: A choice prediction competition for market entry games: an introduction. Games 1(2), 117–136 (2010). https://doi.org/10.3390/g1020117
26. Rapoport, A., Kugler, T., Dugar, S.: Choice of routes in congested traffic networks: experimental tests of the Braess Paradox. Games Econom. Behav. 65(2), 538–571 (2009). https://doi.org/10.1016/j.geb.2008.02.007
27. Miller, H.J.: Beyond sharing: cultivating cooperative transportation systems through geographic information science. J. Transp. Geogr. 31, 296–308 (2013). https://doi.org/10.1016/j.jtrangeo.2013.04.007
28. Card, S.K., Mackinlay, J.D., Shneiderman, B.: Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann (1999)
29. Zyda, M.: From visual simulation to virtual reality to games. Computer 38(9), 25–32 (2005). https://doi.org/10.1109/MC.2005.297
30. Zichermann, G., Cunningham, C.: Gamification by Design. O'Reilly Media, Inc. (2011)
31. Peters, V., Vissers, G., Heijne, G.: The validity of games. Simul. Gaming 29(1), 20–30 (1998). https://doi.org/10.1177/1046878198291003
32. Fulman, N., Benenson, I., Ben-Elia, E.: Modeling parking search behavior in the city center: a game-based approach. Transp. Res. Part C Emerg. Technol. 120, 102800 (2020). https://doi.org/10.1016/j.trc.2020.102800
33. Carrese, S., Negrenti, E., Belles, B.B.: Simulation of the parking phase for urban traffic emission models. In: TRISTAN V-Triennial Symposium on Transportation Analysis, Guadeloupe (2004)
34. Greiner, B.: Subject pool recruitment procedures: organizing experiments with ORSEE. J. Econ. Sci. Assoc. 1(1), 114–125 (2015). https://doi.org/10.1007/s40881-015-0004-4
35. Millard-Ball, A., Hampshire, R.C., Weinberger, R.: Parking behaviour: the curious lack of cruising for parking in San Francisco. Land Use Policy 91 (2020). https://doi.org/10.1016/j.landusepol.2019.03.031
Quantum Leaper: A Methodology Journey From a Model in NetLogo to a Game in Unity

Timo Szczepanska, Andreas Angourakis, Shawn Graham, and Melania Borit
Abstract Combining Games and Agent-Based Models (ABMs) in a single research design (i.e. GAM design) shows potential for investigating complex past, present, or future social phenomena. Games offer engaging environments that can help generate insights into social dynamics, perceptions, and behaviours, while ABMs support the representation and analysis of complexity. We present here the first attempt to "discipline" the interdisciplinary endeavour of developing a GAM design in which an ABM is transformed into a game, the two thus becoming intertwined in one application. In doing this, we use as a GAM design exemplar the process of developing Quantum Leaper, a proof-of-concept video game made in the Unity software and based on the NetLogo implementation of the well-known "Artificial Anasazi" ABM. This study aims to consolidate the methodology component of the GAM field by proposing the GAM Reflection Framework, a tool that can be used by GAM practitioners, ABM modellers, or game designers looking for methodological guidance with developing an agent-based model that is a game (i.e. an agent-based game).

Keywords Agent-based model · Archaeology · Framework · Game · Game design · Interdisciplinarity · Methodology · Reflection
T. Szczepanska · M. Borit (B)
Norwegian College of Fishery Science, UiT The Arctic University of Norway, Tromsø, Norway
e-mail: [email protected]
A. Angourakis
McDonald Institute for Archaeological Research, University of Cambridge, Cambridge, UK
S. Graham
Department of History, Carleton University, Ottawa, Canada
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Czupryna and B. Kamiński (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_15

1 Introduction

GAM, combining Games and Agent-Based Models (ABMs) in a single research design, is a unique way to investigate complex past, present, or future social phenomena. Using GAM, researchers benefit from the individual strengths of Games
and ABMs and their synergistic effect. Games offer engaging environments to generate insights into social dynamics, perceptions, and behaviours, while ABMs support complexity analysis. The GAM field is relatively new and only a few methodological descriptions are available for those who want to venture into it. A recent systematic literature review of the GAM field [15] provides a general description of six research designs to combine Games and ABMs. The designs are organised into two groups: (1) sequential combinations over time, either from a Game to an ABM or from an ABM to a Game, and (2) simultaneous combinations, where either the ABM provides support to the Game (e.g., to calculate the effect of player actions on their environment) or the ABM and the Game are merged into one integrated application. There is no systematic method description of how to implement this latter design (i.e. GAM type 6, ABM = GAME), which leads practitioners and newcomers in this field to still rely largely on intuition and on ad hoc solutions (own creations or imitations). This study contributes to filling this gap and proposes a way to "discipline" the process of transforming an ABM into a Game (i.e. an agent-based game).

This study can be used by GAM practitioners who want to increase the learning value of their practices as well as their rigour and transparency. Coordinators of research using GAM designs, ABM modellers or game designers can use this study as guidance to structure the collaborative work in interdisciplinary teams that use these designs.

This study uses Quantum Leaper (QL) as an exemplar of GAM type 6, ABM = GAME. QL is a proof-of-concept video game made in Unity and based on the NetLogo implementation of the well-known "Artificial Anasazi" (AA) ABM. Here we present an overview of the QL development, highlighting the main design steps. Starting from these steps we propose a high-level reflection framework that integrates conceptual thinking from interdisciplinarity, ABM development, and game design, i.e. the GAM Reflection Framework. To capture the journey of making this framework, we used a storytelling approach to structure the remainder of this paper as a conclusive narrative.
2 The Settings: The Backdrop and Environment for the Story

The setting is the time and the location in which a story takes place. This setting can be very specific, but it can also be broader. In the case of our story, the setting was a foggy place. We reviewed the literature for general methodological advice about how to develop a GAM design of type 6, ABM = GAME, but were not able to find such descriptions. However, we found several individual examples of GAM studies (e.g. [7, 11]). In order to be able to orientate ourselves through the fog, we decided to use an exemplar GAM study as our focal point and selected the Quantum Leaper video game for this role (more details about how this decision was taken are given in Sect. 3). From there on, we looked into studies similar to QL, but found only projects
where a 3D game interface was used to visualise and query the output of ABMs depicting historical populations (e.g. [3, 16]). After coming to this understanding, the nature of the settings of our story came clearly into sight: we were exploring an uncharted domain, but we had a starting point.
3 The Characters: Their Role and Purpose

A story usually includes a number of characters, each with a different role and purpose, and there is almost always a protagonist and an antagonist or obstacle to overcome. In our story, there were four protagonists, each with their unique set of tools. Some of these tools can be seen as characters in their own right, having the role of deuteragonist, i.e. the constant companion to the protagonist during the journey. The interest in agent-based modelling and games was a common characteristic of the four protagonists. Otherwise, two of these were digital archaeologists and the other two were active in the natural resource management field. The latter were on a quest of disentangling the methodological intricacies of using games and ABMs as a research device for sustainable resource management when they came across Quantum Leaper. Interested in connecting with the social simulation community, the QL designers joined the quest. While the antagonist in this story is the difficulty of the disentangling process, the constant companions to the protagonists during their journey were QL and the "Artificial Anasazi" ABM. QL was used as an (almost) ideal specimen of GAM design type 6 (i.e. an exemplar), as it clearly displayed an agent-based game: an ABM, a game, and the interconnection between the two. The possibility of directly working with its designers had the potential of making explicit the implicit decision-making processes of creating this agent-based game.

Quantum Leaper. QL, a side-project of two of the co-authors of this study, was initiated in 2017 and conceptualised as an experiment to embed ABMs into immersive video games, particularly considering the potential of such an integrated approach for archaeology. It aimed to demonstrate that 'playing' ABMs immersively can reveal new insights about both the model and the system represented. Even though unfinished, QL was presented publicly on several occasions [2], raising the interest of a wide and diverse public, ranging from archaeologists to game designers. QL is based on the NetLogo implementation of the well-known "Artificial Anasazi" ABM. For more details, see the part of the development files in [1] and Chap. 5 in [6].

The "Artificial Anasazi" (AA) ABM. AA received great attention because of its implications for socio-ecological resilience in the face of climate change. It represents the population dynamics in the Long House Valley in Arizona (USA) between 800 and 1350 AD [4]. Archaeological data show that the valley was abandoned towards the end of this period, and the main hypothesis put forward pointed to climate change as the main cause. To address this and other hypotheses, the model relates a population of households with a simplified maize-based food economy, dependent on soil types and changing humidity conditions. Simulations are
evaluated in reference to the historical estimates of population size and distribution per year. The original authors interpreted the results as indicating that climate change alone was not sufficient to explain the abandonment of the valley. The model was first implemented in Ascape, which is now virtually inaccessible, but was later implemented in NetLogo [19] and published in two quasi-equivalent versions: Janssen's [9, 10] and the NetLogo Model Library's [14].
4 The Journey: The Travel From an ABM to a Game

The QL project was organised into three work packages whose tasks intertwined: (A) ABM replication, adaptation, and extension, (B) Game conceptual design, and (C) Game development.

A. ABM replication, adaptation, and extension

Both versions of AA (implemented in NetLogo 5.3.1) were reviewed and translated to C#, the primary language used for scripting in Unity, a popular cross-platform game engine. The alternative of running NetLogo from a C# script in Unity was considered, but discarded due to its technical complexity and potential licensing issues (i.e., releasing a copy of NetLogo together with the game). Bringing AA to C# and Unity involved the following tasks.

A1. Creation of a C# library for ABM. C# is a general-purpose, object-oriented language (i.e., not specialised in ABMs). It has little resemblance to NetLogo's syntax, lacking most of its key primitives (e.g., the ask command). Thus, the first, and necessary, step in translating the model was the creation of a C# library implementing those NetLogo built-in features used in AA.

A2. Code revision and modification in NetLogo. Code reviewing was guided and complemented by related publications scattered over the last thirty years, including the work done more recently in expanding the original model. By studying the model in detail and translating the NetLogo code line-by-line, the QL development team soon encountered a few issues that had to be addressed before moving to C#. These included the following.

Spatial input data. The files accompanying both NetLogo implementations (e.g., water.txt, settlements.txt) included impossible coordinates for a few "water points" and historical settlements. Given that this issue has a minimal impact on aggregate behaviour and the original raw data is hardly traceable, it was decided to exclude these data entries.

Scheduling and data time-series. The model scheduling was found to be shifted with respect to the palaeoenvironmental time-series data (e.g., adjustedPDSI.txt, environment.txt, water.txt), which regulate agricultural productivity in each year/location in the model. For instance, the data corresponding to the first year (800 AD) is used twice, during the setup and go procedures. The issue was solved by counting setup as the first year and updating the year counter at the start of time steps rather than at the end.
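To make task A1 more concrete, the following is a minimal, hypothetical sketch of how an ask-style helper could look in C#; it is not the actual QL library, and the names (Agent, NetLogoPrimitives, Ask, OneOf) are illustrative assumptions only:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of NetLogo-like primitives in C# (illustrative only).
public abstract class Agent
{
    public bool Dead { get; protected set; }
}

public static class NetLogoPrimitives
{
    private static readonly Random Rng = new Random();

    // NetLogo's `ask` runs a code block for each agent in random order.
    // Iterating over a shuffled copy tolerates agents being removed inside the block.
    public static void Ask<T>(IList<T> agents, Action<T> block) where T : Agent
    {
        var shuffled = new List<T>(agents);
        for (int i = shuffled.Count - 1; i > 0; i--)   // Fisher-Yates shuffle
        {
            int j = Rng.Next(i + 1);
            (shuffled[i], shuffled[j]) = (shuffled[j], shuffled[i]);
        }
        foreach (var agent in shuffled)
        {
            if (!agent.Dead) block(agent);
        }
    }

    // NetLogo's `one-of`: a random element, or default for an empty collection.
    public static T OneOf<T>(IList<T> agents)
    {
        return agents.Count == 0 ? default : agents[Rng.Next(agents.Count)];
    }
}
```

Keeping such primitives in one static helper class, rather than scattering random-order loops through the model code, lets the C# translation read closer to the original NetLogo procedures.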
Fig. 1 Difference in trend after the revision of the Artificial Anasazi model in NetLogo (number of individuals over years; trajectories shown: historical, simulation (original), simulation (reviewed)). Simulations under two random seeds, 0 and 123, are given as examples
Inheritance of maize stock. In both NetLogo versions, the inheritance of maize stock, happening during household fission, was not functioning as the modellers (presumably) intended. When "fissioning", the parent household discounts a certain amount of stock, as determined by a parameter (maize-gift-to-child, in the NetLogo library's version). However, the child household receives a different amount, completely unrelated to the parent's stock. This was corrected by enforcing a perfect equivalence between the amount discounted from the parent's stock and the amount received by the child. After these corrections were made, still within NetLogo, AA produced system trajectories that were already quite different from the originals (Fig. 1).

A3. Model adaptation and extension. After reviewing the code and consolidating the game concept (i.e., immersive, first-person, see Sect. 4), it became clear that the original model had to be further modified. These corrections and modifications made the simulation runs display more path-dependent trajectories, as the success of new households was closely related to the previous success of the parent household. These are the most important changes.

Break up household population into individual members. Households are the atomic units of the AA model. They were modelled as if they were asexual organisms that are born out of a parent organism, give birth to other child organisms, and eventually die of starvation or old age. A household's fitness at any given year depends on its stock of food (cultivated maize), the consumption rate per person-year, and the number of people inside. Under this design, the population of a household, e.g., five people, will appear from thin air in a given year (a household is born), generate new fully-populated households under certain conditions (household fission), and then disappear after a certain number of years (household death). However, this conceptualisation was considered an obstacle to designing an immersive game in first-person perspective. The solution was to expand the model by adding a "character" or "person" dimension within households. These characters are not
proper agents; they are accounted for in array variables inside each household agent (e.g., age = 34, 25, 7, 5, 1 indicates the ages of the five individual members in a household). Characters are the ones being born, having children, and eventually dying, while a household will only "die" if there are no characters inside. In game development, this modification was combined with a controlled random number generator to allow believable characters (with name, age, sex, lineage, etc.) to be tracked in time and space.

Convert constants and parameters. Several household parameters were re-interpreted as parametric or emergent distributions of household members' variables. For instance, the consumption of maize per capita, previously applied to all households as a global parameter, became a set of parameters defining a probability distribution from which to draw values for each individual. The QL version of the model is consequently more stochastic, has fewer global parameters, and is less affected by specific parameter settings.

B. Game conceptual design

The main game concept is inspired by the NBC science-fiction television series Quantum Leap (1989–1993). Thus, the player is an archaeologist from the future involved in an experimental technique that allows consciousness to time-travel. An accident happens and the player's consciousness travels to the past, involuntarily replacing the consciousness of a person that lived in the Ancestral Puebloan culture, formerly called Anasazi, in the Long House Valley (Arizona, USA) between 800 and 1350 AD. When this happens, the course of history changes. In order to come back to the present (i.e., finish the game), the player has the task of matching the game's course to the historical development (increasing a convergence score). This can be done by incarnating in individuals, immersing into their biographies, and influencing the behaviour of immediate peers through dialogue and social interaction. This combination of context and mechanics was considered the best solution for making the agent-level perspective compatible with immersive gameplay, given the centuries-long scale of simulations. The game flow is represented in Fig. 2.
Fig. 2 Quantum Leaper game flow
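Returning to the model adaptation described in work package A above (A3), the following is a minimal, hypothetical C# sketch of a household whose members are tracked as simple per-person lists; it is not the actual QL code, and all names and parameter values are illustrative assumptions:

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch: a household agent whose members are plain values,
// not full agents, mirroring the "character dimension" added to Artificial Anasazi.
public class Household
{
    public List<int> Ages = new List<int>();   // e.g. {34, 25, 7, 5, 1}
    public double MaizeStock;                  // stored maize

    public bool IsDead => Ages.Count == 0;     // household "dies" only when empty

    // One simulated year: members age, consume maize, and may die or be born.
    public void Step(double consumptionPerPersonYear, double birthProbability,
                     int maxAge, System.Random rng)
    {
        for (int i = 0; i < Ages.Count; i++) Ages[i]++;

        MaizeStock -= Ages.Count * consumptionPerPersonYear;

        // Starvation or old age removes individual members, not the whole household.
        if (MaizeStock < 0)
        {
            MaizeStock = 0;
            if (Ages.Count > 0) Ages.RemoveAt(Ages.Count - 1);
        }
        Ages.RemoveAll(a => a > maxAge);

        // New members are born with some probability per adult member.
        int adults = Ages.Count(a => a >= 16);
        for (int i = 0; i < adults; i++)
        {
            if (rng.NextDouble() < birthProbability) Ages.Add(0);
        }
    }

    // Household fission: the child household receives exactly what the parent gives up.
    public Household Fission(double maizeGiftFraction)
    {
        double gift = MaizeStock * maizeGiftFraction;
        MaizeStock -= gift;
        return new Household { MaizeStock = gift, Ages = new List<int> { 16 } };
    }
}
```

The Fission method also illustrates the maize-inheritance correction mentioned above: the amount removed from the parent's stock and the amount received by the child household are, by construction, the same quantity.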
Fig. 3 Prototype in-game screenshots of the Incarnation scene
C. Game development

The Unity game engine was chosen for the implementation of the game, as it is relatively straightforward to learn progressively, allowing for fast development while containing the potential for complexity, both in terms of code and aesthetics (Fig. 3). These were the major QL development tasks.

In-game management of simulation data. To connect simulations with gameplay effectively, one of the first tasks was to program a system to serialise and deserialise simulation data. During gameplay, the system will create binary files, each containing the state of the simulation at the end of a time step (i.e. year). These files are re-written every time the simulation is run from an earlier year. Simulation data is deserialised when entering the Incarnation scene and used to generate or configure game objects (e.g., age affects characters' height).

Loading and decorating the 3D landscape. Because AA is placed in a real location (Long House Valley, USA), the development team aimed at using real-world spatial data to configure the 3D space experienced during character incarnation. However, this presented three sets of challenges: (i) finding a Digital Elevation Model (DEM) with good-enough resolution, importing it to Unity, and making it realistic when experienced from the first-person perspective; (ii) applying terrain textures and adding the scenery (natural environment, buildings, and characters) through procedural generation; and (iii) loading and deleting terrain chunks seamlessly around the moving player, which is required given the large size of the entire valley area. After overcoming these challenges, a set of Unity-C# assets were developed and released [1].

Dialogue system. An interactive narrative system using Twine-Tracery (the interactive fiction tool Twine combined with the grammar-expansion library Tracery) was employed to mediate between player and non-player characters. The player's decisions regarding dialogue options feed information back to the simulation by modifying certain variables (e.g., convincing characters to eat less will decrease the consumption of maize of those individuals).

Artistic assets. Audiovisual elements (e.g., 3D models, textures, text, sound effects) in games are critical for player immersion. In QL, the development team used Unity's own sponsored community (the Unity Asset Store), which includes several basic free assets that can be used for learning and prototyping.

User interface (UI) and game system. A minimal UI and game system were created for QL using the resources found in the Unity Asset Store, including a splash and start
screen, options and game start menus, a loading screen, player controllers, and a HUD (heads-up display) showing the current year and convergence percentage in the top-right corner of the screen.

QL has been developed as a side project and remains an unfinished and unpublished prototype. The project still lacks a functional system and text base to handle dialogues, which is the primary action during gameplay and a key factor for immersion. Additionally, to reach QL's full potential, artistic assets should be curated by experts on the Ancestral Puebloans (e.g., anthropologists, archaeologists, native community representatives).
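As a concrete illustration of the in-game management of simulation data described above, here is a minimal, hypothetical C# sketch of per-year binary serialisation; it is not the actual QL code, and the file layout, class and field names are assumptions made only for this example:

```csharp
using System.IO;

// Illustrative sketch: one binary snapshot per simulated year, so the game can
// deserialise the state when entering the Incarnation scene or re-run from an earlier year.
public class YearSnapshot
{
    public int Year;
    public int Population;
    public double TotalMaizeStock;

    // Overwrites any existing file for this year (snapshots are re-written
    // whenever the simulation is re-run from an earlier year).
    public void Save(string directory)
    {
        string path = Path.Combine(directory, $"year_{Year}.bin");
        using (var writer = new BinaryWriter(File.Open(path, FileMode.Create)))
        {
            writer.Write(Year);
            writer.Write(Population);
            writer.Write(TotalMaizeStock);
        }
    }

    public static YearSnapshot Load(string directory, int year)
    {
        string path = Path.Combine(directory, $"year_{year}.bin");
        using (var reader = new BinaryReader(File.OpenRead(path)))
        {
            return new YearSnapshot
            {
                Year = reader.ReadInt32(),
                Population = reader.ReadInt32(),
                TotalMaizeStock = reader.ReadDouble()
            };
        }
    }
}
```

In a full implementation the snapshot would of course carry the complete household and character state; the point here is only the one-file-per-year pattern that makes rewinding the simulation cheap.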
5 The Conflict Resolution: Where the Protagonist Finally Overcomes the Conflict, Learns to Accept It, or Is Ultimately Defeated by It

At the resolution point, the story usually ends. The protagonist fulfills the initial goal, does not fulfill it, or transforms it. In our case, after a close analysis of the journey, we propose the GAM Reflection Framework and invite the reader to discuss its usefulness.

How to describe in a meaningful way the integration of an ABM with a Game? Answering this question was a hard nut to crack. As a first contribution to this answer, we propose a reflection framework. As explained by Rapoport, "[…] frameworks are neither models nor theories. Models describe how things work, whereas theories explain phenomena. Frameworks do neither; rather they help to think about phenomena, to order material, revealing patterns…" [13] (page 256). After closely examining the development of QL, we found especially the last sentence appealing: as a first step in the GAM method, we think that one has to go through a structured reflection about what is being combined, how, and why. Reflection is considered the key learning activity to transform concrete experience into abstract concepts, to generalize main ideas and principles [12]. Moreover, reflection is a process that utilizes knowledge that "lies deep within (tacit knowledge)—so deep it is often taken for granted and not explicitly acknowledged, but it is the data humans use to make instinctive decisions based upon accumulated knowledge from past actions and experience" [8] (page 22). As such, it seems to us of crucial importance to have useful tools that guide this process, especially in a tangled endeavour such as using GAM designs.

The process of combining ABMs and games in a GAM design can be understood as a process of interdisciplinary research, in the sense that it involves disciplines with contrasting paradigms, forcing researchers to cross subject boundaries in order to create new knowledge, theory, and/or methods and solve a common research goal [17]. As such, the GAM Reflection Framework is an adaptation of the protocol for assessing the interdisciplinarity of models proposed by [18], which maximises the extraction of implicit knowledge and decisions. However, proposing tools seems
Table 1 The GAM reflection framework (Part 1) applied to the case of Quantum Leaper

PART 1 Formal reflections

WHAT contributes to GAM: What disciplines and knowledge bodies were involved and integrated?
In the ABM: Archaeology, expertise of ABM modellers
In the Game: Archaeology, game design, expertise of native communities (at a future point) & expertise in the various implementation tools (e.g., Unity)

HOW GAM is performed: Which resources were used? Explain why these were used.
Empirical (datasets and sources): Spatial data files given with the NetLogo implementations of the model; height map (or Digital Elevation Model) of the location (Longhouse Valley, Arizona, USA). Source: USGS, through terrain.party.
Methodological (methods): Agent-based modelling; game design for creating open-world 3D first-person games; general storytelling techniques (e.g., rhythm, plot devices), and visual storytelling.
Theoretical (theories): Knowledge used for reviewing and extending the AA model: complex adaptive systems; human ecology and demography.
Technical (tools): NetLogo (ABM preparation); Unity and C# (game development); Twine-Tracery (interactive text system); Terrain.party (obtaining terrain heightmap); GIMP (image editing); Audacity (audio editing); Free Music Archive, SoundCloud, Freesound (obtaining sound effects and music).

WHY GAM is used: What new knowledge is produced by the GAM design? What problem does it aim to solve?
Epistemological (to produce new understanding and knowledge): Experience a multiagent system from a first-person perspective; to enable new insights about the model and the dynamics of the systems it aims to represent.
Instrumental (to solve a problem or a societal challenge): Bridge the gap between the formal, unintuitive definition of complex socio-ecological phenomena found in ABMs and the more general understanding of how society relates to environment, particularly but not only by non-modellers.
easier than applying them. Thus, in order to give a taste of its applicability and encourage its use, we provide a demonstration on the QL case, which, out of space considerations, is included directly in the framework (Tables 1 and 2). The core assumptions of the GAM Reflection Framework are: (1) that the analysed application includes an ABM (pre-existing or developed from scratch); (2) that the analysed application includes a Game, with all the necessary elements of a game (e.g., mechanics, dynamics, aesthetics); (3) that both the ABM and the Game
Table 2 The GAM reflection framework (Part 2) applied to the case of Quantum Leaper

PART 2 General reflections

Team (organisation, communication etc.): QL was developed by a two-person team working mostly side-by-side on different tasks. It was noticeable that the team lacked some key skills, particularly those of a trained artist and writer. Most work was done in Unity and, at the time (2017), sharing Unity projects in an orderly way was more challenging than today. Unity now offers a built-in cloud service with version control, through which collaborators can work on the same project.

Game engines or platforms (pros and cons, challenges etc.): Unity is surely one of the most comprehensive and accessible game engines available at present. The QL prototype was developed relatively fast thanks to this and given the vast online community of users sharing Unity assets, including C# code snippets. However, it is a tool in constant change and improvement, making learning new features a never-ending necessity. Engaging with some kind of formal learning (e.g. MOOCs) is recommendable to make the most of it.

Transparency and rigour (measures taken etc.): The team kept an ongoing design document where notes about advancements and new ideas were stored and shared. The code base of the ABM and game system has been constantly tested, refactored and annotated, aiming at making it reproducible and readable for a wider public. Screen video recordings were made after different milestones in development and shared on YouTube.

Stakeholders (interaction etc.): (Pending until after the game is published)

Outputs/outcomes (what was produced, how it was received etc.): (Pending until after the game is published)
co-exist and are integrated in one single application, thus running simultaneously; (4) that the GAM design has a research purpose. The GAM Reflection Framework is divided into two parts: reflections structured around the interdisciplinarity of the endeavour (Part 1 Formal reflections; Table 1) and reflections structured around the general process of building the agent-based game (Part 2 General reflections; Table 2). While reflections are usually undertaken at the end of a task or activity, we encourage potential users to use this framework before, during, and/or after the GAM design process is finished. We base this recommendation on findings from research on learning, which explain that in order to make reflection useful for the development of cognitive and not only affective levels, reflection should be implemented in a well-structured, intentional manner with purposeful fidelity throughout the course of activities [5].

We envisage four types of users of the GAM Reflection Framework: GAM practitioners, coordinators of research that includes GAM designs, ABM modellers, and game designers. GAM practitioners can use the framework to increase the value of learning from their own practices, in addition to increasing the rigour and transparency of these practices. Using the framework can also help these users to express
clearly the interdisciplinary characteristics of their agent-based game. Coordinators of research that includes GAM designs can use the tool to plan the research tasks, while ABM modellers and game designers can use it as guidance to structure the collaborative work in interdisciplinary teams that use these designs or to assess whether such work is something that they want to add to their portfolio.
6 Conclusion

Building on experience with interdisciplinary research, on insights from using reflection as a learning tool, and on the description of the steps taken to transform the NetLogo ABM "Artificial Anasazi" into a Unity-based immersive first-person video game, Quantum Leaper, this paper attempts to "discipline", or bring some methodological organisation to, the field of combining ABMs and Games. As such, this study provides a framework for reflections during the process of combining these two. We aim at contributing to the discussion and consolidation of methodological principles that are generally applicable to research using GAM by proposing the GAM Reflection Framework. We present a brief demonstration of this framework by examining the Quantum Leaper video game. This framework is intended as a tool that can be combined with other approaches and frameworks, contributing to the development of the GAM field. The framework is a potentially learning-rich tool for GAM practitioners, coordinators of GAM design-based research, ABM modellers, and game designers alike.

Acknowledgements The first and the fourth authors would like to acknowledge the support of the project "SimFish—Innovative interdisciplinary learning in fisheries and aquaculture" (UiT The Arctic University of Norway grant no. UiT Fyrtårn 2015 and Norgesuniversitetet grant no. NUVP47/2016). These authors would also like to thank Harko Verhagen for his encouragement and comments during this study and Loïs Vanhée for his comments.
References

1. Angourakis, A.: Andros-Spica/ProceduralTerrainScenery: procedural terrain scenery (Unity) (2020). https://doi.org/10.5281/ZENODO.3881938
2. Angourakis, A.: Andros-Spica/TIPC2-Angourakis-Graham-2018: gaming Artificial Anasazi. Applying immersive game design and storytelling to an agent-based model in archaeology (2021). https://doi.org/10.5281/ZENODO.4580392
3. Antunes, R.F., et al.: Animating with a self-organizing population the reconstruction of medieval Mértola. In: Proceedings of the Eurographics Workshop on Graphics and Cultural Heritage (GCH'17), Eurographics Association, Graz, Austria, pp. 1–10 (2017). https://doi.org/10.2312/gch.20171286
4. Axtell, R.L., et al.: Population growth and collapse in a multiagent model of the Kayenta Anasazi in Long House Valley. In: Proceedings of the National Academy of Sciences of the United States of America, vol. 99(Suppl. 3), pp. 7275–7279 (2002). https://doi.org/10.1073/pnas.092080799
5. Cavilla, D.: The effects of student reflection on academic performance and motivation. SAGE Open 7(3) (2017). https://doi.org/10.1177/2158244017733790
6. Graham, S.: An Enchantment of Digital Archaeology: Raising the Dead with Agent-Based Models, Archaeogaming and Artificial Intelligence. Berghahn Books, New York (2020). https://doi.org/10.3167/gra7866
7. Guyot, P., Honiden, S.: Agent-based participatory simulations: merging multi-agent systems and role-playing games. J. Artif. Soc. Soc. Simul. 9(4) (2006). https://EconPapers.repec.org/RePEc:jas:jasssj:2006-57-2
8. Helyer, R.: Learning through reflection: the critical role of reflection in work-based learning (WBL). J. Work-Appl. Manag. 7(1), 15–27 (2015). ISSN: 2205-2062. https://doi.org/10.1108/JWAM-10-2015-003
9. Janssen, M.: Artificial Anasazi (2013). https://doi.org/10.25937/krp4-g724
10. Janssen, M.: Understanding Artificial Anasazi. JASSS 12(4) (2009). http://jasss.soc.surrey.ac.uk/12/4/13.html
11. Kleczkowski, A., et al.: Spontaneous social distancing in response to a simulated epidemic: a virtual experiment. BMC Public Health 15(1) (2015). https://doi.org/10.1186/s12889-015-2336-7
12. Kolb, D.A.: Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall, Englewood Cliffs (1984)
13. Rapoport, A.: Thinking about home environments. In: Altman, I., Werner, C.M. (eds.) Home Environments, pp. 255–286. Springer US, Boston (1985). https://doi.org/10.1007/978-1-4899-2266-3_11
14. Stonedahl, F., Wilensky, U.: NetLogo Artificial Anasazi model. Evanston (2010). https://ccl.northwestern.edu/netlogo/models/ArtificialAnasazi
15. Szczepanska, T., et al.: Mixing games and agent-based modeling. Research design patterns. In: Presentation at the 5th Workshop on Integrating Qualitative and Quantitative Evidence Using Social Simulation (2020)
16. Trescak, T., Bogdanovych, A., Simoff, S.: City of Uruk 3000 BC: using genetic algorithms, dynamic planning and crowd simulation to re-enact everyday life of ancient Sumerians. In: Social Simulation Conference (2014)
17. Tress, B., Tress, G., Fry, G.: Defining concepts and the process of knowledge production in integrative research. In: From Landscape Research to Landscape Planning: Aspects of Integration, Education and Application, vol. 13, pp. 13–26 (2006). https://doi.org/10.1016/j.ssi.2010.08.015
18. Weber, C.T., Borit, M., Aschan, M.: An interdisciplinary insight into the human dimension in fisheries models. A systematic literature review in a European Union context. Front. Mar. Sci. 6 (2019). https://doi.org/10.3389/fmars.2019.00369
19. Wilensky, U.: NetLogo. Evanston (1999). https://ccl.northwestern.edu/netlogo/
How Perceived Complexity Impacts on Comfort Zones in Social Decision Contexts—Combining Gamification and Simulation for Assessment

Frederick Herget, Benedikt Kleppmann, Petra Ahrweiler, Jan Gruca, and Martin Neumann
Abstract This paper is about the ambiguous love-hate relationship people have with complexity in social decision contexts: there seems to be a tipping point where increasing complexity, seen as exciting and satisfying, turns into an overwhelming and annoying nuisance. People tend to have an intuitive understanding about what constitutes a complex situation. The paper investigates this intuition to find out more about complexity in a bottom-up approach, where a complexity definition would emerge from people's intersubjective understanding. It therefore looks for the relation between a subjective feeling individual people might arbitrarily share with others by chance, and objective, measurable features underlying a decision situation. The paper combines gamification and simulation to address these questions. By increasing complexity in a gamified social decision situation, empirical data is generated about people's complexity intuitions. The empirical games are then simulated—calibrated by the gamification setting for producing artificial data. The analysis compares the ratings of perceived complexity and satisfaction in empirical games with a set of metrics derived from the simulations. Correlations between participants' ratings and simulation metrics provide insights into the complexity experience: sentiments about complexity may be related to objective features that enable a bottom-up definition of measurable social complexity.

Keywords Gamification · Simulation · Social complexity measures
F. Herget (B) · B. Kleppmann · P. Ahrweiler · J. Gruca · M. Neumann
Johannes Gutenberg University Mainz, Jakob-Welder-Weg 20, 55128 Mainz, Germany
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Czupryna and B. Kamiński (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_16

1 Introduction

Professional distributors of video games nowadays employ a great range of experts for the development of the gaming experience. Games need to be designed such that players do not lose interest after initial playing. This is where the work of the experts starts, who have to transform the general idea and general narrative of the game into an interesting immediate experience by the design of the levels and tasks
players have to face. The "level-boss" must be neither too easy nor impossible to defeat; getting this wrong might mean losing a portion of potential players, with the game's complexity erecting barriers. Finding a good balance is a permanent challenge for game developers. People seem to be instinctively attracted to increasing complexity, which makes life more interesting. However, at a certain point, growing complexity seems to become intractable, increasing the desire for simplicity.

This everyday observation relies on a certain assumption about the issue of complexity. 'Complexity' is something that is related to human perception and sentiment concerning a certain action context. It is about humans dealing with perceived complexity. The observation seems to imply the hypothesis that there is a tipping point up to which increasing complexity is tractable, attractive and desirable, and after which it becomes intractable, scary and annoying. This paper is about operationalising and testing the tipping point hypothesis. It tries to reconstruct a complexity landscape where people can judge what is considered as too complex or not complex enough and where they can agree or disagree upon tractability. Is 'too much complexity' or 'not enough complexity' in a specific social decision context just a subjective feeling individual people might arbitrarily share with others by chance, or are there objective, if not measurable, features underlying the social decision situation that agree with people's perceptions and sentiments? People tend to have an intuitive understanding about what constitutes a complex situation. Can we use this intuition to find out more about complexity in a kind of bottom-up approach, where a complexity definition would emerge from people's intersubjective understanding and not from a top-down imposition by some theorist? Is the tipping point existent for everybody? Is it the same for everybody? What features bring it about? All these questions are interesting, because they point to intersubjective features that underlie complexity perceptions/sentiments, and, with that, to 'objective' features that can be related to human perceptions/sentiments.

The approach to identify intersubjective features of judging something as complex and relate them to objective features is twofold and involves two types of 'laboratories'—one physical and the other computational. On the one hand, empirically measuring perceptions of complexity requires a laboratory-like situation that allows observations to be considered as more than anecdotal evidence. This is achieved by employing the method of gamification. By letting people play a set of games, where games can be seen as social decision systems, a complexity landscape in a controlled physical environment is created. These games consist of a framework of rules that the players can be subject to. This laboratory for measuring human judgement in decision contexts by gamification explores observable characteristics of perceived game complexity and delves into people's reasoning concerning the sentiments behind them. The gamification lab produces empirical output data.

On the other hand, measuring objective features requires a more formalised environment. Thus, the gamification is complemented with a simulation of the games in a computational laboratory. To search for objective features in the laboratory-controlled complexity landscape, the games empirically played by the human players will be simulated on the computer. This reproduces the games' rule space and
the empirical games' output data in the artificial realm, checking them against 'objective measures' of complexity such as algorithmic complexity or pattern complexity in the output database. Simulating the empirical games is intended to translate the features that underlie human judgements of complexity into an environment where these features must present themselves as objective measures of the simulation. Hence, the combination of the physical and the computational lab allows the research questions to be answered by comparing features in processes and output. However, we must be prepared that the controlled world of games will not let us reach what Knight [1] calls 'true uncertainty', and with this, an important feature of complexity. This is an 'out-of-lab' situation of the real world: decision making in social systems is contextualised in dynamic environments. There is an "antagonism between isolation of a system and contingency through contextualisation" [2]. Contextualisation situates decision systems in permanently changing environments. Each specification of the precise environmental condition existing in time and space can be relevant for the dynamics, precluding all options for calculation and prediction. Furthermore, people are a source of permanent turmoil in decision systems due to their capacity to disappoint our behavioural expectations at any point in time. In our paper, we 'simulate' this out-of-the-lab experience with yet another game that comes close to the situation of double contingency in human interaction. The next section introduces the state of the art in the field, leading to a set of research questions and an approach to investigation. This is followed by a description of the models we used, a summary of the metrics generated, and a list of hypotheses associated with these metrics. Finally, we give an overview of the results, discuss them, and provide an outlook for further research.
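One such 'objective measure' can be illustrated with a compression-based proxy for algorithmic (Kolmogorov) complexity. The following minimal Python sketch is ours and purely illustrative; it is not the measure actually used in the computational lab described below.

```python
import zlib
import random

def compression_complexity(sequence) -> float:
    """Proxy for algorithmic complexity: size of the compressed representation
    of a simulation output log relative to its raw size."""
    raw = ",".join(map(str, sequence)).encode("utf-8")
    return len(zlib.compress(raw, level=9)) / len(raw)

# A strictly regular output log compresses well (low value) ...
print(compression_complexity([10, 0] * 50))
# ... while an irregular one does not (value closer to 1).
print(compression_complexity([random.randint(0, 20) for _ in range(100)]))
```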
2 State of the Art

What makes a decision situation 'complex'? Psychology and cognitive science coined the term 'cognitive complexity', first introduced by psychologist James Bieri [3, 4] in 1955. This research approach is based on creating constructs in a grid. If different constructs are construed in a similar way, the whole system is "simple"; if not, it is considered "complex". These terms were later revised by W.H. Crockett [5] as "differentiation" and "integration". The measured complexity can be used to predict behaviour. Apart from its embedding in personal construct psychology, cognitive complexity has been influential in the study of human–computer interaction. In particular, similarities can be established to Kolmogorov complexity [6], which determines the complexity of a given entity by the length of the shortest algorithm that can create this entity as an output. Cognitive complexity is mostly seen as an expansion of Kolmogorov complexity. This gave rise to "Simplicity theory", which states that the attractiveness of a situation is based on its being simpler than predicted. Important contributions were made by researchers from various backgrounds, such as Jürgen Schmidhuber
[7], Jean-Louis Dessalles [8], Jacob Feldman [9], Nick Chater [10] and Paul Vitányi [11]. A similar direction is taken by the work of Nobel laureate Daniel Kahneman, who summed up his work in "Thinking, Fast and Slow" [12]. According to Kahneman, decision-making processes happen in one of two modes: the fast mode, which is intuitive and emotional, or the slow mode, which is logical and deliberative. This points to the idea that simplicity is connected with intuition and emotions, while high complexity is connected to the necessity of rational analysis. This area of research on complexity is highly connected to computer science and information theory, which deal with issues of computational complexity [13] or entropy. Particularly the latter was strongly influenced by the works of Kolmogorov, Claude Shannon, Harry Nyquist and Ralph Hartley and has been greatly extended in the last decades [14]. Modern societies require citizens to process complicated, highly specialised information for forming opinions and deciding on complex societal issues. Usually, 'decision' is understood as a choice between alternatives based on preferences [15]. However, this understanding is insufficient: not only because 'preferences' are a black box for all sorts of ideas about why people are doing what they are doing; this simplistic understanding of decision making also assumes a closed world structured into stable alternatives that stand for choice. A famous example of the faults in this assumption is the so-called El Farol problem put forward by mathematician Brian Arthur [16]. Complexity science perspectives like Arthur's locate social decision making in turbulent environments with high uncertainty and ambiguity: they assign to social decision making multi-scale dynamics with high contingency and non-linearity, emergence, pattern formation, path dependency, recursive closure, and self-organization. The decision problems described by the El Farol example, however, are not only insights from complexity theory, but concern a fundamental sociological concept about social life: "There is a double contingency inherent in interaction. On the one hand, ego's gratifications are contingent on his selection among available alternatives. But in turn, alter's reaction will be contingent on ego's selection and will result from a complementary selection on alter's part. Because of this double contingency, communication, which is the precondition of cultural patterns, could not exist without both generalization from the particularity of specific situations (which are never identical for ego and alter) and stability of meaning which can only be assured by 'conventions' observed by both parties." [17]. We permanently create and change the world we want to predict for decision making. There is no analytical solution to the problem which would allow us to 'choose between alternatives following our preferences'. Alternatives are not stable; choice is not an option. The only stabilizing features we have to ground our expectations in are norms, habits, roles, or institutions [18], which will give us some scaffolding but no certainty. The El Farol bar problem has been implemented as an ABM in the common NetLogo models library as a didactic example to teach about complexity issues in social decision making. Such a type of ABM has much in common with board games. In a game the players are allowed to execute certain rules that are given by
the game. Of course, the players typically have options to choose between different rules, bringing in an element of stochasticity. This is the same as what agents do in an ABM: agents act according to rules, typically also with an element of stochasticity. Thus, a close correspondence exists between games and ABM. For instance, in teaching ABM, the Schelling model [19] can also be played with humans instead of being executed by a computer, so that students can gain a physical experience of what is going on in an ABM. This similarity suggests systematically complementing ABM and games in an ongoing overall research design. In fact, such a combination has often been done. A recent review of the literature on games and ABM [20], based on in-team knowledge and a systematic review in Scopus and Science Direct, examined a total of 52 papers that describe a combination of games and ABM. Results show that games and ABM have been combined since the early 2000s and increasingly so since 2010.
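The El Farol bar problem mentioned above can itself be sketched in a few lines. The following Python snippet is only an illustration of the kind of inductive decision rule involved; the predictor set, capacity and population size are invented and do not reproduce Arthur's original setup or the NetLogo implementation.

```python
import random

N_AGENTS, CAPACITY, WEEKS = 100, 60, 52
history = [50]                                   # past attendance figures
rules = [random.choice(["last", "mean", "mirror"]) for _ in range(N_AGENTS)]

def predict(rule, hist):
    if rule == "last":                           # "this week will be like last week"
        return hist[-1]
    if rule == "mean":                           # average of the last few weeks
        return sum(hist[-4:]) / len(hist[-4:])
    return N_AGENTS - hist[-1]                   # contrarian "mirror" guess

for week in range(WEEKS):
    attendance = sum(predict(r, history) < CAPACITY for r in rules)
    history.append(attendance)

print(history[-5:])   # attendance typically hovers around the capacity threshold
```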
3 Research Questions and Approach

Reviewing the state of the art, many questions are still open, among them those about the exact relationship between intersubjective perceptions and objective definitions of complexity. As mentioned in the introduction, people tend to have an intuitive sense of complexity for which they do not require a clear conception. The question is thus whether we can use this intuition to find out more about complexity in a kind of bottom-up approach, where a complexity definition would emerge from people's intersubjective understanding rather than from a top-down imposition by some theorist. It is noteworthy, however, that this experimental definition of complexity might blur its meaning. It is not obvious how, or whether, players distinguish between, e.g., complexity and mere complicatedness. This implies that when we talk about complexity in this paper, we refer to what players considered as such within the games and not necessarily to anything that follows from or corresponds to a lexical definition. Yet the bottom-up nature of the situation might nonetheless provide a conceptual understanding insofar as there is consistency in the players' perception, i.e. if the players agree on too much or too little complexity. The general idea of our setup is to combine insights from a gamified empirical situation with metrics that can be inferred from the simulation. The games described in the next section were played empirically during a workshop and later simulated. The primary focus is to compare the ratings of perceived complexity from participants of the empirical games with a set of metrics derived from the simulations and to study the associated correlations. This allows us to correlate sentiments with operationalisable measures. In the ideal case of stable correlations between participants' ratings and certain simulation metrics, we get an intrinsic understanding of the complexity experience. This intrinsic understanding does not require an understanding of what psychologically causes the participants to perceive a complex situation or of what constitutes a complex situation in an extrinsic and objective sense. We are, therefore, able to avoid a reductionist approach to the study of social complexity:
A situation that will be considered complex can be identified without requiring an understanding of what makes a situation essentially complex by operationalising it via metrics derived from simulation.
4 Model Description

Empirical data was collected at a participatory workshop. Most of the 20 workshop invitees were academics at different career stages, ranging from university student to professor. Several group exercises and a series of gamification sessions were used to test the plausibility of hypotheses related to complexity perception. One session was a modified application of the agent-based model "Party", which can be found in the NetLogo software models library [21]. The first of five games was played following the original party simulation rules: "This is a model of a cocktail party. The men and women at the party form groups. A party-goer becomes uncomfortable and switches groups if their current group has too many members of the opposite sex (…) The party-goers have a TOLERANCE that defines their comfort level with a group that has members of the opposite sex. If they are in a group that has a higher percentage of people of the opposite sex than their TOLERANCE allows, then they are considered "uncomfortable", and they leave that group to find another group. Movement continues until everyone at the party is "comfortable" with their group" [22]. In our empirical situation the change of tables was realized by throwing a die. Upon finishing their first game, the group of players was asked to think of new rules altering the condition under which a player would feel uncomfortable. In a plenary discussion, the group decided on five rule modifications that were feasible to implement in an actual game. In a vote, the altered rules were then placed on a rating scale indicating expected complexity (Table 1).

Table 1 Party simulation rule modifications created by the workshop participants

Rule          | Text
Original rule | Leave the table if 50 percent or more players of the opposite sex are at your table!
Rule 1        | Leave table after two rounds
Rule 2        | There is a topic in every round (e.g. hobbies). The ones at the table with similarities must stay, the others move. (topics: hobbies, heights, birth month, favourite colour)
Rule 3        | Those sitting next to a person who wears white socks or no socks move
Rule 4        | Of those who are uncomfortable according to the original rule, only the ones born in January, February, March, April, May, or June move. The others stay. Uncomfortable people choose one person to stay with and only one is throwing the dice (the next time the other one is throwing the dice)
Rule 5        | Throw dice to indicate how comfortable you are; even nr = comfortable – uneven nr = uncomfortable
Table 2 Collective rating of predicted complexity before playing the games; individually perceived complexity and individual satisfaction after playing the games (1 = low, 5 = high)

Rule   | Predicted complexity (a) | Perceived complexity (b) | Perceived complexity (c) | Participant satisfaction (d) | Participant satisfaction (e)
Rule 1 | 1 | 1.95 | 2   | 2.95 | 3
Rule 5 | 2 | 1.45 | 1   | 2.35 | 2
Rule 3 | 3 | 3.85 | 4   | 3.25 | 3
Rule 2 | 4 | 3.55 | 3.5 | 1.75 | 1
Rule 4 | 5 | 4.2  | 4.5 | 2.3  | 2

(a) Ranking of the players; (b) Mean of players' individual complexity perception; (c) Median of players' individual complexity perception; (d) Mean of players' individual satisfaction level; (e) Median of players' individual satisfaction level
Starting with the rule rated lowest on the complexity scale, the games were played sequentially. Each ended either by reaching a stable state or by running into a time limit. Afterwards, the players were instructed to individually validate the collectively predicted complexity ranking and to correct it according to their own complexity perception whenever necessary. These ratings were remarkably consistent among players: Cronbach's alpha for the players' complexity perception is 0.964. Additionally, each player rated their level of satisfaction when playing the games on a five-point scale for every rule (Table 2). Complementing this empirical data on perceived complexity, a computational simulation reproducing the workshop games was used to measure potential complexity metrics. The computational implementation of the party simulation was built as a Python script, allowing the simulation to be combined with libraries for statistical and computational analysis. Of the twenty workshop participants, eleven were female and nine male, allowing a similar distribution of a binary attribute to be implemented in the simulated population. However, the empirical distribution of the attributes necessary for the subsequent four games had to be obtained from public datasets (see the attachments). These attributes were: hobbies, height, birth month, favourite colour, and sock colour. This setting, however, has certain limitations that we hope to overcome in future work: both the computational metrics, such as the runtime of certain algorithms, and the empirical games require extension and further standardization. One limitation concerns an implicit influence of the design of the game situation on the conception of complexity. To add 'El Farol'-type complexity mimicking the situation of double contingency, a sixth game was designed by participants with the following rule, which can be added to each of the other rules except Rules 1 and 5. Rule 6: "if you are unhappy according to the current rule, decide for a table to move to where you would be more happy according to the current rule. Note down your decision on a piece of paper without showing others or making your decision
known. Everybody moves at the same time following a signal. Do not change your original decision whilst moving". By having to anticipate where they might be happy in the next round, the players must also anticipate each other's strategies and thus create a double contingency.
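To make the simulated games concrete, the following Python fragment sketches a single timestep of the party simulation under the original rule. It is a simplified illustration written for this purpose; the table count, data structures and function names are our own choices and not those of the authors' actual script.

```python
import random

N_TABLES, N_GUESTS, TOLERANCE = 5, 20, 0.5

def uncomfortable(guest, guests):
    """Original rule: leave if 50% or more of those at your table are of the opposite sex."""
    same_table = [g for g in guests if g["table"] == guest["table"]]
    opposite = sum(1 for g in same_table if g["sex"] != guest["sex"])
    return opposite / len(same_table) >= TOLERANCE

guests = [{"sex": random.choice("MF"), "table": random.randrange(N_TABLES)}
          for _ in range(N_GUESTS)]

def step(guests):
    movers = [g for g in guests if uncomfortable(g, guests)]
    for g in movers:                       # movers are reassigned to a random other table
        g["table"] = random.choice([t for t in range(N_TABLES) if t != g["table"]])
    return len(movers)                     # the number of movers per timestep feeds Metrics 5-7

print([step(guests) for _ in range(10)])   # movers per timestep over ten rounds
```

Each of the five modified rules would replace the `uncomfortable` condition while the rest of the loop stays the same.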
5 Results

5.1 Metrics

The simulation of these games aimed at finding correlations between computational measures and the players' responses regarding how complex they perceived the games to be. The simulations unfold differently on every run, as they depend on random factors such as the initial sampling of the participants' attributes and the random reassignment to a new table when moving. Therefore, we ran each of the five game simulations 1000 times and studied, from the logs of these 5000 simulations, the following eight measurements: the "Mean execution time for moving (s)" (Metric1); the "Percentage of runs that reached a stable state" (Metric2), where all actors were comfortable; the "Mean time to stable state" (Metric3); the "Variance between times to stable state" (Metric4); the "Mean number of movers" (Metric5) at each timestep; the "Variance between timesteps for number of movers" (Metric6); the "Variance between runs for number of movers" (Metric7); and the "Entropy of table population at stable state" (Metric8). These metrics represent three categories of complexity that might manifest in the simulations: the processing effort (Metrics 1 and 2) to find a decision for individual actors, the sorting effort (Metrics 3 and 4) for all players to collectively reach a stable state, and the regularity (Metrics 5 to 8), i.e. whether there is a strong symmetry in the number of movers across different runs of the same game. For each of these metrics we formulated a hypothesis (H1–H7). The difference between the five games lies in the rules that determine which participants move. Generally, one might assume that rules that take longer to process are also more complex. The simulation captured and logged the time it took at every timestep to determine which participants should move and to execute the moving (Table 3). H1: More computational time correlates positively with increased complexity perception. (Metric1). Each of the 5000 simulations was run for 100 timesteps. These simulations might change in the first timesteps but then reach a state where no more moving occurs, because none of the participants is uncomfortable. We declare a state as stable if no more moving occurs in the next three timesteps, and we counted the number of simulations in which such a stable state occurred. The associated metric is the percentage of simulations reaching the stable state.
Table 3 Metrics derived from simulation

Rule | Metric1  | Metric2 (%) | Metric3 | Metric4 | Metric5 | Metric6 | Metric7 | Metric8
1    | 0.000034 | 0           | –       | –       | 10.00   | 100.00  | 0.00    | –
2    | 0.000191 | 0           | –       | –       | 9.02    | 14.50   | 11.87   | –
3    | 0.000039 | 92          | 6.86    | 60.48   | 2.83    | 2.79    | 3.83    | 1.24
4    | 0.000053 | 100         | 4.34    | 28.02   | 2.38    | 4.95    | 28.38   | 1.45
5    | 0.000037 | 0           | –       | –       | 9.99    | 4.98    | 0.47    | –

(Metrics 3, 4 and 8 are only defined for runs that reached a stable state, i.e. for Rules 3 and 4.)
H2: Termination in a stable state negatively correlates with perceived complexity. (Metric2). For those runs that did reach a stable state within the 100 timesteps, we measured after how many timesteps this state was reached. The associated metric is the mean number of timesteps it took for the simulation to reach a stable state. Since the time to reach a stable state varied, we also quantified the variance in the time-to-stable-state amongst those runs that reached stable state. H3: Mean time to stable state correlates with perceived complexity. (Metric3). H4: Variance of the time to stable state correlates positively with perceived complexity. (Metric4). The five games have a common pattern: at every timestep, some people move to another table according to the rule of the game. We therefore also tracked the mean for the number of people moving as well as the variance. Intuitively, one would assume that more movement within the game makes the game more complex. H5: Mean number of movers and perceived complexity correlate. (Metric5). For some games, the number of movers varies strongly between timesteps. It might be the case that more regular games are perceived as less complex due to this symmetry. Thus, we additionally considered the variance between timesteps of the number of people moving and the variance between runs for the number of people moving. H6: Variance of the number of movers between timesteps/runs correlates positively with perceived complexity. (Metric6 and Metric7). Another game property that might have had an influence on the perceived complexity is the nature of the stable state. Therefore, we also tracked the entropy of the stable state. If all 20 actors were sitting at one table, we had an entropy of 0. If four people sat at every table, we had reached the maximum entropy of 1.609. H7: Entropy correlates positively with perceived complexity. (Metric8).
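A minimal sketch of how two of these measurements can be computed from the simulation logs: the stable-state criterion behind Metrics 2 and 3, and the table-population entropy of Metric 8. The helper names and the assumed log format are our own, not taken from the paper's Python script.

```python
import math
from collections import Counter

def first_stable_timestep(movers_per_step, window=3):
    """A run counts as stable once no one moves for the next `window` timesteps (Metrics 2/3)."""
    for t in range(len(movers_per_step) - window + 1):
        if all(m == 0 for m in movers_per_step[t:t + window]):
            return t
    return None          # the run never stabilised within the simulated horizon

def table_entropy(table_assignments):
    """Shannon entropy of the table occupation at the stable state (Metric 8);
    20 guests at one table give 0.0, four guests at each of five tables give ln(5) ≈ 1.609."""
    counts = Counter(table_assignments)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

print(first_stable_timestep([6, 3, 1, 0, 0, 0, 0]))    # -> 3
print(round(table_entropy([0, 1, 2, 3, 4] * 4), 3))    # -> 1.609
```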
5.2 Analysis

The workshop participants were asked to grade how complex they perceived the games to be, using a scale from 1 to 5. We investigated the correlations between the median of the players' perceived complexity ratings and the game metrics we obtained from the simulations. Because the players' perceived complexity was measured on an ordinal scale, we use the median perceived complexity and Kendall's tau-b (τb) as the correlation coefficient. Rule 1 was not included when calculating the correlations with Metrics 2, 6 and 7, as these metrics are directly specified by the rule. In Table 4, we should ignore the correlations between the participants' perceived complexity and Metrics 3, 4 and 8. This is because we only have two values for these metrics—the calculated correlation is meaningless for so few data points. The two metrics "Percentage of runs that reached stable state" (Metric2) and "Mean number of movers" (Metric5) seem closely related, as it makes sense that the games with fewer movers are also the games that are more likely to reach a stable state. The negative correlation of these two metrics with the perceived complexity might be an artifact of Rules 1 and 5, which never lead to a stable state and involve many movers. The negative correlation between the participants' perceived complexity and the variance between timesteps for the number of movers (Metric6) is counterintuitive: we expected the games with more varying numbers of movers also to be perceived as more complex. In this case, most of the correlation is due to the strong effect of Rule 1. If we recalculate the correlation without Rule 1, the correlation becomes 0.07. The positive correlation with the metric "Mean execution time for moving (s)" (Metric1) was expected. It is reasonable to assume that rules of games for which the simulation takes a relatively long time to identify the movers are also the games for which it took the workshop participants a long time, leading them to perceive the game as complex. The metric "Variance between runs for number of movers" (Metric7) is the primary regularity measurement. If the number of movers varies strongly between different runs of the same game, then this is a strong indication that this game is not very regular. This correlation confirms what we intuitively believe to be true: less regular/repetitive games are perceived as more complex. The entropy of table populations (Metric8) seems to be in line with this. It seems intuitive that less clear sorting outcomes are associated with a more complex game. However, this metric is limited by the small number of data points. Seeing whether the correlation holds requires adding further empirical games that reach a stable state.
Table 4 Correlations with participants' perceptions

Metric1 | Metric2 | Metric3 | Metric4 | Metric5 | Metric6 | Metric7 | Metric8
0.33    | −0.77   | −1      | −1      | −0.83   | −0.47   | 0.74    | 1
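As an illustration of the procedure (not a reproduction of the authors' analysis script), a Kendall tau-b correlation between the median perceived complexity (Table 2) and one of the metrics (here Metric5 from Table 3) can be computed as follows; small differences to the values in Table 4 may stem from rounding or from the rule exclusions described above.

```python
from scipy.stats import kendalltau

# Rules ordered 1-5: median perceived complexity (Table 2) and Metric5 (Table 3).
perceived = [2, 3.5, 4, 4.5, 1]
metric5 = [10.00, 9.02, 2.83, 2.38, 9.99]

tau, p_value = kendalltau(perceived, metric5)   # scipy computes tau-b by default
print(round(tau, 2), round(p_value, 2))
```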
6 Discussion

As mentioned in the introduction, our tipping-point hypothesis is that satisfaction follows an upside-down U-curve with respect to perceived complexity—see Fig. 1. In words, this means that games/environments that are perceived as very simple are considered boring and are thus not very satisfying. In turn, games that are extremely complex are not satisfying either, because the complexity is overwhelming. Thus, the environments that cause the maximum satisfaction are those with medium complexity.

Fig. 1 Expected satisfaction curve and fitted multinomial regression for the satisfaction curve

We can estimate the actual satisfaction curve from the workshop data. The 20 workshop participants were asked to rate both the perceived complexity and the satisfaction obtained from each of the 5 different games (see Table 2). That is, in total, 100 data points (20 participants × 5 games) relating perceived complexity and satisfaction. In Fig. 1, we can see the result of fitting a multinomial regression curve to this data. As we can see, the curve is not the upside-down U-curve we had expected; it does, however, share the feature that satisfaction decreases for very complex games. The evidence thus does not indicate a symmetric inverted U-shaped relation. Nevertheless, it seems that a tipping point of maximum satisfaction with the complexity of games exists. The picture for the confirmation of our hypotheses is mixed. It seems that the lack of confirmation for H5 and H6 is an artifact of the choice of rules by the players. Particularly Rule 1 distorts the measurements. In turn, the clear lack of confirmation for H3 and H4 seems to indicate that Metric3 and Metric4 do not relate to the complexity perceptions of players. This again might be an artifact of Rules 3 and 4. Yet it shows that an increase in complexity perception can occur in games that reach a stable state sooner and more clearly. The time it takes to 'sort things out' is, thus, not always related to complexity perceptions. It is puzzling that Metrics 6 and 7 do not correlate and that we obtain neither a confirmation nor a rejection of H6. The intuition that increasing regularity correlates negatively with perceived complexity thus remains unconfirmed. Further investigation into this is necessary. The correlation of Metrics 1, 2, and 8 was expected in Hypotheses 1, 2, and 7. Whether this is an artifact of the
choice of games or indicates possible connections to complexity perceptions must be subjected to further research. The modest correlation of Metric 1, in turn, might be reassuring that it is not an artifact. The correlations of Metrics 1, 2, and 8 thus indicate a relation between perceived complexity and objective measures. The mixed picture on the confirmation of our hypotheses translates into a mixed picture on the three categories of complexity metrics. While the processing complexity measures both have strong correlations, the sorting complexity measures do not turn out to be relevant. Regularity does not provide a clear picture: while most of the regularity metrics do not yield a confirmation of our hypothesis, Metrics 7 and 8 seem to indicate that regularity is relevant to complexity perception. Concerning our research questions, the results indeed indicate that we can use people's intersubjective understanding of complexity to find an emerging complexity definition that connects these intuitions to measurable features underlying social decision situations: processing difficulties and, to a certain extent, regularity issues.
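The regression step mentioned above can, in principle, be reproduced from the 100 (complexity, satisfaction) pairs. The sketch below uses a simple polynomial least-squares fit on invented stand-in data, since we do not have the raw ratings; it is one plausible way to fit such a curve and is not the authors' exact procedure.

```python
import numpy as np

# Invented stand-in data for the 100 (perceived complexity, satisfaction) pairs.
rng = np.random.default_rng(0)
complexity = rng.integers(1, 6, size=100).astype(float)
satisfaction = np.clip(3.2 - 0.4 * (complexity - 2.5) ** 2 + rng.normal(0, 0.5, 100), 1, 5)

# Quadratic least-squares fit; a negative leading coefficient indicates an inverted-U shape.
coeffs = np.polyfit(complexity, satisfaction, deg=2)
print(coeffs)
```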
7 Outlook

To find out more about the difference between complexity that is merely difficult to calculate, as above, and situations where it is impossible to find an analytic solution for a—yet non-random—social decision context, the next research step is to add an 'El Farol'-type situation of double contingency in turbulent environments. Such a game will supplement the research in future workshops.

Source code https://www.comses.net/codebases/ab021328-e0df-49d3-ac93-806852e08326/releases/1.0.0/
References

1. Knight, F.H.: Risk, Uncertainty, and Profit. Kelley and Millman, New York (1921)
2. Krohn, W.: Realexperimente - Die Modernisierung der "offenen Gesellschaft" durch experimentelle Forschung. Erwägen Wissen Ethik 18(3), 343–356 (2007)
3. Bieri, J.: Cognitive complexity-simplicity and predictive behavior. J. Abnorm. Soc. Psychol. 51, 263–268 (1955)
4. Bieri, J., Atkins, A.L., Briar, S., Leaman, R.L., Miller, H., Tripodi, T.: Clinical and Social Judgment: The Discrimination of Behavioral Information. Wiley, New York (1966)
5. Crockett, W.H.: Cognitive complexity and impression formation. In: Maher, B.A. (ed.) Progress in Experimental Personality Research, vol. 2, pp. 47–90. Academic Press, New York (1965)
6. Kolmogorov, A.: Three approaches to the quantitative definition of information. Int. J. Comput. Math. 2(1–4), 157–168 (1968)
7. Schmidhuber, J.: What's interesting? Technical Report IDSIA-35-97, Lugano (1997)
8. Dessalles, J.: La pertinence et ses origines cognitives. Hermes-Science Publications, Paris (2008)
9. Feldman, J.: How surprising is a simple pattern? Quantifying "Eureka!". Cognition 93, 199–224 (2004)
10. Chater, N.: The search for simplicity: a fundamental cognitive principle? Q. J. Exp. Psychol. 52(A), 273–302 (1999)
11. Chater, N., Vitányi, P.: Simplicity: a unifying principle in cognitive science? Trends Cogn. Sci. 7(1), 19–22 (2003)
12. Kahneman, D.: Thinking, Fast and Slow. Farrar, Straus and Giroux, New York (2011)
13. Arora, S., Barak, B.: Computational Complexity: A Modern Approach. Cambridge University Press, Cambridge, UK (2009)
14. Yeung, R.W.: A First Course in Information Theory. Kluwer Academic/Plenum Publishers (2002)
15. Arrow, K.J.: Social Choice and Individual Values. Wiley, New York (1951)
16. Arthur, W.B.: Inductive reasoning and bounded rationality. Am. Econ. Rev. 84, 406–411 (1994)
17. Parsons, T., Shils, E.A.: Toward a General Theory of Action. Cambridge (1951)
18. Luhmann, N.: Soziologische Aspekte des Entscheidungsverhaltens. In: Lukas, E., Tacke, V. (eds.) Schriften zur Organisation 2. Springer VS, Wiesbaden (2019)
19. Schelling, T.: Dynamic models of segregation. J. Math. Sociol. 1(2), 143–186 (1971)
20. Szczepanska, T., Antosz, P., Bernd, J., Borit, M., Chattoe-Brown, E., Mehryar, S., Meyer, R., Onggo, S., Verhagen, H.: Mixing games and agent-based modelling. A systematic literature review (forthcoming)
21. Wilensky, U.: NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL (1999). http://ccl.northwestern.edu/netlogo/. Accessed 30 May 2021
22. Wilensky, U.: NetLogo Party model. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL (1997). http://ccl.northwestern.edu/netlogo/models/Party. Accessed 30 May 2021
A Hybrid Agent-Based Model to Simulate and Re-Think Post-COVID-19 Use Processes in Educational Facilities

Davide Simeone, Silvia Mastrolembo Ventura, Sara Comai, and Angelo L. C. Ciribini
Abstract The paper presents the development and application of a hybrid multi-agent system to simulate people's behavior in educational facilities, in order to support decisions and strategies related to post-COVID-19 scenarios. Complex use phenomena such as those occurring in schools and educational facilities require mixed, hybrid simulation models in which the agent-based component, usually controlling single users/bots, is combined with a process-driven engine that ensures the correspondence of the users' behaviors to the general scenario. In our case, hybridization also includes the direct interaction of the intended users through a virtual reality 3D game, to further increase the accuracy of the simulated phenomena and their adherence to reality. This paper also presents the application of the simulation model to a real case study, a school in Italy, where use processes have been simulated and are currently under assessment during the school re-opening. Keywords Behavioral simulation · Gamification · Educational facilities · Agent-based modeling · BIM
1 Introduction

The COVID-19 pandemic is having a strong impact on the way buildings are occupied, used and experienced, both from a short-term and a long-term perspective. While nowadays—with the pandemic still active—many buildings are closed or functioning at a very limited rate, many open questions are arising regarding how to re-think building use processes after the emergency, allowing for better resiliency and flexibility [1]. In this context, schools and educational facilities are among the building typologies most impacted by the pandemic, with distance learning providing only a palliative solution to the education necessities of students [2]. Researchers, architects, school managers, and even governments are currently focusing their efforts on defining new design approaches to support schools' safe re-opening and the flexible calibration of their use in accordance with the evolution of the pandemic [3]. Digital paradigms, such as building information modeling (BIM), building energy modeling (BEM), agent-based simulation (ABM) and the internet of things (IoT), play a key role in this process, contributing to the tasks of re-thinking school spaces (including their technological components), predicting their intended use, and monitoring and controlling the real occupancy phenomena [4]. As part of this common attempt, our research is investigating the use of behavioral simulation, based on a "hybrid" agent-based model, to predict and assess the efficacy and the applicability of different school use scenarios, allowing different solutions to be tested virtually before use protocols are implemented and adopted in real life. In this work, the term "hybrid" refers to the integration, within the simulation framework, of three components: besides the usual (1) agent-based system, which essentially controls the behavior of the people within the building (such as students, educators, parents, etc.), we introduced (2) a process/scenario controller that drives the overall building use phenomena as per the intended COVID-19 protocols and (3) an avatar controller that allows the different profiles to actively act in and interact with the built environment and the agents. A game engine (Unity3D) acts as a middleware that integrates these components with the BIM-based representation of the building, offering both visualization features and a friendly user interface. The presented model has been applied to support decisions related to the re-opening of a large elementary school in Milan, allowing a relevant comparison between simulated and real users' behaviors.
2 Re-Thinking Use Processes in Educational Facilities

2.1 The COVID-19 Impact on Behaviors and Use Processes in Educational Facilities

The pandemic emergency due to SARS-CoV-2 has forced all Italian educational institutes to suspend in-presence teaching activities. From February 2020 to September
2020, distance learning solutions and online teaching resources had to be adopted to support the continuity of teaching and learning, while complying with the social distancing rules determined to counter the spread of the virus. On the other hand, studies exist that demonstrate how the COVID-19 crisis and the measures adopted to contain it may affect students' learning and children's achievement with short- and long-term effects, negatively influencing skills acquisition [5]. For this reason, COVID-19-related protocols have been issued by the Italian government to support a safe school re-opening from September 2020, including:
– Social distancing;
– Mask-wearing in circulation paths;
– Temperature testing before entering school (37.5 °C is the maximum temperature allowed);
– Hands sanitizing with a hydro-alcoholic solution and frequent washing;
– Micro-communities' organization;
– Regular room ventilation.
Re-opening rules to be adopted for learning spaces were described in two protocols by a dedicated Italian Technical and Scientific Committee (CTS), with a focus on social distancing and the maximum number of students allowed in classrooms and circulation paths. The former was published in May 2020 and determined that at least 1 m of interpersonal distance should be respected between end-users, both sitting at the school desk and walking in circulation paths (i.e., dynamic meter). The latter was released in July 2020 to facilitate school directors in guaranteeing in-presence teaching activities, and it asked only for the observance of 1 m of interpersonal distance in classrooms (i.e., static meter). Nevertheless, both protocols state that students over six years old must wear masks when walking in circulation paths. An existing school building has been selected as a test site to analyze the re-opening steps as required by the Italian protocols and regulations in force to face the COVID-19 emergency. The aim is to evaluate how a hybrid multi-agent system could simulate people's behavior in educational facilities and support decisions and strategies related to a safe school re-opening in COVID-19 scenarios.
2.2 The Case Study

The selected facility consists of a school building with three floors. It is attended by children from 3 to 10 years old. The basement comprises a canteen and the kitchen. The ground floor hosts the 4-to-5-year-old classrooms while, on the first floor, there is the 3-year-old classroom. Finally, the second floor hosts all the classrooms of the primary school (6-to-10-year-old students), which are within the scope of this paper. The school manager provided plans and elevations of the school building in .dwg format, and these data were integrated with a digital survey using Indoor Mobile Mapping (iMMS) technology to acquire three-dimensional geometries and digital photographic documentation of the spaces [6]. A three-dimensional parametric model
was subsequently created in a BIM authoring platform to be used as a basis for further analyses of spaces and the arrangement of end-user flows. In the BIM model, classrooms and circulation paths for school entrance and exit have been modelled, including thermo-scanner and hand-sanitization checkpoints. The analysis of the available information allowed highlighting the need for additional learning spaces according to the COVID-19 protocols in force. Moreover, preliminary simulations of end-user flows, implemented with crowd simulation software (i.e., Oasys MassMotion), have been developed to calibrate the intended use scenarios. Circulation paths have been analyzed to evaluate the maximum number of students allowed to move within the building at the same time and, eventually, to modify the organization of the school schedule. Proximity modelling tools (Fig. 1) included in MassMotion have been used to engineer safe social distancing and manage the school re-opening safely. Normative requirements in terms of social distancing, in fact, have been set in their computable form within the analysis tool in order to simulate people's movement within the educational building.
Fig. 1 Proximity map. Low density identified by the blue color, high density indicated by the red color
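In its simplest computable form, the distancing requirement amounts to a pairwise distance check over the simulated occupants' positions. The sketch below is a generic illustration of that idea and is not MassMotion's actual API or the project's analysis scripts.

```python
import numpy as np

MIN_DISTANCE = 1.0   # normative interpersonal distance in metres

def distancing_violations(positions: np.ndarray) -> int:
    """Count pairs of occupants closer than the minimum interpersonal distance."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    i, j = np.triu_indices(len(positions), k=1)
    return int(np.sum(dists[i, j] < MIN_DISTANCE))

# 30 pupils placed at random in an 8 m x 8 m classroom (illustrative values).
positions = np.random.default_rng(1).uniform(0.0, 8.0, size=(30, 2))
print(distancing_violations(positions))
```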
3 Literature Review

In current architectural design practice, agent-based modeling is an already consolidated approach to simulate and assess the occupancy of buildings—in particular those where the key aspect is people movement, such as stations or airports—or emergency egress, usually through crowd simulation tools [7]. At the same time, limits emerged when ABM was applied to facilities that imply more structured or complex use processes; these limits have been partially overcome by an activity AI-scheduler [8] or by event-based/narrative-based approaches [9, 10] to direct agents. Hospitals are probably the facilities where this kind of simulation has been applied most in recent years, essentially for two reasons: (1) their use processes are usually based on very straightforward protocols—especially those that happen within medical departments—favoring clarity in the scenario depiction, and (2) use process improvements can directly impact the functionality and management costs of the facilities. Nevertheless, when buildings or facilities do not have these well-chained sequences of activities, or where the complexity and autonomy of users' behavior still overpower pre-defined scenarios—such as in schools or in offices—these models show some difficulties, and probabilistic approaches are usually adopted [11]. In the neighbouring field of game development, the necessity of reaching high levels of realism of simulated phenomena has led, in the last 10 years, to hybrid systems that integrate agents (BOTs), multiple players (avatars) and smart narratives, often supported by the progressive training of agents using reinforcement learning techniques [12]. In the research presented in this paper, a similar approach has been investigated and implemented to enhance the adherence of the simulated occupancy to reality, even by directly involving the future users of the building, thus both improving the efficacy of the COVID-19 protocols and training students and operators before their actual entrance into the school.
4 Methodology

The presented work was developed with the intent of providing support in the definition of COVID-19 protocols for a real case study and, at the same time, of deriving a reusable approach applicable to other schools and extendable to other building typologies. As shown in Fig. 2, the research and development activities can be organized into three major steps:
1. Model and plan phase: This preliminary phase includes the activities of (1.1) development of the building information model of the school (including furniture and devices related to COVID-19 protocols), (1.2) the definition of use protocols through collaboration with managers and operators in the school and (1.3–4) preliminary crowd simulations to calibrate the intended use scenarios.
2. Development phase: Focused on (2.1–2) the programming of the behavior of the agents/BOTs and of the use process controller, and on the construction of the avatar controller (2.3), including the user interface (2.4).
3. Simulate, assess and learn phase: This includes performing the integrated simulation (3.1), assessing the simulated phenomena, including by comparison with real life (3.2), and using the model to train students and teachers regarding the COVID-19 protocols (3.3).

Fig. 2 Research and development activities workflow
The direct collaboration with the case study's school manager and teachers, as well as the opportunity to witness the real use phenomena once the school opened its doors to the students, are two factors of this research that helped direct the simulations toward real issues and open problems from both design and management perspectives. Within this context, the present work mainly presents the implementation of the integrated simulation model (steps 2.1 to 3.1), discussing the implications of integrating ABM, process-driven simulation and VR playing features.
5 The Hybrid Agent-Based Model

5.1 Framework

As described above, a key role in the process is played by the development of the hybrid simulation model aimed at representing the complexity of the school use process and the behavior of its inhabitants. We chose to rely on a game engine (Unity 3D) since it allows (1) the import of, or direct connection with, BIM models, (2) the programming of simulations that include ABM resources, and (3) the development of a user interface for the virtual involvement of the building's future users. We introduce the
idea of "hybrid" simulation as a way to progressively reach higher levels of realism and accuracy of the phenomena resulting from the simulation. In our initial school experiments, we witnessed that pure agent-based modeling, although "tempered" by process-based controllers, was still not sufficient to deal with the complexity and variability of children's behaviors and, as a result, the simulated phenomena diverged from real scenarios. As in videogame design, middleware platforms such as Unity 3D allow the different components to be developed independently, with the interaction rules among them designed afterwards. In this case, we defined a simplified ontology of the system, aimed at guiding the development activities (Fig. 3). The following assumptions have been made:
• the process/scenario controller (PSC) assesses the status of the system and directs BOTs by formalizing their objectives as per the protocols;
• the PSC suggests objectives to the real player through messages in the UI, to drive the user toward compatible behaviors;
• BOTs perform actions under PSC supervision based on the status of the system and of other agents;
• BOTs consider avatars as no different from other BOTs;
• avatars observe the status of the built environment and the behavior of the BOTs and take actions and decisions based on their human intelligence;
• the built environment model is affected by the other components and indirectly influences their behavior through its status.

Fig. 3 The hybrid agent-based model ontology

With the exception of the school model that, as described in Sect. 2.2, was developed in the Autodesk Revit environment and then imported, all these components have been programmed in the Unity 3D editor, relying on the extensive use of C# code, Bolt visual programming and the Navigation extensions.
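To make these interaction assumptions concrete, the following language-agnostic sketch (written in Python for brevity; the actual implementation is C#/Bolt inside Unity 3D) shows how a PSC might hand out protocol objectives to a BOT. Class and method names are our own illustration.

```python
class ProcessScenarioController:
    """Drives the use scenario: hands each agent the next protocol step to complete."""
    def __init__(self, protocol_steps):
        self.protocol_steps = protocol_steps

    def next_objective(self, completed_steps):
        return next((s for s in self.protocol_steps if s not in completed_steps), None)

class Bot:
    """A simulated child: asks the PSC what to do and records completed steps."""
    def __init__(self):
        self.completed = set()

    def act(self, psc):
        objective = psc.next_objective(self.completed)
        if objective is not None:
            self.completed.add(objective)
        return objective

psc = ProcessScenarioController(["temperature_check", "hand_sanitizing", "wait_in_foyer", "go_to_classroom"])
bot = Bot()
print([bot.act(psc) for _ in range(5)])   # protocol steps in order, then None once finished
```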
5.2 Use Process Modeling

The process/scenario controller can be considered the coordinator of the simulation during its execution, ensuring that BOTs and avatars receive the right instructions as per the tested scenario and the status of the system. As in previous works in the field [9, 10], the scenario is formalized as a workflow made of connected computational units, each with its gateways and necessary conditions, which correspond to specific moments (either discrete or prolonged) and statuses of the use phenomena. For instance, during the simulation of the children's entrance at school, considered critical with respect to COVID-19 risks because of the probability of crowding, the scenario controller is in charge of calling the different activities to be performed, from the initial instantiation of the children BOTs outside the school, to the body temperature check, the hand sanitizing, the temporary waiting at specific points of the school foyer as per age and section, and the movement to the classroom as coordinated with the school managers and the teachers. While in previous works the narrative controllers were mainly in charge of directing BOTs, in the proposed hybrid system the controller also provides instructions to the avatars of the users, mainly through text in the UI, to suggest the next actions to be performed or objectives to pursue. Although the player remains free to respond to the instructions as they wish, we generally witnessed adherence to the process controller's instructions. Within the Unity 3D platform, the use of the Bolt add-in for visual programming was effective in the development of the process/scenario controller, since it allows the workflows of activities and the decision points to be recreated graphically during the simulation, as shown in Fig. 4.
5.3 ABM/Behavioral Simulation

In the proposed system, the ABM paradigm is used to generate and simulate the behavior of the children as BOTs, steering their actions as per process controller
Fig. 4 A part of the process/scenario controller developed using Bolt add-in to Unity 3D, with two events formalized
instructions. Besides a simplified three-dimensional representation, each agent is provided with a behavioral algorithm that consists of two parts:
1. a navigation system that controls the movement of the agents within the model once a destination has been set;
2. a decision-act system that controls the agents' actions both during movement and during static activities, such as self-checking of temperature, hand washing, or changing destination in accordance with specific occurrences (for instance, a crowded door).
These two components preserve the manageability of the simulation even in the presence of many agents (in our case approximately 100 children) (Fig. 5), as well as of their interactions with the avatars. Avatars were then developed to be perceived by the BOTs without any difference from other agents, to reduce the number of interactions and algorithms to be developed. Each agent also has a set of variables that, being updated during the simulation, represent its status as well as other parameters necessary to assess COVID-19 risks. Those variables include speed, distance from other agents, checked/unchecked temperature, and sanitized hands. In the development of the agents' behavioral sets, we relied on Bolt scripts to progressively expand the set of behavior libraries and the number of formalized variables. This is particularly important in order to allow further calibration of the agent-based model as well as its customization for different schools and protocols.
Fig. 5 The ABM while simulating the students approaching the waiting point near the stairs to the floor of the classrooms
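A compact sketch of the two-part agent described above (navigation plus decision-act, with the COVID-19-related state variables listed in the text). The real agents are Unity/C# objects; the Python below only illustrates the structure, and all names and thresholds are our own.

```python
from dataclasses import dataclass

@dataclass
class ChildAgent:
    position: tuple = (0.0, 0.0)
    destination: tuple = None
    speed: float = 1.0                    # metres per timestep
    temperature_checked: bool = False
    hands_sanitized: bool = False

    def navigate(self):
        """Navigation system: move one step toward the current destination."""
        if self.destination is None:
            return
        dx, dy = self.destination[0] - self.position[0], self.destination[1] - self.position[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= self.speed:
            self.position = self.destination
        else:
            self.position = (self.position[0] + self.speed * dx / dist,
                             self.position[1] + self.speed * dy / dist)

    def decide_and_act(self, door_is_crowded, checkpoint, waiting_point, classroom):
        """Decision-act system: choose the next destination from the agent's own status."""
        if not (self.temperature_checked and self.hands_sanitized):
            self.destination = checkpoint
        elif door_is_crowded:
            self.destination = waiting_point
        else:
            self.destination = classroom
```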
5.4 Avatars/First-Person Experience

Avatars, playable by children or other school operators, have been integrated into the hybrid model as active parts, able to interact with both the environment and the other agents, while being informed by the process controller regarding the objectives to achieve and the destinations to reach. In our first experiments we worked on a virtual reality experience with a simplified user interface that supports:
(1) 3D navigation through the environment using keyboard controls;
(2) a text receiver (on the top left of the screen) that visualizes messages from the process controller;
(3) keyboard inputs in a simplified 2D interface in case of interaction with objects (e.g. the temperature detector).
Using this VR application, the users can recreate their experience within the school, contributing to the realism of the simulation and, at the same time, obtaining useful information regarding the protocols defined by the school to reduce COVID-19 risks. Information is provided through a series of mini-games for children. Each game explains a specific rule to follow during the day at school (e.g. use of masks, social distancing, temperature check, hand sanitizing), including the correct procedures and the risks associated with non-compliance.
6 Impact of Simulation on Post-COVID-19 School Processes

Simulation-based approaches introduce the analysis of the building–user interaction as a means to evaluate performance outcomes, such as the functional use of a building by its occupants. One of the benefits of such an approach is the improved communication among technical (e.g., designers) and non-technical (e.g., end-users) stakeholders, with the latter supported and effectively involved in the elaboration of their use process requirements as well as in the evaluation of how well a proposed design (e.g., a COVID-19-related reconfiguration of school spaces) meets their needs of moving and interacting with the building according to the activities to be performed. Within this context, BIM-based pre-occupancy evaluation methods aim to facilitate the understanding of how end-users' activities are accommodated in the building model [13], as well as to validate the compliance of design proposals and use processes against codes and regulations (e.g., COVID-19 protocols for school re-opening). Aligned with the research framework described, the paper evaluates how a hybrid multi-agent system could simulate people's behavior in educational facilities and support decisions and strategies related to a safe school re-opening in COVID-19 scenarios. School re-opening represents a priority in managing the pandemic situation because of the demonstrated short- and long-term effects of remote learning, which negatively influence skills acquisition. For that reason, school managers and operators have been actively involved in re-thinking building use processes after the emergency, so that the simulation-based approach can effectively support them in calibrating their choices in accordance with the evolution of the pandemic, predicting and assessing the efficacy and the applicability of different school use scenarios. Moreover, the involvement of actual stakeholders in the development of the case study has also enhanced the adherence of the simulated occupancy to reality, both improving the efficacy of the COVID-19 protocols and training students and operators before their actual entrance into the school.
7 Conclusions

In this paper, we presented a hybrid agent-based model aimed at simulating human behavior in educational buildings, with the intent of supporting decisions regarding protocols for the reduction of COVID-19 risks. The hybrid approach consists of the integration of pure agents (representing BOTs), a process/scenario controller and playable characters (avatars), obtained within a game engine environment. The adoption of this hybrid approach allows a higher level of realism of the simulation to be reached—especially for building typologies that present a large variety of users' behaviors—favoring the comprehension of the complexity of the use processes of a building.
From the testing activities carried out in the school case study, some advantages of the adoption of this approach have emerged: the opportunity of testing protocols and use scenarios before the actual school re-opening improved management decisions and actions based on the virtual phenomena; and by distributing the game to students and operators, they were made aware of the correct behaviors and activities to adopt, reducing the learning curve during the first school days. As described in the paper, the presented hybrid model is currently under development, while its application to the case study is helping us to calibrate and objectively assess its correspondence to reality and its long-term impacts. Future developments shall include the opportunity of multi-player systems as well as the adoption of a cloud-based solution that, with an easy web interface, would allow more users to access the model. Another element to be further investigated is the scalability of the proposed system, which could potentially be distributed as an app to be used autonomously by managers, teachers, and designers.
Acknowledgements The authors would like to acknowledge Francesco Ferrise and Giulia Wally Scurati from Politecnico di Milano, who oversaw the development of the interactive game for the involvement of end-users. Thanks are extended to the colleagues Giorgio Vassena, Lavinia C. Tagliabue, Silvia Costa from the University of Brescia for the valuable contribution to the project and to Oasys Software and CSPFea for technical support.
References

1. McKinsey: Reimagining the office and work life after COVID-19. https://www.mckinsey.com/business-functions/organization/our-insights/reimagining-the-office-and-work-life-afterCOVID-19. Accessed 15 Sep 2020
2. United Nations: Policy brief: education during COVID-19 and beyond (2020)
3. World Health Organization: Considerations for school-related public health measures in the context of COVID-19: annex to considerations in adjusting public health and social measures in the context of COVID-19. https://apps.who.int/iris/handle/10665/332052. Accessed 30 Aug 2021
4. Rinaldi, S., Flammini, A., Tagliabue, L.C., Ciribini, A.L.C.: An IoT framework for the assessment of indoor conditions and estimation of occupancy rates: results from a real case study. Acta Imeko 8(2), 70–79 (2019)
5. Di Pietro, G., Biagi, F., Dinis Mota Da Costa, P., Karpinski, Z., Mazza, J.: The likely impact of COVID-19 on education: reflections based on the existing literature and recent international datasets. https://ec.europa.eu/jrc/en/publication/likely-impact-COVID-19-education-reflections-based-existing-literature-and-recent-international. Accessed 24 Nov 2020
6. Comai, S., Costa, S., Mastrolembo Ventura, S., Vassena, G., Tagliabue, L.C., Simeone, D., Bertuzzi, E., Scurati, G.W., Ferrise, F., Ciribini, A.L.C.: Indoor mobile mapping system and crowd simulation to support school re-opening because of COVID-19: a case study. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLIV-3/W1-2020, 29–36 (2020). https://doi.org/10.5194/isprs-archives-XLIV-3-W1-2020-29-2020
7. Wurzer, G.: Schematic systems: constraining functions through processes (and vice versa). Int. J. Architectural Comput. 197–214 (2010)
A Hybrid Agent-Based Model to Simulate …
229
8. Tabak, V., de Vries, B.: Method for prediction of intermediate activities by office occupants. Build. Environ. Elsevier 45–6, 1366–1372 (2010) 9. Simeone, D., Kalay, Y.E.: An event-based model to simulate human behaviour in built environments. Proc. eCAADe 525–532 (2012) 10. Schaumann, D., Kalay, Y.E., Hong, S.W., Simeone, D.: Simulating human behavior in not-yet built environments by means of event-based narratives. Proc. SimAUD 7–14 (2015) 11. Marschall, M., Tahmasebi, F., Burry, J.: Including occupant behavior in building simulation: comparison of a deterministic vs. a stochastic approach. Proc. SimAUD 185–188 (2019) 12. Taylor, M.E., Carboni, N., Fachantidis, A., Vlahavas, I., Torrey, L.: Reinforcement learning agents providing advice in complex video games. Connect. Sci. 26(1), 45–63 (2014) 13. Shen, W., Shen, Q.: BIM-based user pre-occupancy evaluation method for supporting the designer-client communication in design stage. Proc. MISBE (2011)
Social Simulations—Theory and Applications
Generation Gaps: An Agent-Based Model of Opinion Shifts Among Cohorts Ivan Puga-Gonzalez and F. LeRon Shults
Abstract This paper presents the findings of an agent-based model of the shift toward liberal opinions over time within contemporary European populations. Empirical findings and theoretical reflection on this sort of shift suggest that cohort effects, and especially changes in the opinions of teenagers, are a primary driver of liberalization at the population level. We outline the core features and dynamics of the model and report on several optimization experiments that clarify the conditions under which—and the mechanisms by which—opinions become more liberal as agents interact with one another within and across cohorts. Keywords Opinion dynamics · Age effects · Agent-based modelling · Religiosity
1 Introduction In many contexts today, the dynamic flow of opinions seems to be shifting with cohorts rather than within individuals, i.e., opinions appear to change intergenerationally. Empirical evidence suggests this is the case for attitudes related to issues such as traditional gender roles [1, 2], LGBTQ rights [3, 4], and religious beliefs and behaviors [5, 6]. This phenomenon, where societal change occurs as a consequence of cohort replacement rather than changes during the lifetime of an individual, can be called demographic metabolism [7]. Obviously, societal changes may also be a consequence of age (A), and/or period (P), instead of cohort (C) effects. A growing number of scholars, however, are finding evidence that supports the claim that cohort effects are a dominant force in the shift of opinions and/or attitudes within societies [8, 9]. One of the best documented examples is the decline of religiosity among western European nations [10–13]. However, the conditions under which—and the mechanisms by which—such intergenerational changes occur remain elusive. It seems plausible that the answer has something to do with what happens during the teenage
years, which recent psychological experiments suggest are a period of life during which individuals are more easily influenced by others [14, 15]. Here we use an agent-based model (ABM) to investigate how mechanisms related to age may drive intergenerational changes in opinion. The model and simulation experiments we outline and report on below are designed to explore the mechanisms by which opinions shift among cohorts in a population spanning 300 years. Although our model does not explain all the relevant factors in such shifts, it does provide an empirically informed, theoretically inspired, and relatively realistic artificial society with a causal architecture that enables scholars to explore these factors and conditions with more precision. We are explicitly attempting to respond to the concerns identified by Flache et al. [16] regarding the relative lack of empirical validation in most opinion dynamics models. Our goal is to provide an ABM that can simulate the emergence of population level changes among cohorts that are observed in the real world. The realism of the model is strengthened by the inclusion of reproduction and mortality rates informed by UN census data. Hence, in our model agents reproduce, age, and die at rates that are like those of human populations. Further, as explained below, we optimize the model against empirical findings from research on shifts in religious opinions in the European Social Survey. We used shift in religious opinions as an example because it is a very well documented phenomenon about which we have good data in relation to which we can optimize the model parameters. Our artificial society thus mimics the intergenerational opinion changes documented in real human societies.
2 Methods 2.1 The Model The model was written in AnyLogic v.8.7.3. Our approach involves using a basic opinion dynamics model of positive and negative influence on top of which we build mechanisms related to age effects, as explained below. Agents. The artificial society represented in the model is inhabited by individual human agents who have an opinion value (range [0,1]), an age, and belong to a specific five-year cohort or generation (calculated according to the year of birth). On initialization, 1000 adult agents (age 0–100) are created. The initial opinions of agents are drawn from a normal distribution N (μ = 0.99, σ = 0.005). The agents’ age distribution follows a typical pyramid shape. Every year (52 weeks) agents age by one year, and die or give birth with a probability according to their age (agents give birth only between ages 15–49). Birth and mortality rates, and initial age distribution, come from UN census data. For simplicity, we assume asexual reproduction and on average agents give birth to ~1.02 agents, so the population size remains stable. Note, however, that mortality and reproduction are stochastic in the model, thus we expect some degree of variation in the number of offspring each agent has. Every two weeks
agents that are 12 years old or older hold a dyadic social interaction with another randomly selected agent (age ≥ 12). The social interaction may affect the opinion value of the agent in a positive, negative, or neutral way (see below). Bias inheritance of opinion values. Newborns inherit opinion values from their parents with some bias, i.e., parent's value * bias, where bias is a random value drawn from a Weibull distribution. The Weibull distribution is truncated at [0,1] values, and its scale and shape parameters were optimized (see optimization experiments). Agents may thus inherit the same opinion value as their parent or a value somewhat lower, depending on the shape and scale of the distribution. The rationale behind this decision is informed by research indicating the inheritability of religiosity [17, 18]. Social interactions. Social interactions can influence agents' opinion values in three different ways: positive, negative, or neutral. Here we are adapting previous studies of positive and negative influence [16, 19, 20]. Positive influence. The opinion value of the agent (Ego) moves in the direction of the interaction partner's opinion if the partner's opinion is within the positive confidence threshold (Fig. 1). The update of Ego's opinion value is then given by Eq. (1):

Ego_Opi_{t+1} = Ego_Opi_t + (Partner_Opi_t − Ego_Opi_t) * Pos_Age_Impact    (1)

where the age impact is a value between [0, 0.5] that is modulated by the age of Ego (see age effects). Note that the interaction is unidirectional, i.e., Ego is the only one potentially changing its opinion value; the partner does not get this chance. Negative influence. The opinion value of the agent moves in the opposite direction of the partner's opinion if the partner's opinion is outside the negative confidence interval (Fig. 2). The update of Ego's opinion depends on whether the absolute difference between Ego's opinion and that of its partner is larger than the negative confidence interval. This is given by Eqs. 2 and 3.
Fig. 1 Positive social interaction
Fig. 2 Negative social interaction
If Ego opinion > partner opinion:

Ego_Opi_{t+1} = Ego_Opi_t + (1 − Ego_Opi_t) * Neg_Age_Impact    (2)

If Ego opinion < partner opinion:

Ego_Opi_{t+1} = Ego_Opi_t − Ego_Opi_t * Neg_Age_Impact    (3)
where the negative age impact is a value between [0,0.5] that is modulated by the age of Ego (see age effects). Note that the more extreme the opinion of Ego the lower the change after the social interaction. As in the positive interactions, negative interactions are unidirectional; partner opinions do not change due to the interaction. Neutral influence. The opinion value of the agent remains the same if the partner’s opinion is neither within the positive influence interval nor outside the negative confidence interval (Fig. 3). Age effects. Age effects act on the value of the positive and negative impact of a social interaction (Eqs. 1–3). Both the positive and negative impact of social interactions decrease with age. For positive interactions this means that as agents get older they will be less impacted by (more reluctant to adopt) other’s opinion; when they are young, they are more impacted by others. For negative interactions this means that as agents get older they become more tolerant and less repulsed by opinions different than their own; when they are young they are more easily repulsed by others’ opinions. The decrease of the impact value occurs in a linear or nonlinear way according to Eq. 4.
Fig. 3 Neutral social interaction
Fig. 4 Impact values according to the agent’s age and values of γ
Age_Impact = Max_Impact * (1 − Age / Max_Age)^γ    (4)
where Max_Impact is the maximum possible value of the impact, age is the age of the agent, and Max_Age is the maximum age agents can achieve, i.e., 100. Hence, depending on the value of gamma (γ), the decrease of the impact value can be linear (γ = 1); or nonlinear (Fig. 4). Note that when age is 12, the value of impact is maximum.
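To make the interaction mechanics concrete, the following Python sketch combines Eqs. (1)–(4) into a single update step. It is an illustrative reimplementation, not the AnyLogic code: the parameter values are placeholders, and the use of the absolute opinion difference for both thresholds and the exact boundary handling (≤/≥) are assumptions made for the example.

```python
MAX_AGE = 100

def age_impact(age, max_impact, gamma):
    # Eq. (4): impact decays from max_impact towards 0 as the agent approaches MAX_AGE
    return max_impact * (1 - age / MAX_AGE) ** gamma

def interact(ego_op, partner_op, ego_age, p):
    """One unidirectional dyadic interaction; returns Ego's new opinion (Eqs. 1-3)."""
    diff = abs(ego_op - partner_op)
    if diff <= p["pos_max_diff"]:                                  # positive influence
        impact = age_impact(ego_age, p["pos_max_impact"], p["pos_gamma"])
        return ego_op + (partner_op - ego_op) * impact             # Eq. (1)
    if diff >= p["neg_min_diff"]:                                  # negative influence
        impact = age_impact(ego_age, p["neg_max_impact"], p["neg_gamma"])
        if ego_op > partner_op:
            return min(1.0, ego_op + (1 - ego_op) * impact)        # Eq. (2)
        return max(0.0, ego_op - ego_op * impact)                  # Eq. (3)
    return ego_op                                                  # neutral influence: no change

# Illustrative parameter values (placeholders, not the optimized values reported in Table 2)
params = {"pos_max_diff": 0.05, "pos_max_impact": 0.4, "pos_gamma": 5,
          "neg_min_diff": 0.7, "neg_max_impact": 0.3, "neg_gamma": 2}
teen, adult = interact(0.95, 0.92, 15, params), interact(0.95, 0.92, 70, params)
print(teen, adult)   # the teenager moves further towards the partner than the adult does
```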
2.2 Empirical Data As noted above, our goal is to link this exploratory model to empirical data. We have selected an influential study by Voas [13] that demonstrates the change among cohorts in opinions related to the shift from religious to secular societies across several countries in Europe. This is a phenomenon that has been documented and well-studied by scholars over the last few decades [21]. Voas documents the decline of religiosity and provides a model producing an s-shape trajectory of the decline of religiosity over time, from a very religious country at year 0 to a very secular one around year 200. Given that in the model the maximum opinion value is 1 and the minimum is 0 and that 200 years corresponds to ~40 five-year cohorts, we converted the trajectory provided by Voas to values according to five-year cohorts in the x-axis and used this trajectory as a target trajectory against which to optimize the parameters of the model (Fig. 5).
Fig. 5 Decline of religiosity
2.3 Optimization, Simulations and Parameters Variation We ran optimization experiments to find combinations of parameter values (Table 1) that could lead to a decrease in opinions (religiosity) among cohorts in a similar fashion to the decline in religiosity found by Voas in the European Social Survey [13]. We used the optimization engine of AnyLogic, which allows the user to obtain a combination of values that increases or decreases a specific output value obtained from an input function. In our case, the input function calculated the residual sum of squares (RSS) between the model cohort values and the cohort values of the decline of religiosity (Fig. 6). The optimization experiments found the combination of parameters that minimize the output value (RSS). We ran a total of five optimization experiments, from which we obtained five different combinations of optimized values (Table 2).

Table 1 Parameters optimized
Parameter | Description | Potential values
Bias inheritance (Weibull distribution)
  Shape | The shape parameter of the distribution | [0.1–2.0]
  Scale | The scale parameter of the distribution | [0.01–1.0]
Positive interactions
  Max opinion difference | Determines size of the interval of attraction (Fig. 1) | [0.005, 0.1]
  Max impact value | Maximum impact of interactions (Eq. 1) | [0.05, 0.5]
  γ Age impact | Age impact modulator when older partner (Eq. 4) | [0, 100]
Negative interactions
  Min opinion difference | Determines size of the interval for repulsion (Fig. 2) | [0.05, 0.8]
  Max impact value | Maximum impact of interactions (Eqs. 2–3) | [0.05, 0.5]
  γ Age impact | Age impact modulator when older partner (Eq. 4) | [0, 100]
Fig. 6 Average cohort value in the model (red) in comparison with empirical decay (black). The x-axis represents cohort number (the farther to the left the older the cohort). The y-axis represents the average opinion value per cohort
Table 2 Optimized parameters values. Values from five optimization experiments. RSS = residual sum of squares
Parameter | Median value | Min–Max
RSS | Median: 0.263 | Max–Min: [0.221–0.312]
Bias inheritance
  Shape parameter | 1.259 | [0.100–1.974]
  Scale parameter | 0.127 | [0.010–0.795]
Positive influence
  Max opinion difference | 0.036 | [0.026–0.039]
  Max impact value | 0.402 | [0.396–0.478]
  γ Age impact | 44.902 | [39.414–57.970]
Negative influence
  Min opinion difference | 0.708 | [0.666–0.722]
  Max impact value | 0.331 | [0.321–0.344]
  γ Age impact | 2.269 | [0.000–3.424]
Simulations were run for 300 years. We did this because cohort number 40 would have only been born at year 200 and we wanted to allow for agents in that last cohort to alter opinions during their whole life span. This required us to let the model run for 300 years. Each year consists of 52 weeks and agents have a random social interaction every two weeks. During the simulation we collected the average opinion of each five-year cohort (agents were grouped from cohort 0 to 40 according to their year of birth) and used this value to calculate the RSS at the end of the simulation. The parameters that were optimized are shown in Table 1. We constrained the potential range of values that each parameter could have.
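As a rough sketch of how the optimization objective can be computed, the snippet below calculates cohort averages and the RSS against a target trajectory. This is not the AnyLogic input function itself; the logistic curve is only an illustrative s-shaped stand-in for the Voas trajectory, and grouping agents by integer cohort indices 0–40 is an assumption.

```python
import numpy as np

def cohort_means(opinions, cohort_ids, n_cohorts=41):
    """Average opinion per five-year cohort (cohort indices 0..40)."""
    means = np.full(n_cohorts, np.nan)
    for c in range(n_cohorts):
        mask = cohort_ids == c
        if mask.any():
            means[c] = opinions[mask].mean()
    return means

def rss(model_cohorts, target_cohorts):
    """Residual sum of squares between model and target cohort values (the objective)."""
    valid = ~np.isnan(model_cohorts)
    return float(((model_cohorts[valid] - target_cohorts[valid]) ** 2).sum())

# Illustrative s-shaped target over 41 cohort values (not the exact Voas curve)
cohorts = np.arange(41)
target = 1.0 / (1.0 + np.exp(0.3 * (cohorts - 20)))
model = np.clip(target + np.random.normal(0, 0.05, 41), 0, 1)  # fake model output
print(rss(model, target))
```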
3 Results 3.1 Decrease in Opinion Among Cohorts Figure 6 shows the decrease in the average value of opinions among cohorts in the model (red) and in the empirical data (black). The decrease in the model fits the empirical data only moderately (RSS in Table 2). Nevertheless, average opinion (conservative religiosity) in the model does decrease with time and appears to differ among cohorts. More interestingly, later cohorts also appear to have a lower opinion value (become more liberal) than earlier cohorts (Fig. 6).
3.2 Bias Inheritance of Opinions Results of the five optimization experiments are shown in Table 2. The optimized values of the Weibull distribution suggest that the inheritance of opinions follows a skewed distribution (Fig. 7). Most agents (~80%) inherit an opinion value that is 80–100% of their parent's value; only a minority (~6%) inherit opinion values that are half or less than half the value of their parents (Fig. 7).
3.3 Confidence Intervals (CI): Positive and Negative The experiments show that in all cases the maximum opinion difference for positive influence was much lower than the minimum opinion difference for negative influence (Table 2). In other words, agents were positively influenced only by others that had a very similar opinion and negatively influenced only by others that had a very different opinion than theirs; and the zone of neutrality, where agents are neither attracted nor repelled by other’s opinion, was large (Fig. 3). Hence, to be repulsed by others’ opinions, interacting agents must have extreme opinions.
Fig. 7 Distribution of bias values. Agents inherit an opinion value equal to their parent’s value*bias
3.4 Effect of Age on the Positive and Negative Impact of Interactions In all five simulation experiments, the impact of positive interactions appears higher than that of negative interactions (Table 2). However, the impact of positive interactions decreases much faster with age than the impact of negative interactions. In fact, positive interactions stop having a significant influence on agents' opinions (impact value = 0.7, Table 2). It seems, therefore, that once agents reach adulthood (~20–25 years old), their opinions do not change unless they encounter agents with extreme opposite opinions. In the model, the inheritance of biased opinion values transmitted from parents to offspring is a necessary process for the emergence of the intergenerational change in opinions. When in our simulations bias inheritance values are drawn from a normal distribution (μ = 1, σ = [0.164–0.2]) rather than from a Weibull distribution, the fit between the empirical data and the model's results is worse (median RSS value = 0.635). This suggests that the simulation requires that at least a small percentage of agents (~6%) inherit an opinion value that is half that of their parent or lower. This inheritance process produces a population of agents whose opinions are at the extreme of the continuum. The presence of agents with extreme opinions seems necessary to start the process of opinion change among cohorts. Note, however, that this inheritance process alone (i.e., inheritance of biased opinion values without age effects and social interactions) is not enough to produce intergenerational changes. Simulation runs that only include this inheritance process show a worse fit than simulations with this inheritance process plus age effects and social interactions (median RSS with no age effects and social interactions = 0.359). Furthermore, running the model without social interactions is somewhat unrealistic since it is well known that people's opinions are readily influenced by others. It is also important to note that the biased inheritance of opinion values can be seen as an abstraction of additional social forces that are not explicitly modeled (e.g., the influence of role models [22]). Social interactions occur randomly in the model. Every agent has the same probability of meeting any other agent. Networks are thus not represented in the model. However, we do not think that the lack of a network structure has a major effect on the model's results. From the literature, we know that social networks are usually comprised of others with similar opinions [23]. This is the type of social network we would expect to emerge in the model if we were to quantify and link agents that have positive interactions among each other. This is because the confidence interval for positive interactions is small and thus the networks of agents that positively influence each other's opinions must be comprised of agents with homophilous opinions. Further, if we were to constrain agents into opinion homophily networks, we would be missing interactions among agents with extreme opinions and thus preclude the effect of negative social interactions on the agents' opinions. The model presented here was designed with the goal of exploring potential mechanisms underlying intergenerational changes in opinions in populations spanning over 300 years.
In particular, motivated by empirical findings in social learning [13, 14], we were interested in testing whether mechanisms related to age effects could give rise to the intergenerational decay in religiosity, a phenomenon observed across many European countries. Our findings suggest that age effects, particularly during the teenage years, may be behind the observed shifts in the level of religiosity among cohorts. Indeed,
several studies on secularization suggest that religious socialization during the formative years is pivotal in determining whether religious beliefs are acquired and maintained during adulthood [24–26]. Further, our results also raise other interesting questions. For instance, can these teenage-related mechanisms be generalized to other contexts or beliefs? In the opinion dynamics literature, researchers usually classify beliefs into two categories: subjective and objective beliefs. Subjective beliefs, such as religion or politics, usually elicit strong convictions and/or emotions. Objective beliefs, on the other hand, are governed neither by convictions nor emotions. If the (teen)age effects suggested by our results are a general mechanism for the acquisition, change, and maintenance of beliefs, we should expect similar patterns of intergenerational change in beliefs whether they are subjective or objective. However, the intergenerational change in beliefs observed in human societies usually occurs in moral and political subjects, i.e., subjective beliefs. Hence, the mechanisms behind the acquisition, change, and maintenance of objective beliefs may be different from the ones suggested here. When it comes to subjective beliefs (moral or political values), these may be adopted at an early age and become difficult to change in adulthood; when it comes to objective beliefs, other mechanisms underlying learning and change of opinions may be at play. In the model, the religious opinion value is a continuous variable ranging between 0 and 1, meaning that there are preestablished maxima and minima. Without these limits, agents with extremely low religiosity may become more and more radical as long as they keep meeting others with extremely high religiosity; i.e., parts of the population would polarize. For polarization to happen, however, some agents need to escape the pull of the population towards increasingly lower religiosity. Once agents with a low enough religiosity start to emerge, there would be a self-sustaining mutual repulsion of low and high religiosity agents (given that they interact). In such cases, enclaves of high religiosity agents may then remain in the population. In sum, our results do not show a perfect fit between the model and empirical data. The fit could be considered moderate. This suggests that other factors are likely playing a role in the way individuals acquire and change their opinions. Future work might involve the integration of aspects of the current model with aspects of other ABMs of secularization processes that have more complex cognitive architectures [27–30]. Nevertheless, our model was designed to explicitly test these age-related mechanisms, leaving out several other potential processes such as social networks (friends, family, acquaintances, neighborhood, job, etc.), spatially explicit interactions, influence of role models or prestigious individuals, different types of social learning strategies, and individuals' personality. Exploring all these mechanisms at the same time would have made the model more complex and thus more difficult to understand. Even so, our results add plausibility to the claim that (teen)age effects are an important mechanism in intergenerational changes in beliefs. We hope our work will motivate further exploration of this important societal phenomenon.
References 1. Brooks, C., Bolzendahl, C.: The transformation of US gender role attitudes: cohort replacement, social-structural change, and ideological learning. Soc. Sci. Res. 33(1), 106–133 (2004) 2. Bolzendahl, C.I., Myers, D.J.: Feminist attitudes and support for gender equality: opinion change in women and men, 1974–1998. Soc. Forces 83(2), 759–789 (2004) 3. Andersen, R., Fetner, T.: Cohort differences in tolerance of homosexuality: attitudinal change in Canada and the United States, 1981–2000. Public Opin. Q. 72(2), 311–330 (2008) 4. Lewis, G.B., Gossett, C.W.: Changing public opinion on same-sex marriage: the case of California. Politics Policy 36(1), 4–30 (2008) 5. Hamberg, E.M.: Stability and change in religious beliefs, practice, and attitudes: a Swedish panel study. J. Sci. Study. Relig. 63–80 (1991) 6. Crockett, A., Voas, D.: Generations of decline: religious change in 20th-century Britain. J. Sci. Study Relig. 45(4), 567–584 (2006) 7. Lutz, W.: Demographic metabolism: a predictive theory of socioeconomic change. Popul. Dev. Rev. 38, 283–301 (2013) 8. The Economist: Societies change their minds faster than people do. Economist (2019). https://www.economist.com/graphic-detail/2019/10/31/societies-change-their-mindsfaster-than-people-do. Accessed 21 Aug 2021. 9. Striessnig, E., Lutz, W.: Demographic strengthening of European identity. Popul. Dev. Rev. 42(2), 305 (2016) 10. Twenge, J.M., et al.: Generational and time period differences in American adolescents’ religious orientation, 1966–2014. PLoS ONE 10(5), 1–17 (2015) 11. Funk, C., Smith, G.: Nones” on the rise: one-in-five adults have no religious affiliation. DC, Pew Research Center, Washington (2012) 12. Voas, D., Bruce, S.: Secularization in Europe: an analysis of inter-generational religious change. In: Value contrasts and consensus in present-day Europe, Arts, W., Halman, L. (eds.) Leiden: Brill (2014) 13. Voas, D.: The rise and fall of fuzzy fidelity in Europe. Eur. Sociol. Rev. 25(2), 155–168 (2009). https://doi.org/10.1093/esr/jcn044 14. Molleman, L., Ciranka, S., van den Bos, W.: Social influence in adolescence as a double-edged sword (2021) 15. Molleman, L., Kanngiesser, P., van den Bos, W.: Social information use in adolescents: the impact of adults, peers and household composition. PloS one 14(11), e0225498 (2019) 16. Flache, A., et al.: Models of social influence: towards the next frontiers. J. Artif. Soc. Soc. Simul. 20(4), 1460–7425 (2017). https://doi.org/10.18564/jasss.3521 17. Ganzach, Y., Gotlibovski, C.: Intelligence and religiosity: within families and over time. Intelligence 41(5), 546–552 (2013). https://doi.org/10.1016/j.intell.2013.07.003 18. Ellis, L., Hoskin, A. W., Dutton, E., Nyborg, H.: The future of secularism: a biologically informed theory supplemented with cross-cultural evidence. Evol. Psychol. Sci. 1–19 (2017) 19. Deffuant, G., Neau, D., Amblard, F., Weisbuch, G.: Mixing beliefs among interacting agents. Adv. Complex Syst. 3, 11 (2001) 20. Flache, A., Macy, M.W.: Small worlds and cultural polarization. J. Math. Soc. 35(1–3), 146–176 (2011) 21. Stolz, J.: Secularization theories in the twenty-first century: ideas, evidence, and problems. Presidential address. Soc. Compass 67(2), 282–308 (2020) 22. Moussaïd, M., Kämmer, J. E., Analytis, P. P., Neth, H.: Social influence and the collective dynamics of opinion formation. PloS one 811, e78433 (2013) 23. McPherson, M., Smith-Lovin, L., Cook, J.M.: Birds of a feather: homophily in social networks. Ann. Rev. Sociol. 27(1), 415–444 (2001) 24. 
Gregory, J. P., Greenway, T. S.: Is there a window of opportunity for religiosity? children and adolescents preferentially recall religious-type cultural representations, but older adults do not. Relig. Brain Behav. 1–19 (2016). https://doi.org/10.1080/2153599X.2016.1196234
25. Martin, T.F., White, J.M., Perlman, D.: Religious socialization: a test of the channeling hypothesis of parental influence on adolescent faith maturity. J. Adolesc. Res. 18(2), 169–187 (2003). https://doi.org/10.1177/0743558402250349 26. Mikoski, C., Olson, D. V.: “Does Religious Group Population Share Affect the Religiosity of the Next Generation?,” J. Sci. Study Relig. (2021) 27. Gore, R., Lemos, C., Shults, F.L., Wildman, W.J.: Forecasting changes in religiosity and existential security with an agent-based model. J. Artif. Soc. Soc. Simul. 21, 1–31 (2018) 28. Shults, F.L., Gore, R., Lemos, C., Wildman, W.J.: Why do the godless prosper? Modeling the cognitive and coalitional mechanisms that promote atheism. Psychol. Relig. Spiritual. 10(3), 218–228 (2018) 29. Wildman, W. J., Shults, F. L., Diallo, S. Y., Gore, R., Lane, J. E.: Post-supernaturalist cultures: there and back again. Secularism Nonreligion (2020) 30. Cragun, R., McCaffree, K., Puga-Gonzalez, I., Wildman, W., Shults, F. L.: Religious exiting and social networks: computer simulations of religious/secular pluralism. Secularism Nonreligion 10(1), (2021)
Comparison of Viral Information Spreading Strategies in Social Media Sri Sailesh Meegada and Subu Kandaswamy
Abstract Influencing people's opinions has always been a challenging problem. Businesses and brands thrive on maintaining their image in people's minds. With the advent of social media, it has become easier than ever to reach people directly, and marketers have been quick to jump on this bandwagon. Recently, however, this technology has also been playing a pivotal role in influencing people's opinions and attitudes, owing to its viral effects. In this paper, we use an Agent Based Model to study the change in people's attitudes and opinions when treated with different information spreading strategies. We found that, in a social network, seeding based on a single attribute such as the number of friends or the betweenness of individuals results in comparable performance. We also found that when attributes such as degree and betweenness are combined with other attributes, such as the engagement a user gets on their posts, there is a significant difference in performance. In addition, we also measure the impact of social media platforms trying to maximize their click-through rate by showing content which aligns with the user, and found that this significantly influences the formation of echo chambers. Keywords Viral information · Social media · Seeding strategy · Echo chambers
1 Introduction In recent times, social media has changed how information flows through society and it has been a game changer for the advertising industry. Advertisers use several tactics to help disseminate their information in the network, for example, using high-potential "seed" users. Seed users are those individuals who are introduced to the information before anyone else. Social media is also being used as a tool for influencing public opinion. In major world events like Brexit, the role of social media has been discussed and analyzed in academia, the news and political debates. While some say
social media has brought people together and the internet gives people a chance to see genuine unbiased content [10], many critics have discussed how social media technology can be used with potentially malicious intent [9]. Apart from this, there has also been a lot of debate on new age social media and its tendency to form filter bubbles [11], a phenomenon arising from the mix of human and technological processes. This results in the individual only being exposed to a tailored selection of information which fits their pre-existing attitudes. Seeding is a process in which information is introduced to a select target audience with the intention of making it spread virally, which is vital for advertisers and other such organizations. Studies have explored analyzing social media networks using graph theory [3, 8]. In this paper, we use Agent Based Modelling (ABM). ABM involves defining the rules and activities for individual agents, and observing the properties which emerge from their interactions. Here, we use an agent-based model based on Geschke et al. [5] to analyse the effect of seeding strategies based on attributes such as degree, betweenness and user engagement on such new age social media, and how they affect the spread of information and influence the ideologies of people. We also introduce a viral information spreading model, based on the Triple Filter model of Geschke et al. [5].
2 Background and Previous Work Studies have analyzed how seeding strategies affect product adoption and other factors in the domain and context of viral marketing [6, 12], but do not account for opinion dynamics or the effects of social media. Studies on opinion dynamics and information spread usually employ a disease spreading model [13], but the effects of modern features of social media, like recommender systems, and the echo chambers and filter bubbles which emerge in them are not considered. Some use network optimization approaches to study these kinds of problems [3, 8]. There are also a few studies which deal with information diffusion, with the agents' personality traits as the main focus [7]. Geschke et al. [5] explore social media's echo chambers and filter bubbles in great detail, but do not explore any seeding strategies. They developed an ABM to study the echo chamber and filter bubble effect. They came up with a Triple-filter model, where the filtering of information happens at 3 levels: at an individual level, at a social level and at a technological level. They investigated the role of social media platforms in the formation of echo chambers and the polarization of people's opinions. They found that when modern social media phenomena such as recommender systems are used, the opinions of people tend to get more polarized.
3 The Viral Information Spreading Model Here we introduce a viral information spreading model, based on Geschke et al. [5] Triple Filter model. The model shows the attitudes of agents, and news items as a point in 2-dimensional space and changes in attitude can be measured easily (Fig. 1).
Fig. 1 A screenshot of the model in NetLogo improved upon the triple filter model
Fig. 2 A diagram of the attitude space showing the different types of links and agents
The model consists of a 2-dimensional attitude space, shown in Fig. 2, on which the agents exist. The agents are connected to each other through friend links, which form a social network. The pieces of information are also present in the same attitude space and are connected to the agents through information links. The probability of an agent connecting to information is dependent on the ideological distance between them, and also the level of acceptance the agent has. The agents will connect to the information with a probability given by Eq. 1. The information gets filtered at 3 different levels: 1. At an individual level, which is a manifestation of the person’s own confirmation bias in accepting a certain piece of information. This is modelled as a behavior of the agent where the individual accepting a piece of information is a probabilistic event, governed by Eq. 1.
2. At a social level, where individuals share information with their friends who also hold similar ideologies. This is a consequence of the sharing of information between friends through the social media network. 3. At a technological level, where the social media platform’s recommender systems recommend information which is similar to the user’s attitude. This is modelled as the behavior of the system where the individual is shown information which is close to them in the attitude space. The social network in the model acts as a social filter. The source of new information acts as a technological filter, and the acceptance probability of the agents, acts as an individual filter. We adapted this base model designed by Geschke et al. [5] and built upon it. The model provides an interface to control all the parameters as shown in Fig. 1.
3.1 Attitude Space The attitude space is a two dimensional space, which represents the attitudes or opinions of individuals and pieces of information as points in space. A single dimension represents the range of attitudes on a certain topic (e.g. political right vs. left). The attitude values range from −1 to +1 [5]. Individuals’ behavior, as mentioned in the next subsection, is encapsulated by Agents and similarly, news items’ properties are encapsulated by infobits in the attitude space. The position of an information bit or individual represents its attitudinal content on the two dimensions.
3.2 Agent Activities and Links The simulation proceeds in discrete time steps. At each time step, new infobits are introduced to the system at random locations, and the agents try to integrate the information. A successful integration of the infobit implies that the agent has accepted the information. The integration of the information is a probabilistic event, governed by the following equation. P(d, D, δ) =
D^δ / (d^δ + D^δ)    (1)
Where d is the attitude distance, which shows how far the agent is from the infobit. D is the latitude of acceptance, which specifies how willing the agent is in accepting farther infobits. δ is a sharpness parameter which specifies how sharply the probability drops. By integrating the information with themselves, the agents form an infolink with the corresponding infobit. This process is shown below in Algorithm 1.
Algorithm 1: Agent’s procedure of finding infobits and creating new infobits begin f itting − in f obits ← all infobits within my latitude of acceptance and don’t have an infolink with me; if count of fitting-infobits == 0 then create a new infobit; try to integrate new infobit; else select one of fitting infobits; try to integrate the infobit; end end Algorithm 2: Agent’s procedure of posting an infobit to its friends begin if count of my infolinks > 0 then select one of my connected infobits; ask my friends to try to integrate selected infobit; end end Each agent also has some number of friends, with whom they share a link. The agents also post one of the infobits they are connected to which is selected randomly, to a friend, and the friend now tries to integrate the infobit as shown in Algorithm 2. Each agent can only form a certain number of infolinks. The memory capacity of an agent limits the maximum number of infolinks that it can form. When an agent needs to form a new infolink, and if it’s memory is full, one of its existing info links will be removed randomly to make room for a new infolink. If an infobit is not having infolinks with any agent, the infobit is removed. The new position (attitude) of an agent, is the average of all the positions of the infobits it holds in the memory. This is shown in Algorithm 3. Algorithm 3: Agent’s procedure to integrate new infobits and manage their memory begin if random(0,1) < integration-probability then if count of my infolinks > memory then ask one of my connected infobits to die; end create infolink with new infobit; set position = mean of positions of connected infobits; end end
The subsequent sections are additions we’ve made to the Geschke et al. [5] model to simulate viral information spread.
3.3 Seeding Infobits Analogous to specially created messages designed to be used in an advertising campaign, we have special infobits to seed the agents. These are not randomly spawned. They are created and spawned at the location where the seeder wants to influence the agents to, which is the target attitude. In the real world these are analogous to targeted advertisement campaigns. These are initially only introduced to the seeds (a chosen few agents whose attributes are desirable).
3.4 Networks We used two types of networks for modelling the social network: 1. Small World Network (SW): The network is generated by starting with a lattice network, and randomly rewiring some of the edges. This produces graphs with small world properties such as short average path lengths and high clustering [14]. This type of network was also used in Geschke et al. [5]. 2. Preferential Attachment (PA): The network is generated with a preferential attachment mechanism. Nodes are incrementally added to the network and connect in a way that is preferentially biased toward individuals who already have many connections. This is similar to how new users in social media follow the users who are already popular [1].
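Both network topologies can be generated with standard library routines. The sketch below uses networkx purely for illustration (the paper's model builds its networks in NetLogo); choosing m = 10 for the preferential attachment graph is an assumption intended to give a mean degree close to the 20 friends listed in Table 1.

```python
import networkx as nx

N_AGENTS = 500
AVG_FRIENDS = 20   # average number of friends (Table 1)

# Small World (Watts-Strogatz): ring lattice with AVG_FRIENDS neighbours, 30% of edges rewired
sw = nx.watts_strogatz_graph(n=N_AGENTS, k=AVG_FRIENDS, p=0.3)

# Preferential Attachment (Barabasi-Albert): each new node attaches to m existing nodes,
# giving a mean degree of roughly 2 * m
pa = nx.barabasi_albert_graph(n=N_AGENTS, m=AVG_FRIENDS // 2)

for name, g in (("SW", sw), ("PA", pa)):
    print(name, sum(d for _, d in g.degree()) / g.number_of_nodes())
```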
3.5 Target Attitude The goal of each seeding strategy is to influence the agents to migrate towards a specific target point in the attitude space, which can be seen as changing their existing opinion to a target opinion. The average distance of each agent from the target attitude point is taken as a measure. The average target distance S is given by

S = (1/N) * Σ_{i=1}^{N} d_i    (2)
Where di is the distance from ith agent to the target point, and N is the total number of agents. Apart from that, we also consider the number of time steps it takes for the model to reach a sufficiently stable state. For this, we use average number of agents
in a specified radius of each agent, or the agent density around an agent. This is a measure of the closeness of the agents in their respective cluster. When the average agent density of the agents crosses a certain threshold, we can consider the model to have reached its final state.
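Both measures can be written down directly; the sketch below is a hypothetical helper, with the radius of 0.2 and threshold of 10 taken from Table 1 and the target point (0, 8) assumed from the seeding target coordinates.

```python
import math

def average_target_distance(agent_positions, target=(0.0, 8.0)):
    """Eq. (2): mean Euclidean distance of the agents from the target attitude point."""
    return sum(math.dist(p, target) for p in agent_positions) / len(agent_positions)

def reached_final_state(agent_positions, radius=0.2, threshold=10):
    """Stop when, on average, each agent has at least `threshold` other agents within `radius`."""
    densities = [sum(1 for q in agent_positions if q is not p and math.dist(p, q) <= radius)
                 for p in agent_positions]
    return sum(densities) / len(densities) >= threshold
```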
3.6 Seeding Strategies A number of seeding strategies have been proposed and tested in the field of product adoption [6, 12], which aim to maximize attributes such as Net Present Value or other similar measures. Most of them revolve around degree and betweenness. In our case, we are interested in measuring the change in people's opinions, and are considering the following seeding strategies. High Degree: The number of friend links an agent has is called its degree. The higher the number of links, the higher the degree. Agents with higher degrees are seeded with the information. We call it High Degree Seeding (HD). In the real world, this is analogous to the count of friends or followers a person has on their account. High Betweenness: Betweenness centrality of an agent is the measure of the number of shortest paths which pass through it. The agents with higher betweenness are seeded with the information. We call it high betweenness seeding (HB). In the real world, this is similar to people who are well connected to many different groups, serving as a common link between them. User Engagement: We propose that user engagement would be a valuable attribute to consider while seeding. In the real world, user engagement would be the amount of likes or comments a user receives on their posts. In our model, this is calculated as the number of infobits posted by an agent which are accepted by the receiver. Hence we propose that the agents with higher user engagement should be seeded with the information. We call this high user engagement seeding (UE). Hybrid Strategies: Apart from the individual attributes, we propose that combining attributes to create hybrid seeding strategies would perform better. We combine HD and UE to create a combination of high degree and user engagement, which we call HDUE. We combine HB and UE to create a combination of high betweenness and user engagement, which we call HBUE. Bonacich Centrality: According to the degree centrality approach, having more connections implies having more power. However, having the same degree does not necessarily make users equally important. The Bonacich centrality approach goes beyond the degree approach and also accounts for the connections in the immediate neighborhood of the agent [2]. Fewer connections of actors in your neighborhood indicates power. The agents with higher Bonacich centrality are seeded with the information. Sequential Seeding: All of the previous seeding strategies had a common feature, that the information which was being seeded was of a constant attitudinal value. We
propose that starting with a more central stance, and then gradually increasing the polarity of the information, would be better. We start introducing information at the origin of the attitude space (Sect. 3.1), and gradually shift the position to the desired attitude value. New information is continually introduced throughout this shift. The position at any time is given by

P(t) = min(T * t / s, T)    (3)
Where P(t) is the position at time step t, T is the target position, and s is the number of time steps to reach the target position (the inverse of speed). From our initial experiment, we found that high degree, high betweenness and high user engagement are comparable in performance. Hence this strategy uses High Degree as its base. That is, the new information is seeded to the agents according to the HD criteria. Maximizing Click-through: Apart from the seeding strategies, we also studied the effect of social media platforms trying to maximize the time users spend on their sites. With this process, we don't intend to influence the users to a specific attitude; rather, it is an action of the social media platform, which keeps recommending content which it thinks the user will like, to keep them engaged on the platform for as long as possible. To measure the effect of such a system, we use a metric called mean infosharer distance, originally used in Geschke et al. [5]. The mean infosharer distance is the average attitude distance between agents who have at least one common infobit in their memory.
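The sequential seeding schedule of Eq. (3) can be illustrated as below. Eq. (3) is written for a scalar target T; applying the same min(T·t/s, T) drift to each coordinate of the two-dimensional target is an assumption made for this sketch.

```python
def seeding_position(t, target=(0.0, 8.0), s=75):
    """Eq. (3) applied per coordinate: drift the seeded attitude from the origin to the target."""
    frac = min(t / s, 1.0)
    return (target[0] * frac, target[1] * frac)

for t in (0, 25, 50, 75, 100):
    print(t, seeding_position(t))   # the seeded content starts central and polarises gradually
```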
4 Experiments and Results The model explained in the previous section is implemented in NetLogo as an agent-based model. To study the different strategies, first, a number of agents are created in the attitude space, and connected according to the chosen network topologies. Then the strategy currently being investigated is taken, and the agents are ranked according to the strategy. Then a proportion of the agents at the top of the rank list are selected, and seeded with the information.

Algorithm 4: Procedure to select agents and seeding them with infobits
Data: num-seeded, the number of agents to be seeded
begin
  seeded-guys ← top num-seeded number of agents sorted by weight;
  ask seeded-guys to create new seeding infobit;
  ask seeded-guys to try integrating new seeding infobit;
end
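The ranking-and-seeding step can be sketched as follows. This is an illustrative Python version of Algorithm 4, not the NetLogo procedure: the scoring helpers use networkx, the engagement dictionary is a hypothetical input, and the hybrid weighting anticipates Eq. (4) introduced below.

```python
import networkx as nx

def normalise(values):
    top = max(values.values()) or 1.0
    return {k: v / top for k, v in values.items()}

def seeding_weights(graph, engagement, strategy="HD", a=0.5):
    """Score agents for seeding; hybrid strategies combine engagement with degree/betweenness."""
    deg = normalise(dict(graph.degree()))
    btw = normalise(nx.betweenness_centrality(graph))
    eng = normalise(engagement)
    plain = {"HD": deg, "HB": btw, "UE": eng}
    if strategy in plain:
        return plain[strategy]
    base = deg if strategy == "HDUE" else btw
    return {n: a * eng[n] + (1 - a) * base[n] for n in graph.nodes}   # w_i = a*u(i) + (1-a)*x(i)

def select_seeds(graph, engagement, proportion=0.10, **kwargs):
    """Algorithm 4: pick the top `proportion` of agents sorted by weight."""
    weights = seeding_weights(graph, engagement, **kwargs)
    num_seeded = max(1, int(proportion * graph.number_of_nodes()))
    return sorted(graph.nodes, key=weights.get, reverse=True)[:num_seeded]
```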
Table 1 Parameters and their values used in the simulation
Parameter | Value(s)
Number of agents | 500
Acceptance latitude | 0.3
Acceptance sharpness | 20
Memory | 20
World's x-coordinate range | −16 to 16
World's y-coordinate range | −16 to 16
Average number of friends | 20
Proportion of agents seeded | 10%
Seeding target co-ordinates | (0, 8)
Rewiring probability for SW network | 0.3
Average agent density radius | 0.2
Average agent density threshold | 10
For example: If the strategy chosen is high degree, then all the agents will be ranked according to their degree, and the top 10% are seeded. This seeding process is shown in Algorithm 4. The ranking and seeding happen at every time step. This process is continued until the average agent density around the agents crosses the desired threshold, as mentioned in Sect. 3.5. This whole process is counted as one run. Once a run is complete, the average target distance and the time taken are recorded. All of the parameters and their values used are tabulated in Table 1. Experiment 1: We compare the plain seeding strategies, which are high degree (HD), high betweenness (HB), and high user engagement (UE), in PA and SW networks. We use the parameters as mentioned in Table 1, and compare the time taken and average target distance for 25 runs. The recorded runs show that the high degree, high betweenness, and user engagement methods are comparable to each other, both in terms of average target distance and time taken. The difference between them was not statistically significant in either network. This is in line with the observations in Hinz et al. [6], which also found high degree and high betweenness strategies to be comparable to each other. This implies that people looking to seed their information can simply use the individuals with a lot of connections instead of trying to find the central or bridge individuals, which is very difficult and requires data about the entire network. Experiment 2: Now, from experiment 1 we know that the plain strategies by themselves are comparable to each other in performance. Hence we wanted to see if the combination of the strategies would make any difference in performance. Namely, we combine the user engagement score with the degree (HDUE), and also the user engagement score with the betweenness of the agent (HBUE). This combination is done in a linear fashion to get a score for ranking the agents:

w_i = a * u(i) + (1 − a) * x(i)    (4)
where u(i) is the normalized user engagement, x(i) is either the normalized degree or the normalized betweenness centrality of the ith agent based on the strategy chosen, and a is the weight given to the user engagement part. For these experiments too, the
above mentioned parameters are the same and the weight for user engagement a is varied from 0 to 1 in steps of 0.1. In this experiment too, PA and SW networks are used. We also compare the combination strategies against their plain strategy counterparts to analyze how user engagement can be used to enhance their performance. While the average target distance for HDUE and HBUE was comparable for all values of a (the weight given to user engagement), the time taken by the two combination strategies differed from each other in the PA network for a values of 0.4 and 0.5. These results are statistically significant (p < 0.05). The HDUE strategy outperforms HBUE. The time taken by HDUE is 258.5 time steps less than HBUE when a = 0.4 and 235.2 time steps less when a = 0.5. This can be seen in the graph (Fig. 3), where we plotted the average time taken by the seeding strategies for varying values of a. When the value of a is low, the combination strategies become similar to the plain strategies, because of the low weight of user engagement, and hence produce similar results to the plain strategies. When the value of a is high, both combination strategies become similar to each other, as high weight is given to user engagement. When the value of a is 0.4 or 0.5 in a PA network, the improvement of HDUE over HD is also statistically significant (p < 0.05). The time taken by HDUE is 306.6 time steps less than HD when a = 0.4 and 266.8 time steps less when a = 0.5. Perhaps people wanting to seed their information would find it valuable to consider not only the number of connections but also the amount of interaction the connections have with the individual. In the real world too, an individual might be perceived as influential if a lot of people interact with their posts, in the form of liking, commenting and sharing the posts.

Fig. 3 Time steps taken by the combination seeding strategies for varying values of user engagement weight a
Experiment 3: Here, we use Bonacich centrality strategy as it goes beyond the degree approach and also accounts for the connections in the immediate neighborhood of the agent. We use Bonacich centrality in the PA and SW networks. This strategy is compared against the plain High Degree strategy to observe the improvement in performance. In the PA network, the time taken by Bonacich centrality strategy is significantly lower than HD. Bonacich centrality based seeding outperforms High Degree seeding by 205 time steps ( p < 0.05). The fact that the degree distribution in PA network is skewed, may have contributed to this behavior. Perhaps, people seeking to promote their information should look beyond just the number of followers a person has, and also look into the immediate neighborhood of that person as well. Now, we use the Sequential Seeding strategy as it doesn’t involve a constant attitudinal value, but starts with a central stance, and then gradually increases the polarity of the information. The parameter s, the number of time steps to reach the target position is varied as follows: 10, 20, 50, 75, 100. In the Small world network, a significant difference in time taken compared to high degree, was observed when the number of time steps to reach target position, s = 75. ( p < 0.05). Sequential Seeding with high degree outperformed plain high degree by 159.08 time steps. In the Preferential Attachment network, a significant difference in time taken was observed when s = 20, s = 75 and s = 100. ( p < 0.05). Sequential Seeding outperforms High Degree by 212.92, 182.24, and 182.96 time steps for s = 20, s = 75 and s = 100 respectively (Fig. 4). Perhaps, people seeking to promote their information are better off starting at a central attitude, and can gradually increase their polarity after gaining the attention of people. Experiment 4: We wanted to test what would happen if the objective was not to influence the opinion of people, but to rather maximize the number of posts they click on. In the real world this is similar to how automated targeted ad placements work. We use the Maximum Click-through process in PA and SW networks. At each step, we create 3 infobits to maximize the click-through. These are created according to the clustering of the users, that is, the infobits are placed in the top 3 areas where users cluster around. These clusters are identified by measuring the number of users in a radius of 5 units. The results show that the mean infosharer distance is significantly lower when click-through maximization is turned on, than when it is off. The mean infosharer distance is lower by 0.036 distance units in the PA network ( p < 0.05) and lower by 0.042 distance units in SW network ( p < 0.05). This may imply that the social media systems may cause the formation of tighter and more constricted echo chambers, even when the original motive was to just keep the user engaged.
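The click-through maximization step can be approximated as in the sketch below; placing the new infobits at the highest-density agent positions is an assumption (the paper speaks of the top three areas where users cluster, identified by counting users within a radius of 5 units).

```python
import math
import random

def clickthrough_infobits(agent_positions, n_new=3, radius=5.0):
    """Spawn new infobits at the densest agent locations to maximize expected clicks."""
    def local_density(p):
        return sum(1 for q in agent_positions if math.dist(p, q) <= radius)
    return sorted(agent_positions, key=local_density, reverse=True)[:n_new]

agents = [(random.uniform(-16, 16), random.uniform(-16, 16)) for _ in range(500)]
print(clickthrough_infobits(agents))   # three positions in the most crowded regions
```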
Fig. 4 Time steps taken by Bonacich centrality, sequential seeding and high degree in various scenarios
5 Conclusion In this paper we have explored the effect of different seeding strategies on a model of modern social media networks. In addition, we propose a new strategy based on user engagement and tested the combinations of these strategies. Our simulations show that the strategy which uses a combination of high-degree and user engagement to target individuals is very effective in improving information acceptance and attitude manipulation. Perhaps, in real world, this indicates that targeting users having many friends in social media, who also have a higher number of likes for their post, might be an effective way to spread viral information. We also propose a new strategy based on dynamically changing the attitude value of the information. The simulations show that this strategy is faster in influencing people. We also explored the effect of modern social media systems trying to engage people on their platform as much as possible, and how it involuntarily leads to the formation of more constricted echo chambers. The ramifications of such systems are currently being discussed [4] and hopefully in the future will lead to the mitigation of the negative effects.
References 1. Albert, R., Barabási, A.L.: Statistical mechanics of complex networks. Rev. Mod. Phys. 74(1), 47 (2002) 2. Bonacich, P.: Power and centrality: a family of measures. Am. J. Sociol. 92(5), 1170–1182 (1987) 3. Chen, W., Wang, Y., Yang, S.: Efficient influence maximization in social networks. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 199–208 (2009) 4. Estrada-Jiménez, J., Parra-Arnau, J., Rodríguez-Hoyos, A., Forné, J.: Online advertising: analysis of privacy threats and protection approaches. Comput. Commun. 100, 32–51 (2017) 5. Geschke, D., Lorenz, J., Holtz, P.: The triple-filter bubble: using agent-based modelling to test a meta-theoretical framework for the emergence of filter bubbles and echo chambers. Br. J. Soc. Psychol. 58(1), 129–149 (2019) 6. Hinz, O., Skiera, B., Barrot, C., Becker, J.U.: Seeding strategies for viral marketing: an empirical comparison. J. Mark. 75(6), 55–71 (2011) 7. Hu, H.H., Lin, J., Cui, W.T.: Intervention strategies and the diffusion of collective behavior. J. Artif. Soc. Soc. Simul. 18(3), 16 (2015) 8. Kempe, D., Kleinberg, J., Tardos, É.: Maximizing the spread of influence through a social network. In: Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 137–146 (2003) 9. Marwick, A., Lewis, R.: Media Manipulation and Disinformation Online. Data & Society Research Institute, New York (2017) 10. Noor, P.: The fact that we have access to so many different opinions is driving us to believe that we’re in information bubbles’: Poppy Noor meets Michal Kosinski, psychologist, data scientist and professor at Stanford university. The Psychologist 30, 44–47 (2017) 11. Pariser, E.: The Filter Bubble: What the Internet is Hiding from You. Penguin UK, London (2011) 12. Stonedahl, F., Rand, W., Wilensky, U.: Evolving viral marketing strategies. In: Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation, pp. 1195–1202 (2010) 13. Tambuscio, M., Ruffo, G., Flammini, A., Menczer, F.: Fact-checking effect on viral hoaxes: a model of misinformation spread in social networks. In: Proceedings of the 24th International Conference on World Wide Web, pp. 977–982 (2015) 14. Watts, D.J., Strogatz, S.H.: Collective dynamics of ‘small-world’ networks. Nature 393(6684), 440–442 (1998)
An Evidence-Driven Model of Voting and Party Competition Ruth Meyer, Marco Fölsch, Martin Dolezal, and Reinhard Heinisch
Abstract In this paper we report on the development of an agent-based model (ABM) simulating the behaviour of voters and the positioning of political parties in Austria. The aim is to create what-if scenarios taking into account contextual changes, such as political crises as well as changes in parties' policy positions and voters' attitudes. Drawing on data from the Austrian National Election Study (AUTNES) and the Chapel Hill Expert Survey (CHES), we are able to map both demand- and supply-side characteristics. We present first results of the simulation analysis of applied strategies of voters and parties. This way, we are able to create first what-if scenarios that show how the results of elections would change if voters applied different strategies when deciding which party to vote for. In developing a simulation for the case of Austria as a reference model, we lay the foundation for more universal applications of ABM in political science.

Keywords Voting behaviour · Party competition · Agent-based simulation
1 Introduction

Simulating agent behaviour in the face of threats to liberal democracy is a novel approach to understanding the challenges posed by radical populism and associated ideologies. Survey research and existing data can provide a snapshot of the attitudinal disposition of voters and provide us with causal explanations of how attitudes and political preferences are connected. Yet, such research can neither provide us with what-if scenarios nor effectively model people's behaviour under a variety of conditions and input factors, which would be necessary when wanting to develop and evaluate response strategies.
R. Meyer (B)
Centre for Policy Modelling, Manchester Metropolitan University, Manchester, UK

M. Fölsch · M. Dolezal · R. Heinisch
Department of Political Science, Paris Lodron University of Salzburg, Salzburg, Austria
The objective of building a social simulation in the PaCE1 project is to study the phenomenon of populism by mapping individual-level political behaviour and explaining the influence of agents on, and their interdependence with, the respective political parties. Voters, political parties and—to some extent—the media can be viewed as forming a complex adaptive system, in which parties compete for citizens' votes, voters decide which party to vote for based on their respective positions with regard to particular issues, and the media may influence the salience of issues in the public debate. Our approach has been to develop a set of valid simulations for one relevant case that we are able to evaluate based on survey data and available expertise on that political system. The following reasons led to our decision for Austria as a case study:

1. Availability of data: The Austrian National Election Study (AUTNES) [1] is one of the most comprehensive national election studies. It covers a wide range of variables, including socioeconomic data, media content and media consumption data and specific attitudinal variables of political psychology. This enables us to base the modelled voters on data collected following the 2013 national election in Austria. With the Chapel Hill Expert Survey (CHES) [2] administered in 2014, supply-side data for the time shortly after the election is available.
2. Relevant political events: The phase of increased migration to Europe in 2015, which also affected Austria in particular, falls into the period between the two national elections of 2013 and 2017.
3. History of populism: Austria includes one of the longest established and most successful radical right populist parties, the Freedom Party of Austria (FPÖ), which not only had to contend with challengers from the radical right (BZÖ, Team Stronach) but also from the centre-right (ÖVP). Even more importantly, the FPÖ served two periods in government, including holding important ministerial portfolios, and successfully negotiated two leadership changes, attesting to the party's organizational depth and entrenchment in the Austrian political system.

1 The Populism and Civic Engagement (PaCE) project is funded by the EU H2020 initiative under grant agreement no. 822337. Website: http://popandce.eu/.
2 Model Description

Within political science, agent-based simulations are still a rarely used methodology [3, 4]. Most agent-based models of elections and party competition refer to spatial and rational choice models going back to Downs [5]. The dimensions of the political space in these models are usually interpreted as policy issues, e.g. economic left–right or social liberal–conservative. Research on the strategic behaviour of parties and voters started in the early 1990s with Kollman et al. [6], who investigated how two competing parties position themselves in a space defined by 15 issues when they are uncertain about the position of voters. Laver [7] reduced the political space to two dimensions but extended the number of parties to five, which is important for analyses of European party systems that are typically defined by patterns of multiparty competition. Like [6], he assumes voters' issue positions to be stable. All parties want to increase their share of votes by positioning themselves strategically following one of four different strategies. The model was adapted [8] to allow for the emergence of new and the disappearance of old parties. While [7] tested his model using electoral data from Ireland, most studies of party competition using ABM were for a long time "an exclusively theoretical exercise" [9]. Muis' study on party competition in the Netherlands [9] paved the way for combining simulations with real world data. Moreover, he extended the previous models by including the role of the media. The first study to apply ABM in research on party populism [10] explored how populist radical right parties position themselves in a political space to find their "winning formula". The model includes the importance voters attribute to various issues and differentiates between a limited number of party strategies, mostly following [7].

The Austria model expands on this by combining empirical data with theories of voting and party behaviour to represent voters, parties, and their interaction in a political space. From the AUTNES and CHES surveys we identified seven common issues that are used as the dimensions of this space:

• economy (pro/against state intervention in the economy),
• welfare state (pro/against redistribution of wealth),
• budget (pro/against raising taxes to increase public services),
• immigration (against/pro restrictive immigration policy),
• environment (pro/against protection of the environment),
• society (pro/against same rights for same-sex unions),
• law and order (against/pro strong measures to fight crime, even to the detriment of civil liberties).
As the surveys use different scales to code responses (0–10 vs. 1–5 for most questions) the relevant CHES variables had to be re-coded to match the respective AUTNES variables to be able to map the positions of parties and voters into a joint space. For the visualisation of this political space the model user can choose 2 dimensions to be mapped to the x- and y-axes via model parameters. The model distinguishes two different types of agents: voters and parties. Voters are characterised by demographic attributes (age, sex, education level, income level, area of residence), political attitudes (political interest, party they feel closest to and degree of that closeness, propensities to vote for either of the parties) and their positions on all seven issues. They identify up to 3 of these issues as most important and assign weights to them according to their importance. Political parties are characterised by their name and their party programme, which is expressed as their stances towards the seven modelled issues. They all identify two to three of these as their most important issues and assign a weight to them. The behaviour of voters and parties is based on theories from the political science literature. Each party applies a strategy to position itself in the political landscape
(see Sect. 2.1) to attract voters, whereas voters use strategies to decide which party to vote for (see Sect. 2.3). In addition, voters may change their opinions on any of the policy issues, i.e. adapt their position in the political space. The opinion formation process used in this version of the model is detailed in Sect. 2.2.

Informal political discussions with family, friends or other acquaintances have been found to influence political attitudes and behaviours of voters [11, 12]. The social network of voters is thus an important component of a model of voting. While empirical data on networks is rare, studies have shown that the size of political discussion networks is small: people tend to talk to 0–5 other people about politics [13]. In the absence of explicit data for the Austria case study, our model adopts a plausible algorithm with both random and homophilic aspects: each voter forms links with 0–2 other voters, choosing the most similar in age, education, and residential area from a pool of randomly chosen individuals. Since links are bi-directional, this results in a social network where nearly all voters have between 0 and 5 connections to other voters.

The initial state of the model represents the political situation in Austria at the time of the national election 2013. All agents are initialised with empirical data from existing surveys: the 2013 AUTNES for the voters and the CHES administered in 2014 for the parties. The former consists of the responses of 3266 participants whereas the latter includes expert opinions on the positions of the seven major Austrian political parties (SPÖ, ÖVP, FPÖ, Grüne, NEOS, BZÖ, and Team Stronach) at the time.

The model assumes discrete time steps, with one step equaling one week in real time. To be able to compare model results with real data from the Austrian national election 2017, we ran all simulations for 208 steps (4 years). Each step, the following processes are carried out in the same order:

1. Parties calculate their current vote share and how this changed in comparison to the previous step.
2. Voters have political discussions with other voters, which may result in changing their positions on one or more issues. They also adapt the importance the discussed issues have for themselves.
3. Voters are 'polled', i.e. they decide on which party they would currently vote for according to their strategy.
4. Parties decide to adapt their positions according to their strategy.
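As an illustration of this fixed weekly ordering, a minimal sketch of one simulation step is given below. This is our own rendering, not the authors' code; the dictionaries, the `discuss` callable and the per-agent `strategy`/`adapt` callables are assumptions introduced purely for illustration.

```python
import random

def simulation_step(parties, voters, discuss, rng=random.Random(0)):
    """One simulated week: the four processes in the fixed order listed above.
    `parties` and `voters` are plain dicts; `discuss` is any pairwise opinion
    update (cf. Sect. 2.2); all names here are illustrative only."""
    # 1. Parties calculate their current vote share and its change.
    for party in parties:
        share = sum(v["choice"] == party["name"] for v in voters) / len(voters)
        party["share_change"] = share - party.get("vote_share", 0.0)
        party["vote_share"] = share
    # 2. Voters hold political discussions that may shift their issue positions.
    for voter in voters:
        if voter["network"]:
            discuss(voter, rng.choice(voter["network"]))
    # 3. Voters are 'polled': each picks a party according to its decision strategy.
    for voter in voters:
        voter["choice"] = voter["strategy"](voter, parties)
    # 4. Parties adapt their positions according to their strategy.
    for party in parties:
        party["adapt"](party, voters)
```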
2.1 Party Strategies

Parties strive to increase their share of votes by positioning themselves strategically in the political space. To do so, they may apply different strategies to adapt their positions on policy issues. We implemented the four strategies outlined by [10]:

1. An Aggregator moves towards the average position of their current supporters in all dimensions. It thus adapts to the ideological stances of their supporters.
2. A Satisficer behaves like an Aggregator but stops moving once the aspired vote share is reached or surpassed and only starts moving again if the loss of votes passes a certain threshold.
3. A Hunter keeps moving in the same direction if they gained vote share with their last move, otherwise they turn around and choose their next direction with some variability. The version of this strategy implemented in the model restricts movement to the two most important issues of the party.
4. A Sticker does not change any of their positions and sticks with their party programme.
Each party is assigned one of these strategies at model initialisation. In the simulations reported here, the two major parties (SPÖ, ÖVP) use the ‘Aggregator’ strategy, the populist FPÖ applies the ‘Hunter’ strategy, and all other parties are ‘Stickers’. The party roles were assigned based on the following rationale: The large centre parties pursue median voter strategies and thus tend to aim for broad appeal trying to “aggregate” voters and build broad centrist electoral coalitions. Smaller parties are associated with a particular issue that works for them and maximize the support in certain voter segments. They tend to stick with the policies that work for them and match their brand image. The FPÖ is neither a centre party nor a small party. Thus, it can neither be content with a niche strategy nor with pandering to its supporters but keeps foraging for votes. In experiments with an earlier version of the model the Aggregator strategy could lead to the two major parties (SPÖ, centre-left, and ÖVP, centre-right) swapping ideologies in one or more dimensions. This is possible because the strategy purely searches for the centre position of the current supporters without constraints. We therefore adapted this strategy so that parties only change positions on their most important issues.
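A rough sketch of how the 'Aggregator' (restricted to the party's most important issues, as described above) and the 'Hunter' rules could be coded is shown below. This is our illustrative reading of the strategies, not the model's actual implementation; the step sizes are invented placeholders, and the Satisficer and Sticker are omitted for brevity.

```python
import random

def aggregator_move(party, supporters, step=0.1):
    """Aggregator: move towards the mean position of current supporters,
    but only on the party's most important issues (the restriction adopted
    in this model to avoid ideology swapping)."""
    for issue in party["important_issues"]:
        if supporters:
            target = sum(v["positions"][issue] for v in supporters) / len(supporters)
            party["positions"][issue] += step * (target - party["positions"][issue])

def hunter_move(party, gained_votes, rng=random.Random(0), step=0.1):
    """Hunter: keep moving the same way after a gain, otherwise turn around
    with some variability; movement is restricted to the two most important
    issues of the party."""
    for issue in party["important_issues"][:2]:
        direction = party["last_direction"].get(issue, rng.choice([-1.0, 1.0]))
        if not gained_votes:
            direction = -direction * rng.uniform(0.5, 1.5)  # turn around, with noise
        party["positions"][issue] += step * direction
        party["last_direction"][issue] = direction
```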
2.2 Opinion Formation

While parties may adapt their positions in the n-dimensional policy issue space according to their strategy (see previous section), voters in current agent-based models of party competition usually remain in place. It is common practice to assume that public opinion on policy issues follows a normal distribution [14] and does not change over time. [10] is a rare example of an ABM using empirical data—in this case, a survey of the Dutch voting population held before the 2006 parliamentary elections—to initialise voters' positions in the policy issue space, but even their voters do not change their opinions during the simulation. Our model is innovative in that it both uses empirical data to initialise the voter agents and implements social processes to allow voters to adapt their positions over the simulated time.

Change of opinion happens through political discussions with other voters. In the model version reported here we apply a modified multi-dimensional opinion dynamics approach [15], which stipulates mechanisms for voters to (a) select interaction partners and (b) adapt their position on the issue under discussion. While interaction partners are selected randomly from the total population, the two will only interact if their ideological distance falls under a certain threshold (bounded confidence model). We follow [15] in that this threshold is different for each voter, depending on their 'affective level' or emotional involvement in policy issues. To avoid random allocation of values to voters we decided to use their level of political interest to represent this attribute, which is available from the empirical data. We measure ideological distance as the Euclidean distance of voters' positions on the issue under discussion.

As the result of an interaction, voters may adapt their opinions. The mechanism proposed by [15] involves both interaction partners changing their opinions on all modelled dimensions. We find this assumption unrealistic. Instead, we assume that each discussion only involves one dimension (policy issue) and that any change therefore only applies to this issue, following [16]. There are two possible outcomes of an interaction:

• Compromise: If the two voters agree on a majority of the other issues, they will move towards each other's position on the discussed policy issue. The total distance moved grows with the voters' ideological distance but is never greater than a certain maximum value set via a model parameter.
• Repulsion: If instead the voters disagree on most of the other issues, they will move further apart from each other on the discussed dimension.
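A compact sketch of this single-issue interaction rule follows. It is illustrative only: the attribute names, the use of the smaller of the two voters' thresholds, and the way agreement on the other issues is counted are our simplifications, and the cap on movement is a placeholder for the respective model parameter.

```python
def discuss_issue(a, b, issue, max_move=0.5):
    """Bounded-confidence update of voters a and b on one issue.
    Each voter is a dict with a 'positions' list and a 'threshold' derived
    from its level of political interest."""
    gap = abs(a["positions"][issue] - b["positions"][issue])
    if gap > min(a["threshold"], b["threshold"]):
        return  # too far apart on the discussed issue: no interaction
    others = [i for i in range(len(a["positions"])) if i != issue]
    agree = sum(abs(a["positions"][i] - b["positions"][i]) <= a["threshold"]
                for i in others)
    move = min(max_move, gap / 2)                 # grows with distance, capped
    sign = 1 if agree > len(others) / 2 else -1   # compromise vs. repulsion
    lo, hi = (a, b) if a["positions"][issue] <= b["positions"][issue] else (b, a)
    lo["positions"][issue] += sign * move         # converge (or diverge) on this issue
    hi["positions"][issue] -= sign * move
```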
2.3 Voter Decision Strategies

One area that our model improves on is the incorporation of different decision strategies for voters regarding party choice. It is common practice in existing agent-based models of the complex system of voters, parties and their interactions to assume that (a) all voters use the same strategy and (b) this strategy is choosing the ideologically most proximate party, i.e. the party closest to them in all modelled dimensions. In the terminology of Lau et al. [17] this is called Classic Rational Choice. The authors propose and test a set of five types of strategies that are applied when reaching a decision about party choice.

Classic Rational Choice defines voters as actively searching for information on all issues and parties. Voters compare all parties and decide after careful consideration. Whereas rational choice decision-making starts at zero, Confirmatory decision-making is heavily influenced by voters' long-term relations to parties, such as their party identification. For example, if the election is run by individual candidates, such as presidential elections in many European countries, these voters only need to find candidates' party affiliation to decide which candidate they prefer. Fast and Frugal, by contrast, assumes that voters are primarily motivated by efficient decision-making. Voters do compare the positions of parties but restrict this effort to the most important issues. The heuristic-based fourth strategy is similar, but decisions can be taken based on various heuristics provided by numerous sources such as discussions with friends and neighbours—not only by a direct comparison of, for example, policy positions. Gut decision-making, finally, is strictly affective; voters do not search for any kind of information, at least not systematically.

We operationalized these strategies for our model as follows:

• Rational choice: A voter chooses the party closest to them on all modelled issues (Euclidean distance in seven dimensions).
• Confirmatory: A voter chooses the party they feel closest to (taken from the AUTNES 2013 data).
• Fast and frugal: A voter chooses the party closest to them on their most important issues (weighted Euclidean distance in two dimensions).
• Heuristic-based: A voter follows recommendations of people they trust and chooses the party most of their friends will vote for.
• Going with gut: A voter chooses the party they have the highest propensity to vote for (taken from the AUTNES 2013 data).

At model initialisation, each voter is assigned one of the strategies. For this we must solve the problem of how to fit voters to strategies. First experiments with random allocation according to specified proportions of strategy types were deemed unsatisfactory. While [17] report some correlations of demographic or political variables with strategy types (e.g. "rational choice is particularly high among women, young people and respondents with high levels of political interest"), these are relatively vague and not unambiguous. To attempt an improved allocation of strategies to voters we restricted the pool of AUTNES participants to the subset who voted for one of the parties represented in the model (1060 respondents). We then allocated the rational choice strategy to those who actually voted for their 'rational choice' (the party closest to them in all seven dimensions at model initialisation). The confirmatory strategy was allocated to voters who voted for the party they felt closest to, whereas fast and frugal was allocated to those voting for the party they deemed best able to solve their most important issue. The heuristics-based strategy (following recommendations of friends) was allocated to voters who are generally trusting in people, do discuss politics sometimes and have family and friends who are interested in politics, while the gut decision-making strategy was allocated to all voters with low political interest and knowledge.

As anticipated, this did not solve the problem, as only 31% of voters ended up with exactly one strategy. Another third had two strategies, 15% had three and 2% even had four possible strategies allocated to them, while about 18% of respondents could not be assigned at all via these categories. Nevertheless, we decided to utilize this—albeit slight—improvement over a completely random strategy allocation. All simulations reported here are run with the subset of 1060 voters, and strategy allocation at model initialisation employs a mixture of direct assignment (the one pre-determined strategy for a third of the voters) and random selection (pick one of the pre-determined strategies for about half of the voters and any one of the five strategies for the rest) under the constraint that the specified proportions of strategies (a model parameter) are met.
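The five rules might be operationalised roughly as follows. The attribute names ('closest_party', 'propensities', 'network', etc.) are our shorthand for the respective AUTNES-derived variables and model structures, and the sketch glosses over tie-breaking and missing data.

```python
import math

def distance(voter, party, issues, weights=None):
    """(Weighted) Euclidean distance between a voter and a party on the given issues."""
    w = weights or {i: 1.0 for i in issues}
    return math.sqrt(sum(w[i] * (voter["positions"][i] - party["positions"][i]) ** 2
                         for i in issues))

def rational_choice(voter, parties, all_issues):
    return min(parties, key=lambda p: distance(voter, p, all_issues))

def confirmatory(voter, parties):
    return next(p for p in parties if p["name"] == voter["closest_party"])

def fast_and_frugal(voter, parties):
    issues = voter["important_issues"]        # the voter's own top issues
    return min(parties, key=lambda p: distance(voter, p, issues, voter["issue_weights"]))

def heuristic_based(voter, parties):
    votes = [f["choice"] for f in voter["network"] if f.get("choice")]
    name = max(set(votes), key=votes.count) if votes else voter["closest_party"]
    return next(p for p in parties if p["name"] == name)

def going_with_gut(voter, parties):
    return max(parties, key=lambda p: voter["propensities"].get(p["name"], 0.0))
```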
2.4 External Influences

Given that voters and parties do not exist in a vacuum, only concerned with themselves or each other, but are influenced by events happening in the world around them, it is necessary to take extraneous influences into account. The events deemed most influential during the period 2013 to 2017 that we are covering with the simulation are the refugee crisis of 2015/16 and the leadership change in the ÖVP shortly before the election in 2017.

As the new leader emphasised the topic of immigration above all else, we represent this change in leadership by adapting the most important issues of the ÖVP accordingly at the correct time during the simulation. This has the effect that the ÖVP will then start moving on the 'immigration' issue in addition to the 'economy' and 'spend vs. taxes' issues. To also account for the sharp change in leadership style, with the new party chair reorienting the ÖVP, we introduced ideal positions for parties, defining where the party wants to head in the policy issue space. The 'Aggregator' strategy can then be adapted to pursue a path weighing its supporters' positions against the party's own ideological ideal positions, as suggested by [14]. The new ideal positions are taken from the 2019 CHES dataset.

To cover the effects of the refugee crisis on the political landscape we need to account for a change in issue salience in the public opinion over time. While some topics stay close to the heart of people (for Austria, e.g. unemployment), others gain and lose in importance in the public opinion. The media is involved in this process and may act as an amplifier or filter by applying their agenda-setting power [18]. In the absence of detailed media analysis data for the specified period in Austria we have chosen to use issue salience in the public opinion as available in the Eurobarometer series of surveys published by the European Commission as a proxy. The Eurobarometer contains two to three data sets per year for the time period in question. We are focussing on the answers to the question "What do you think are the two most important issues facing (OUR COUNTRY) at the moment?" for Austria. After matching the Eurobarometer categories to the seven issues represented in our model, we rescaled the data so that the sum of all issues equals 100%. Figure 1 shows the resulting time series. The sudden spike in the salience of the 'immigration' topic coinciding with the refugee crisis of 2015/16 is clearly visible.

The salience values for each issue, along with the respective dates converted to simulation time, are stored in a suitable data structure at model initialisation so that they are easily accessible during the simulation. The model keeps track of the currently 'valid' salience values for the seven issues and changes them at the pre-determined points in simulation time to the new values for the next period. To emulate the media's influence on voter opinion, these values are applied as probabilities to select the topic to talk about during voter interactions.
Fig. 1 Salience of the modelled seven issues in the Austrian public opinion over time. (Adapted from Eurobarometer survey data)
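A sketch of how such a piecewise-constant salience series (Fig. 1) can drive the choice of discussion topics is given below; the numbers are invented placeholders, not the actual Eurobarometer-derived values.

```python
import bisect
import random

# (simulation step, salience weights in %) -- illustrative values only
SALIENCE = [
    (0,   {"economy": 25, "welfare": 20, "budget": 10, "immigration": 10,
           "environment": 10, "society": 10, "law_and_order": 15}),
    (110, {"economy": 15, "welfare": 15, "budget": 5, "immigration": 40,
           "environment": 5, "society": 5, "law_and_order": 15}),
]

def current_salience(step):
    """Return the salience values valid at the given simulation step."""
    times = [t for t, _ in SALIENCE]
    return SALIENCE[max(bisect.bisect_right(times, step) - 1, 0)][1]

def pick_discussion_topic(step, rng=random.Random(0)):
    """Sample the issue to talk about, with probability proportional to salience."""
    issues, weights = zip(*current_salience(step).items())
    return rng.choices(issues, weights=weights, k=1)[0]
```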
3 Simulated Scenarios

We have undertaken experiments with our model to investigate the effect of different voter decision strategies. Specifically, we looked at the following scenarios: (a) all voters using rational choice, (b) all voters using fast and frugal, and (c) the electorate is divided into five groups, each using a different strategy. All simulation runs use the same model specification:

• 1060 voters, initialised from the AUTNES dataset;
• 7 parties, initialised from the CHES dataset;
• Party strategy assignation as follows: SPÖ and ÖVP use 'Aggregator', FPÖ uses 'Hunter', all other parties (Greens, BZÖ, NEOS, Team Stronach) use 'Sticker';
• Opinion formation process with set voter adaptation threshold (1.0), discussion frequency (1), maximum distance per position change (0.5) and maximum salience change (3);
• A time step represents a one-week period; the simulation thus runs for 208 steps representing 4 years;
• 20 runs per scenario, with the same set of 20 different random number seeds.

The following figures show time series of the parties' vote shares taken from typical runs. As can be clearly seen, the type and mix of voting decision strategies present in the population of voters have a huge impact on the outcome of the simulated elections. If all voters apply the 'Rational Choice' strategy, as is usual in other models, the SPÖ wins a comfortable majority of the votes, while the populist FPÖ comes in as the second largest party (see Fig. 2). The conservative ÖVP, however, is relegated to the small parties instead of being one of the two major ones. The change in leadership shortly before the 2017 elections (at simulation time step 189) does nothing to prevent this outcome; on the contrary, it results in losing the party some additional votes.
Fig. 2 Evolution of vote shares over time with all voters using ‘Rational Choice’
Surprisingly, the sudden rise in salience of the 'immigration' topic does not seem to have any influence on the vote shares. Single runs differ slightly in the exact shape of the time series and the percentages parties achieve at the end, but the overall results are the same and diverge greatly from the actual election results in 2017. This indicates that the assumption that all voters can correctly be modelled as "being rational" does not hold, at least not for Austrian voters.

Experiments with 'Fast and frugal' as the single voter strategy show a very different outcome. This strategy lets voters concentrate on their two most important issues and weigh their distance to the parties' positions with the importance they give these issues. As can be expected, the change in issue salience in the public opinion—and consequently, in individual voters' assessments—has a dramatic effect on the vote shares of the different parties. While in more than half of the runs the ÖVP wins an absolute majority (see an example in Fig. 3, left), in a few cases (two runs) the FPÖ happens to be the lucky winner, while in the third category (six runs) ÖVP and FPÖ battle it out between them (see Fig. 3, right). All other parties are relegated to inconsequential participants in the political arena.

Fig. 3 Evolution of vote shares with all voters using the 'Fast and frugal' strategy

In a last scenario, we applied a mix of strategies: 18.3% rational choice, 29.8% confirmatory, 38.5% fast and frugal, 4.9% heuristics-based, and 8.5% going with your gut. The proportions have been derived from our analysis of the AUTNES data (see Sect. 2.3). In this scenario, the SPÖ consistently comes up as the second largest party, losing either to the ÖVP or the populist FPÖ. The sudden rise in salience of the 'immigration' issue (starting about half way through a simulation) is clearly visible in the rise of the vote shares of the party managing to claim the 'sweet spot' in the voters' opinions for themselves. The different runs can be categorized into three different cases: (i) the ÖVP rises together with the 'immigration' issue and beats the FPÖ, who cannot maintain its impetus (majority of runs, see an example in Fig. 4, left); (ii) the reverse case, in which FPÖ and ÖVP swap roles (one run); (iii) ÖVP and FPÖ take turns in profiteering from the 'immigration wave' and battle for the top spot (five runs).

Fig. 4 Evolution of vote shares with a mix of voter decision strategies

The example shown in Fig. 4 (right) is particularly interesting in that it manages to qualitatively reproduce the trends in opinion polls between the 2013 and 2017 elections,2 where after a long period of a stable lead for the FPÖ the ÖVP sees a sudden gain (due to the change in leadership), which secures them the election win. These results demonstrate that the empirically based mix of strategies is a necessary but not sufficient requirement to obtain results close to the observed historical data with our model. Our next steps will be to undertake further investigations of the conditions leading to "successful" runs.
2 https://en.wikipedia.org/wiki/Opinion_polling_for_the_2017_Austrian_legislative_election.
4 Discussion and Conclusion

As our simulations show, the type and mix of voting decision strategies present in the population of voters have a huge impact on the outcome of the simulated elections. The mix of strategies understandably leads to the most realistic outcomes. While none of the experiments exactly replicate the election results of 2017, this is to be expected from a model that—even though based on comparatively rich empirical data—has to make assumptions where data and behavioural theories leave gaps. Coming qualitatively close to real-world opinion polls is therefore quite an achievement. Simulating voting behaviour accurately and in the context of a rapidly changing political environment is extraordinarily difficult and has thus rarely been attempted. The literature bridging the gap between simulations and empirical election research is exceedingly thin, as there are numerous hurdles to overcome—both in terms of the subject matter to be simulated and the theoretical as well as epistemological assumptions underlying the different fields involved.

Primarily, we want to test the feasibility of applying agent-based modelling to the research of voting behaviour, especially in the context of demand for, and supply of, populism. In doing so, we hope to go beyond the traditional tools of political science and thus generate insights beyond those of voting research as practised so far. Why is this useful, given that empirical voting research can generate relatively accurate predictive and explanatory models of voting behaviour? There are two major reasons: First, we hope to be able to shed light on what-if scenarios, which is otherwise notoriously difficult in normal social science research because it is not equipped to handle counterfactuals, as these would be considered speculative. Second, the PaCE project aims at developing counter-strategies, which may require (the simulation of) purposefully changing certain input factors to see their effect. This is something we hope the computer simulation can accomplish better than the traditional analysis of causal relations.
References

1. Kritzinger, S., Zeglovits, E., Aichholzer, J., Glantschnigg, C., Glinitzer, K., Johann, D., Thomas, K., Wagner, M.: AUTNES pre- and post panel study 2013. GESIS Data Archive, Cologne. ZA5859 Data file Version 2.0.1 (2017). https://doi.org/10.4232/1.12724
2. Polk, J., Rovny, J., Bakker, R., Edwards, E., Hooghe, L., Jolly, S., Koedam, J., Kostelka, F., Marks, G., Schumacher, G., Steenbergen, M., Vachudova, M., Zilovic, M.: Explaining the salience of anti-elitism and reducing political corruption for political parties in Europe with the 2014 Chapel Hill Expert Survey data. Res. Politics 4(1), 1–9 (2017)
3. Johnson, P.: Simulation modeling in political science. Am. Behav. Sci. 42(10), 1509–1530 (1999)
4. Kollman, K., Page, S.: Computational methods and models of politics. In: Tesfatsion, L., Judd, K. (eds.) Handbook of Computational Economics, vol. 2, pp. 1433–1463. Elsevier/North Holland, Amsterdam (2005)
5. Downs, A.: An Economic Theory of Democracy. Harper Collins, New York (1957)
6. Kollman, K., Miller, J., Page, S.: Adaptive parties in spatial elections. Am. Polit. Sci. Rev. 86(4), 929–937 (1992)
7. Laver, M.: Policy and the dynamics of political competition. Am. Polit. Sci. Rev. 99(2), 263–281 (2005)
8. Laver, M., Schilperoord, M.: Spatial models of political competition with endogenous political parties. Philos. Trans. R. Soc. B: Biol. Sci. 362(1485), 1711–1721 (2007)
9. Muis, J.: Simulating political stability and change in the Netherlands (1998–2002): an agent-based model of party competition with media effects empirically tested. J. Artif. Soc. Soc. Simul. 13(2), 4 (2010)
10. Muis, J., Scholte, M.: How to find the 'winning formula'? Conducting simulation experiments to grasp the tactical moves and fortunes of populist radical right parties. Acta Politica 48(1), 22–46 (2013)
11. Huckfeldt, R., Sprague, J.: Discussant effects on vote choice: intimacy, structure, and interdependence. J. Politics 53(1), 122–158 (1991)
12. McClurg, S.: Social networks and political participation: the role of social interaction in explaining political participation. Polit. Res. Q. 56(4), 449–464 (2003)
13. Lake, R., Huckfeldt, R.: Social capital, social networks, and political participation. Polit. Psychol. 19(3), 567–584 (1998)
14. Laver, M., Sergenti, E.: Party Competition: an Agent-Based Model. Princeton University Press, Princeton (2012)
15. Schweighofer, S., Garcia, D., Schweitzer, F.: An agent-based model of multi-dimensional opinion dynamics and opinion alignment. Chaos Interdisc. J. Nonlinear Sci. 30(9), 093139 (2020)
16. Baldassarri, D., Bearman, P.: Dynamics of political polarization. Am. Sociol. Rev. 72(5), 784–811 (2007)
17. Lau, R., Kleinberg, M., Ditonto, T.: Measuring voter decision strategies in political behavior and public opinion research. Public Opin. Q. 82(S1), 911–936 (2018)
18. McCombs, M.: Setting the Agenda: the Mass Media and Public Opinion. Polity Press, Cambridge (2004)
Shrinking Housing’s Size: Using Agent-Based Modelling to Explore Measures for a Reduction of Floor Area Per Capita Anna Pagani , Francesco Ballestrazzi, and Claudia R. Binder
Abstract To shrink the environmental footprint of housing, reducing dwellings' size is key. There is agreement among scholars on the measures that should be taken to achieve this goal; however, their effectiveness and effects have not been sufficiently investigated. In this paper, we explore and compare the outcomes of measures for reducing housing size. We use ReMoTe-S, an empirical agent-based model that simulates the residential mobility of Swiss tenants. Results show that an increase in floor area per capita is predominantly the consequence of a discrepancy between housing demand and supply. On the demand side, findings indicate that enabling the formation of multigenerational households is the most successful measure, while helping relocating tenants to more easily find groups to join is the least effective. On the supply side, we observe that increasing the diversity of dwellings' sizes leads to an important reduction in sqm/tenant where rules restrict the minimum number of occupants per dwelling the most. With regard to these rules, findings display a moderate reduction of individual space consumption when preventing households whose children have moved out from under-occupying their dwelling. We conclude that efforts from both the housing demand and supply side are needed to achieve a reduction in housing size.

Keywords Housing size · Switzerland · Empirical agent-based modelling · Sustainability
1 Introduction

House size is the largest determinant of domestic energy consumption. A greater floor space entails a larger need of energy for heating and cooling, ventilation, and lighting, and allows for the operation of more and potentially bigger appliances [1]. As dwelling size is on the rise, the number of members composing a household is decreasing globally, which results in an increase in floor area per capita [2].

A. Pagani (B) · F. Ballestrazzi · C. R. Binder
École Polytechnique Fédérale de Lausanne EPFL, 1015 Lausanne, Switzerland
e-mail: [email protected]
In response to this trend, recent studies have identified and agreed upon measures and recommendations to promote ‘sufficiency’ in housing via financing strategies, public policies, and the engagement with the many different housing stakeholders (planners and architects, housing providers, residents, etc.) [1–4]. Although first examples of successful implementation of the proposed approaches can be found in the literature (e.g. [1]), their effectiveness and effects over time remain mostly unexplored. By supplementing traditional scientific methods, dynamic models can play a key role in this context [5]; in particular, agent-based models (ABM) have largely been used to test hypotheses and undertake experiments with the goal to explore emergent patterns at the macro level (e.g. average space consumption) due to interactions at the micro level (e.g. between households’ preferences, i.e. demand, and the dwellings available to them, i.e. supply; [6]). This paper investigates which measures are the most successful in reducing floor space per capita and thereby mitigating one of the major drivers of energy consumption in housing. For this purpose, we simulate, explore and compare several scenarios using ReMoTe-S, an agent-based model of the residential mobility of Swiss tenants. Based on empirical research on the tenants of three Swiss multifamily housing owners, the model allows us to account for the reciprocal effects between households’ preferences and dwellings. The remainder of this paper is structured as follows. In the next two subsections, we present previous research findings on obstacles to shrinking housing size in the context of Swiss rental housing (Sect. 1.1), based on which we outline the set of measures investigated in this study (Sect. 1.2). Section 2 illustrates the ABM and the construction of the simulated scenarios, the results of which are shown in Sect. 3. We conclude with a discussion of the results, recommendations for the three housing providers and suggestions for future research (Sect. 4).
1.1 Previous Research in Switzerland

This paper builds upon the results of a survey of tenants renting from two housing cooperatives and one insurance company and asset manager in Switzerland (N = 968), the analysis of which led to the identification of several obstacles to a reduction in housing size [3]. Relevant to mention is, firstly, the preference for larger dwellings, which predominantly concerned the households renting from the private asset manager with sufficient financial resources at their disposal. In fact, contrary to the latter, housing cooperatives control the floor space per capita via occupancy rules, which regulate the number of occupants per bedroom and whose compliance is regularly checked.

Secondly, research has shown that another major obstacle to downsizing is the difficulty of finding suitable dwellings to relocate to. This issue encompasses several interconnected structural and logistical barriers [4], which include but are not limited to the very low vacancy rate in Switzerland (2.7% [7]) and the inadequate supply of housing of different sizes. In particular, the current housing stock does not efficiently accommodate the growing number of one- and two-person households [1]—resulting from, among other reasons, the reduced availability of kin to cohabit with [2, 8].

Lastly, a reduction of housing size was found to be hindered by other non-monetary costs associated with moving, among which the fear of disrupting bonds when relocating (especially if the lack of supply pushes households to move to another neighbourhood, or when attachment and memories are associated with the years spent in a home [3, 4]). When no occupancy rules are in place, these costs can lead to under-occupancy, e.g. for those households whose children have left the nest.

A first corroboration of these findings was made using ReMoTe-S [9], an empirical agent-based model of Swiss tenants' residential mobility. Based on the same dataset as [3], the ABM was used to explore the reciprocal effects between housing supply and demand via simulation experiments targeting relevant aspects of housing sustainability, among which housing size. In particular, one of the experiments aimed to investigate the effects of changes in the size of dwellings supplied in the model on individual space consumption. Two scenarios were simulated, where the average number of rooms per dwelling was set to 3.5 (medium-to-large dwellings) and 1.5 (small dwellings). It emerged that a supply that prioritizes medium-to-large size dwellings in combination with less strict occupancy rules and the relative affluence of the tenant agents can result in (i) a greater average floor area per capita and (ii) a number of one-person households comparable to a scenario offering small dwellings only. These preliminary results revealed the potential of using ReMoTe-S to study strategies to shrink housing's size.
1.2 Measures to Reduce Floor Area Per Capita

In light of the findings illustrated in Sect. 1.1, this paper uses ReMoTe-S to simulate and compare a set of measures to reduce floor area per capita:

1. Regarding the dwellings (i.e. supply): to provide a more diversified offer of housing, able to accommodate different household sizes and potentially allowing tenants to relocate in the same building. This measure can be explored by varying the standard deviation of the number of rooms per dwelling.
2. Regarding the occupants (i.e. demand): to prevent tenants from forming one-person households as a consequence of an unsuccessful search for people to cohabit with. This strategy can be explored by facilitating the formation of larger households via (i) multigenerational housing and (ii) the provision of more information to tenants about available dwellings.
3. Regarding occupancy rules: to prevent the under-occupancy of shrinking households. This measure can be explored by applying stricter occupancy rules, i.e. forcing households to relocate once all the children have left the nest.
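Each of these measures ultimately corresponds to varying a single model parameter. As a rough sketch (the parameter names are our own shorthand; the values are those of the experiments summarised later in Table 1):

```python
# Baseline parameters and per-scenario overrides (cf. Table 1 in Sect. 2.2).
BASELINE = {"sd_rooms": 1.0, "compatible_age": 10, "hh_visited": 50, "p_relocate": 0.127}

SCENARIOS = {
    "diversified_offer": {"sd_rooms": 6.0},       # more varied dwelling sizes (supply)
    "multigenerational": {"compatible_age": 30},  # wider age range for joinable groups (demand)
    "larger_network":    {"hh_visited": 950},     # more households visited while searching (demand)
    "shrinking_hh":      {"p_relocate": 1.0},     # relocate once all children have left (rules)
}

def scenario_parameters(name):
    """Baseline settings overridden by the scenario-specific change."""
    return {**BASELINE, **SCENARIOS.get(name, {})}
```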
Fig. 1 Example of a tenant agent, the household it belongs to, the dwelling it inhabits and the building where the latter is located. The household’s attribute ‘desired functions’ represents the preferences of a household for a dwelling, which is attributed a set of ‘functions’. Num: number
2 Methods

2.1 Agent-Based Model

ReMoTe-S is an agent-based model of the residential mobility of Swiss tenants. Its goal is to provide a holistic understanding of the reciprocal influence between households and dwellings, and thereby inform a sustainable management of the housing stock it simulates. Based on explicit assumptions of a survey of residents renting from three housing providers, the model lies in a spectrum between a toy and a 'complicated' model [6, 10]. A detailed description of the model can be found in [9] (under review).1

Agents and their dynamics. The model comprises four classes of agents characterised by several state variables (Fig. 1). Each tenant belongs to a household, which lives in a dwelling contained in a building. Agent dynamics are simulated using global parameters and according to the passage of time: for instance, while progressing in age, agents are born, die, and in-migrate in the model following rates and rules that depend on the household type; households' residential satisfaction decreases by 0.1% with every additional time-step spent in the dwelling; a tenant's (and therefore household's) salary increases yearly and can be disrupted by a job loss; dwellings and buildings are built, demolished and renovated depending on fixed rates.
1 For more details, the ODD and code can be retrieved at the following link: https://www.comses.net/codebase-release/45117bff-8627-4ab9-a4e4-bb26e79a662e/.
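A minimal sketch of the four agent classes and a handful of the attributes mentioned above is given below; the field names are our shorthand, and the full attribute lists are documented in the ODD referenced in the footnote.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Building:
    id: int
    year_built: int

@dataclass
class Dwelling:
    id: int
    building: Building
    rooms: float                  # e.g. 3.5 rooms; size in sqm is derived from this
    rent: float
    functions: Set[int] = field(default_factory=set)   # e.g. {4, 8}: status symbol, shelter
    occupied: bool = False

@dataclass
class Tenant:
    age: int
    salary: float

@dataclass
class Household:
    members: List[Tenant]
    desired_functions: Set[int]   # preferences matched against a dwelling's functions
    is_couple: bool = False
    satisfaction: float = 1.0     # decays by 0.1% per time step spent in the dwelling
    dwelling: Optional[Dwelling] = None
```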
Fig. 2 Overview of the process, including initialisation (t0) and two submodels: decide to move (t1) and select (t2)
Process overview. The relocation process is simulated on a step-wise monthly basis, where the initialisation of the model (t0) is followed by the first submodel 'decide to move' (t1) and the second submodel 'select' (t2; Fig. 2). At t0, the model is populated with heterogeneous dwelling and household agents; the distribution of their attributes is based on the survey dataset, official statistics, or, when necessary, set stochastically (see footnote 1). The simulation starts after all households are matched with the dwellings available (for a total of 1000).

The progress of households in their family, work, and/or residential life course career is intertwined with endogenous and exogenous triggers, which generate a desire to move at a random time step following pre-defined frequencies (e.g. children leaving, leaving the flatshare, divorce). Triggered agents are synchronously updated as movers (t1), and are sequentially activated to search for either vacant dwellings or joinable groups (t2). As agents are updated asynchronously, the dwellings that have been occupied first are not available for the next searching agent (i.e. 'first come, first served').

In most cases, the household engages in the search for a new dwelling for a duration of max. 12 months, after which it is forced to out-migrate. We assume that at every time step the agent 'visits' and filters a number of randomly-selected vacancies according to a set of 'conditions' (e.g. affordability), puts the filtered dwellings in a list, and ranks them according to the satisfaction they potentially generate, which is calculated based on the match between a household's set of desired functions (i.e. preferences) and the set of functions of the selected dwelling (e.g. 4: status symbol; 8: shelter; Fig. 1). Eventually, the household moves to the first dwelling on the list. For cooperative members and following specific triggers (i.e. a renovation), the owner facilitates the households' move by directly assigning them a vacancy that meets the aforementioned conditions.

The search for a joinable group occurs in case of a divorce, when leaving a flatshare, or when leaving the parental home. When looking into a pre-defined number of randomly-selected households, a moving tenant applies the following selection criteria:
1. Singleness: the household is not in a couple;
2. Compatible age: one of its members (randomly chosen) is a maximum of 10 years older/younger than the searching tenant;
3. Occupancy: there must be room for a new tenant (only for flatshares) according to occupancy rules. More specifically, tenants renting from the asset manager can occupy dwellings with a maximum of two rooms more than the number of household members; for cooperatives, this rule is restricted to a maximum of one room.
When a one-person household is found, the newly-formed couple looks for a dwelling to move to using the same criteria described above; when an existing flatshare is found, the searching tenant is directly integrated in their dwelling. If the tenant has not found a compatible household to join, s/he will then create a one-person household and search for a vacancy for one time step.

Calibration, verification and validation. These steps, illustrated in [9], consisted in (i) adjusting the number of households that in-migrate in the model (i.e. the parameter 'immigration rate') to reproduce the very low vacancy rate that characterizes the Swiss housing rental market, (ii) checking whether the agents behaved as expected by following a household over its life course, and (iii) checking the plausibility of the model's assumptions and outputs across all modelling stages [11].
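Using the illustrative classes sketched in Sect. 2.1, the two searches described above (ranking visited vacancies and screening joinable households) might look roughly as follows. The affordability condition, the simple function-overlap score and the placeholder occupancy check are our assumptions, not the model's actual rules.

```python
import random

def household_income(household):
    return sum(t.salary for t in household.members)

def search_dwelling(household, vacancies, n_visited=10, rng=random.Random(0)):
    """Visit a random subset of vacancies, filter by 'conditions' (here: a crude
    affordability check), rank by function match and return the best dwelling."""
    visited = rng.sample(vacancies, min(n_visited, len(vacancies)))
    affordable = [d for d in visited if d.rent <= 0.3 * household_income(household)]
    if not affordable:
        return None
    return max(affordable, key=lambda d: len(d.functions & household.desired_functions))

def occupancy_allows_one_more(household):
    """Placeholder for the owner-specific occupancy check (flatshares only);
    the exact rule depends on the room margins quoted above."""
    return True

def find_group_to_join(tenant, households, n_visited=20, rng=random.Random(0)):
    """Apply the three selection criteria to a random subset of households."""
    for hh in rng.sample(households, min(n_visited, len(households))):
        if hh.is_couple:                                        # 1. singleness
            continue
        if abs(rng.choice(hh.members).age - tenant.age) > 10:   # 2. compatible age
            continue
        if len(hh.members) > 1 and not occupancy_allows_one_more(hh):
            continue                                            # 3. occupancy
        return hh
    return None   # no match: the tenant forms a one-person household instead
```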
2.2 Simulations

To explore the measures presented in Sect. 1.2, we simulated four scenarios under the assumption of unchanged agents' behaviours and demographic trends. The scenarios were selected based on the results of a one-factor-at-a-time sensitivity analysis (OFAT) over four parameters of interest (Table 1), whose performance was assessed by looking at the metric 'sqm/tenant', i.e. the average floor space per capita. As the two housing cooperatives showed similar results, we selected only one of them for comparison with the asset manager. To account for the stochastic variation of parameter values and to estimate the model's long-term behaviours we used averages of 50 simulation runs over 30 years; the effects of the model 'warm up' were filtered out, corresponding to 23 time-steps.

Table 1 Scenarios simulated (parameter varied per experiment). HH: household; P: probability; '–': baseline

Experiment         | SD #rooms | Compatible age | HH visited | P(relocate)
Baseline           | 1.0       | ±10 years      | 50         | 0.127
Diversified offer  | 6.0       | –              | –          | –
Multigenerational  | –         | ±30 years      | –          | –
Larger network     | –         | –              | 950        | –
Shrinking HH       | –         | –              | –          | 1.0

The scenario 'diversified offer' consisted in providing a similar offer of small to large dwellings. In the model, the size of a dwelling (measured in sqm) is computed based on the number of rooms, whose distribution is controlled by a parameter set according to survey results (min = 1, max = 10, M = 3.5, SD = 1.0). To vary the offer of dwelling sizes, we changed the standard deviation of the number of rooms per dwelling.

The scenarios 'multigenerational' and 'larger network' were designed to facilitate tenants' search for joinable households by increasing (i) the compatible difference in age between members of a group and (ii) the number of groups a tenant 'visits' at every time-step during the search. Finally, the scenario 'shrinking household' was designed to trigger the relocation of shrinking households when the last child moves out (i.e. turns 19 y/o). This was done by varying the frequency (i.e. probability) of the trigger 'children leaving'.
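The outcome metric used in the following section could be computed along these lines (a sketch; the per-step records of total occupied floor area and tenant counts are an assumed output format, not the model's actual one):

```python
WARM_UP_STEPS = 23   # the first 23 monthly steps are discarded as model warm-up

def sqm_per_tenant(run):
    """Average floor space per capita over one run, warm-up filtered out.
    `run` is a list of (total_occupied_sqm, number_of_tenants) per time step."""
    steps = run[WARM_UP_STEPS:]
    return sum(sqm / tenants for sqm, tenants in steps) / len(steps)

def scenario_average(runs):
    """Average the metric over all replicate runs (50 in the paper) of a scenario."""
    return sum(sqm_per_tenant(r) for r in runs) / len(runs)
```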
3 Results

Figure 3 shows the results of the OFAT over the four parameters of Table 1. Overall, we observe that all measures lead to a decrease in floor space per capita, except for 'larger network', where an increase in the number of households a searching tenant visits does not show any effect on how well occupied the dwellings are.

The measure 'multigenerational' yields the largest effect. Compared to the baseline (M = 38.8, SD = 1.20), when the compatible difference in age for couples and flatshares is increased to 30 y/o (i.e. when a tenant of 50 can join another of 20 or 80 y/o), we observe a reduction of more than 4 sqm/tenant on average (M = 34.5, SD = 1.23). The second largest effect is for the strategy 'diversified offer'. Increasing the standard deviation of the number of rooms per dwelling exhibits a sharp decline in the average floor area per capita in the sample up to a value of 3 (−2.40 sqm/tenant; M = 36.3, SD = 1.22), and a light decrease up to a value of 6 (M = 36.1, SD = 1.30). Lastly, increasing the probability of a household to relocate after the last child has moved out (i.e. scenario 'shrinking household') shows a moderate effect on the reduction of individual space consumption. If all households were to relocate after the trigger (P = 1.0), the latter would decrease by about 1.8 sqm/tenant (M = 37.0, SD = 0.90).

Fig. 3 Sensitivity analysis of the floor space per capita to the variation of four parameters. Average over 50 runs and 337 time-steps

Figure 4 allows us to look closer at extreme parameter values for each scenario and compare them in terms of sqm/tenant and the number of one-person households in the model. Concerning the cooperative, we observe that enabling the formation of multigenerational groups (i.e. compatible age = ±30 years) and increasing the diversity of dwelling sizes (i.e. SD = 6; Table 1) lead to the strongest decrease in sqm/tenant. In other words, a better adaptation of households' size to the offer of dwellings simulated in the model (i.e. via an increase of larger and a consequent reduction of smaller households) yields the same effect as a better adaptation of the offer to the size of households initialized and in-migrating in the model (i.e. via an increase in the SD of the number of rooms per dwelling).

This is not the case for the asset manager, for which the two measures show different outcomes. More specifically, the multigenerational scenario exhibits the strongest decrease in sqm/tenant, which results in a space consumption close to the baseline scenario of the cooperative. This finding is of interest considering that the asset manager does not apply strict occupancy rules, and thereby displays the most space-consuming households. The adjustment of supply to demand, instead, has a smaller effect than that observed for the cooperative owner. On the one hand, a greater diversity in housing size brings about a reduction of one-person households—which were predominantly under-occupying 3-room apartments; on the other hand, a larger SD implies a greater number of large dwellings, which, as per occupancy rules, are most often less efficiently occupied than the cooperative ones (see Sect. 2.1).

For both types of housing owners, the scenario 'shrinking household' lies in between the strongest and lightest effects on floor space per capita and contributes to a better fit between supply and demand, bringing about a decrease, although small, in the average number of one-person households. Lastly, and as previously observed, a larger network does not show any relevant effect on individual space consumption. On the contrary, by creating more competition between searching tenants, it leads to even greater difficulty in finding compatible people to cohabit with.

Fig. 4 Floor space per capita (left) and number of one-person households (right) per housing owner for the baseline and other four scenarios. Average over 50 runs and 337 time-steps
4 Discussion and Conclusion

The goal of this paper was to explore and compare measures that target a reduction of floor space per capita and thereby contribute to the goal of mitigating one of the major drivers of energy consumption in housing, i.e. housing size. For this purpose, we used ReMoTe-S, an empirical agent-based model that simulates the reciprocal influence between households and dwellings in the framework of residential mobility. The advantage of the model lies in its parametrisation, which is based on real-world data on the tenants of two Swiss housing cooperatives and a private asset manager, and thus accounts for the context-specific factors characterizing the three owners. Below, we discuss the findings presented in this paper and their limitations and conclude by outlining recommendations for the three Swiss housing providers as well as future avenues of research.
4.1 Results in Perspective

The simulations presented in this paper targeted the need to overcome several obstacles to a reduction of housing size, which the baseline scenario accounted for. In particular:

1. The preference for large housing: via occupancy rules allowing the households renting from the asset manager to occupy dwellings with two rooms more than the number of group members;
2. The inadequate offer of dwelling sizes: via a distribution that privileges 3- and 4-room dwellings over smaller or larger ones;
3. The large number of one- and two-person households: via rules that restrict groups' compatibility (i.e. age difference) and the ease of finding them (i.e. number of households visited);
4. The reluctance of households to relocate: via a low probability to move after the household shrinks.
The simulation of measures to overcome these barriers demonstrated that an increase in floor area per capita is predominantly the consequence of a discrepancy between demand and supply. More specifically, a more diverse offer of dwelling sizes can lead to an important reduction in floor area per capita when occupancy rules control the minimum amount of household members in a dwelling. In fact, and conversely, the lack of a housing supply able to accommodate small households is forcing cooperatives to ‘adapt’ (i.e. ease) their occupancy rules accordingly (see ABZ in Zurich2 ). However, when occupancy rules allow for more freedom (i.e. noncooperative housing), the effect of a more diversified offer of dwelling sizes on individual space consumption is not as relevant as expected. In fact, if the housing 2
2 https://www.abz.ch/erleben/belegungsvorschriften/. Accessed 27 Apr 2021.
stock were to offer a more equally distributed size of dwellings without enforcing occupancy rules, housing would keep being under-occupied. From better accommodating smaller households, the interest therefore shifts to increasing their size. For the asset manager, enabling the formation of multigenerational groups results in a reduction of floor space per capita comparable to that obtained by applying occupancy rules. The many advantages of co-living, micro-living and shared living—i.e. larger households, sharing spaces—in reducing housing size while meeting households' needs have been largely discussed in the literature (see e.g. [12, 13]) and explored in practice (see e.g. the concept of 'cluster' implemented by the cooperative CODHA3). On the other hand, helping tenants to find groups to join more easily, while leaving the rules for forming groups unchanged, showed no effect on floor space per capita. This finding reflects the obstacles posed by the very low vacancy rate in Switzerland, which can be overcome only via a more efficient match between the housing stock and its occupants. Lastly, our findings have shown that the relocation of shrinking households to dwellings of a more appropriate size is relevant for reducing floor area per capita, which is of importance considering that the share of people aged 65 years or more living in under-occupied dwellings in Switzerland was above 60% in 2018 [14]. This measure aligns with the policy already adopted by certain cooperatives (see footnote 2), which, however, is not always strictly applied (e.g. due to a lack of smaller dwellings to move to, or to prevent the loss of one's established social environment when alternatives in the surroundings are not available [1]).
4.2 Limitations

Before providing recommendations for the housing providers simulated in this paper, relevant limitations have to be acknowledged. Firstly, the model rests on assumptions derived from a survey of tenants renting from three Swiss housing owners; our findings are context dependent and do not aim for generalisation [15]. Secondly, and for the same reason, the ABM only accounts for the determinants of residential mobility investigated in the survey and thereby excludes, for instance, the rental market dynamics (e.g. rent evolution), which also encompass key obstacles to downsizing. Thirdly, due to the setting of the occupancy rules—which are more flexible than theorized (see footnote 2)—the average floor space per capita in the model (about 39 sqm/tenant) does not accurately depict that of the tenants' survey (46 sqm/tenant) but is close to Swiss statistics (41 sqm/tenant [16]). Nevertheless, considering that the simulation experiments were aimed at comparing differences between scenarios of space occupancy within the model, this discrepancy does not affect our interpretation of the results. Fourthly, it is relevant to mention that ReMoTe-S simulates the preferences for housing at the household scale; at the scale of a relocating tenant agent, criteria for the selection of a
3 https://www.codha.ch/fr/soiree-cluster-12-04-16. Accessed 27 Apr 2021.
dwelling are reduced to compatibility with age, type, and occupancy rules, the latter of which indirectly accounts for space preferences. Lastly, it should be considered that the simulated measures might affect a wider number of parameters than what is discussed in this paper. For instance, varying the age for group compatibility also impacts the creation of couples, which might increase in number and bring about other unexplored effects; also, increasing the standard deviation of the number of rooms per dwelling might generate a greater vacancy rate, considering that the model offers the same number of dwellings but with different sizes, and not an increase in the number of smaller and larger dwellings.
4.3 Recommendations and Future Research

Housing size may continue to pose one of the greatest environmental challenges of the twenty-first century [8]. The increasing size of dwellings, exacerbated by a concomitant decrease in household size, underlines the urgency to envision strategies that improve the match between housing supply and demand, the success of which entails societal benefits beyond reducing energy consumption [4]. Our results have shown that, above all, occupancy rules are a key tool to reduce floor area per capita, as they allow for the adaptation of the demand to any type of supply. As already advocated in previous work [3, 9], these rules should be extended to dwellings belonging to providers other than the cooperatives. For occupancy standards to be implemented successfully, however, the non-monetary costs of moving should be mitigated, e.g. by assisting downsizing households in finding a new dwelling (e.g. prioritising moves to smaller dwellings, providing alternatives in the same building [3]). Furthermore, these rules may be inapplicable when the offer of dwellings is incompatible with the structure of the demand (i.e. lack of small dwellings for smaller households). Therefore, as our findings indicate, dwellings of different sizes should be provided for this purpose. On the other hand, to avoid reinforcing the formation of one-person households, investing in a culture of sharing is crucial. More specifically, although a reduction in individual space consumption requires efforts from the demand side, the supply of dwellings can play a key role in supporting them. As shown by cooperative and other communal housing projects, obstacles to shared living can be mitigated by the supply of residential buildings that simultaneously reduce personal space, preserve occupants' privacy, and provide shared rooms and facilities (e.g. music rooms, storage space, guest rooms; see e.g. footnotes 2 and 3). In summary, 'sufficiency' [17] can only be achieved if accompanied by the provision of adequate housing, meaning dwellings that fulfil households' needs and preferences. In this regard, future research should explore combinations of the investigated measures to include efforts from both the demand and the supply side (e.g. diversified offer + multigenerational housing). While doing so, it should especially look
at other key indicators to evaluate housing’s sustainability: households’ satisfaction; months waited before relocation; vacancy rate; etc.
References

1. Lorek, S., Spangenberg, J.H.: Energy sufficiency through social innovation in housing. Energy Policy 126, 287–294 (2019). https://doi.org/10.1016/j.enpol.2018.11.026
2. Ellsworth-Krebs, K.: Implications of declining household sizes and expectations of home comfort for domestic energy demand. Nat. Energy 5, 20–25 (2020). https://doi.org/10.1038/s41560-019-0512-1
3. Karlen, C., Pagani, A., Binder, C.R.: Obstacles and opportunities for reducing dwelling size to shrink the environmental footprint of housing: tenants' residential preferences and housing choice. J. Hous. Built Environ. (2021). https://doi.org/10.1007/s10901-021-09884-3
4. Huebner, G.M., Shipworth, D.: All about size?—The potential of downsizing in reducing energy demand. Appl. Energy 186, 226–233 (2017). https://doi.org/10.1016/j.apenergy.2016.02.066
5. Filatova, T., Parker, D., van der Veen, A.: Agent-based urban land markets: agent's pricing behavior, land prices and urban land use change. J. Artif. Soc. Soc. Simul. 12 (2009). http://jasss.soc.surrey.ac.uk/12/1/3.html
6. Sun, Z., Lorscheid, I., Millington, J.D., Lauf, S., Magliocca, N.R., Groeneveld, J., Balbi, S., Nolzen, H., Müller, B., Schulze, J., Buchmann, C.M.: Simple or complicated agent-based models? A complicated issue. Environ. Model. Softw. 86, 56–67 (2016). https://doi.org/10.1016/j.envsoft.2016.09.006
7. Partner, W.: Residential Market (2020). https://www.wuestpartner.com/publications/immobilienmarkt-schweiz-2020-2. Accessed 23 Apr 2021
8. Bradbury, M., Peterson, M.N., Liu, J.: Long-term dynamics of household size and their environmental implications. Popul. Environ. 36, 73–84 (2014). https://doi.org/10.1007/s11111-014-0203-6
9. Pagani, A., Ballestrazzi, F., Massaro, E., Binder, C.R.: ReMoTe-S. Residential Mobility of Tenants in Switzerland: an agent-based model. J. Artif. Soc. Soc. Simul. (2021)
10. Schulze, J., Müller, B., Groeneveld, J., Grimm, V.: Agent-based modelling of social-ecological systems: achievements, challenges, and a way forward. J. Artif. Soc. Soc. Simul. 20 (2017). https://doi.org/10.18564/jasss.3423
11. Boero, R., Squazzoni, F.: Does empirical embeddedness matter? Methodological issues on agent-based models for analytical social science. J. Artif. Soc. Soc. Simul. 8, 6 (2005). https://www.jasss.org/8/4/6.html
12. Harris, E., Nowicki, M.: "GET SMALLER"? Emerging geographies of micro-living. Area 52, 591–599 (2020). https://doi.org/10.1111/area.12625
13. Williams, J.: Shared living: reducing the ecological footprint of individuals in Britain. Built Environ. 28, 57–72 (2002). https://www.jstor.org/stable/23288551
14. Eurostat: Ageing Europe—statistics on housing and living conditions. https://ec.europa.eu/eurostat/statistics-explained/index.php/Ageing_Europe_-_statistics_on_housing_and_living_conditions. Accessed 27 Apr 2021
15. Knoeri, C., Binder, C.R., Althau, H.-J.: An agent operationalization approach for context specific agent-based modeling. J. Artif. Soc. Soc. Simul. 14 (2011). https://www.jasss.org/14/2/4.html
16. FSO: Swiss Federal Statistical Office—Construction and Housing. https://www.bfs.admin.ch/bfs/en/home/statistics/construction-housing/dwellings.html. Accessed 13 July 2020
17. Princen, T.: The Logic of Sufficiency. MIT Press, Cambridge, MA (2005)
The Problem with Bullying: Lessons Learned from Modelling Marginalization with Diverse Stakeholders

Themis Dimitra Xanthopoulou, Andreas Prinz, and F. LeRon Shults
Abstract While building a simulation model to gain insights on bullying interventions, we encountered challenging issues that forced us to reconsider the concepts we model. We learned lessons about the need for quality assurance and a more demanding construction process when building models that aim to support decision making. One of the lessons is that even academically accepted concepts such as "bullying" can be ambiguous. Experts and interested parties do not agree about how to define and use the term bullying. Indeed, before we can model "bullying", we need a shared understanding of its meaning. Otherwise, insights from the model could be misinterpreted and lead to misleading conclusions. Concepts are inherently imprecise and contain grey areas. Although this may be true, not all of them are ambiguous. For the scope of this paper, ambiguity implies that the same word is used to point to different concepts. For different reasons, bullying has evolved to point to different concepts for different people and sometimes even for the same person. We propose to solve these challenges by identifying which concrete bullying behaviors to target, and by focusing on simulation models for interventions addressing those behaviors.

Keywords Bullying · Social simulation · Formalization · Interventions · Agent-based modeling
1 Introduction

There are several reasons why social simulation appears to be a promising tool for research on (and the facilitation of) conflict resolution. First, the formalization of theories and causal claims about a conflict within a computational model helps to clarify the tangible issues surrounding the conflict and to foster dialogue about possible ways of resolving it [1]. Moreover, a single computational architecture
for an "artificial society" can integrate multiple disciplinary perspectives, which is crucial when dealing with complex interpersonal or inter-group conflicts [2]. Finally, insights from simulation experiments can inform debates among stakeholders and policy-relevant decisions in the real world [3]. At the beginning of this project, we viewed bullying as a complex type of social conflict that would benefit from a social simulation approach. A simulation model of bullying would improve existing intervention programs and proposed solutions, which so far have had mixed results [4–6]. The topic of bullying has been researched by multiple disciplines such as criminology, psychology, and sociology. In a telling comment about the state of bullying research, one of the keynote speakers at the Anti-Bullying Forum expressed the opinion that there is no theory of bullying. We believed that a social simulation of "bullying" could help to integrate the different points of view, enable intervention testing, provide reasons and solutions for inconsistent outcomes, and contribute to a robust theory of bullying. Based on these premises, we set out to create a model of bullying. Our model did turn out to be useful, but our efforts with various stakeholders to improve the model led to some important lessons. In Sect. 2, we describe our methodology to create a bullying model. We present the results of our efforts in Sect. 3 and discuss them in Sect. 4. From there, we propose a solution in Sect. 4.3 and conclude in Sect. 5.
2 Methodology

The goal of the planned methodology was the construction of a simulation model of bullying for interventions. More specifically, the aim of the model was to understand the emergence of bullying and to test which interventions would be successful in preventing this emergence. The purpose can be classified as explanatory [7] insofar as we were trying to figure out the causal architecture of bullying and to determine the reasons behind mixed outcomes in established intervention programs. However, we can also describe the model's purpose as exploratory in the sense that it attempts to provide a deeper understanding of the target system [8] and a deeper exposition of theories [7] around the concept of bullying. Our plan was to first capture the important dynamics and then add components to the model so that we can map the intervention mechanisms. To begin with, the construction of a social model is typically the work of an interdisciplinary team [9]. A social model of bullying needs a bigger network of collaborators because it requires more perspectives due to the nature of the subject. Moreover, bullying behaviors are considered a complex issue [10]. The construction of complex models comprises multiple steps. With each step, the model is extended and becomes more and more complex. Finally, the explanatory character of the model, alongside the prospect of supporting decision making, means that our model needs to meet higher quality standards. To meet the quality requirements, to address the complexity, and to account for the lack of bullying expertise in our team, we designed a methodology that drew
on the principles of the agile programming approach: "iteration" and "flexibility" [11]. We planned the following steps: The first step was to conduct an initial literature search, operationalise bullying, and select the model focus. The second step was the construction of a working simulation model of bullying to act as a starting point, a minimum viable product (MVP) to initiate the feedback sessions. The third step was the presentation of the first version to subject matter experts for feedback. Next, we planned to correct and expand the first model. The fifth step was the presentation of the second model version to subject matter experts for feedback and to request data for validation (where possible). The process was to be repeated until a satisfactory level of agreement and model capacity was reached. The final step was the validation of the model using interviews. The goal was to use the model to understand the impact of interventions. All in all, our planned input was: literature search, feedback sessions with experts, and interviews. The input methods were to be used as supplements, where needed, throughout the model construction process.
3 Results

In this section, we summarize the results of the model construction process. First, we explain the modelling choices for the first version of the bullying model. Then, we present the feedback we received from our first session with stakeholders. Finally, we display our findings from the intensive literature search and interviews regarding the bullying concept.
3.1 First Modelling Choices

We chose to focus on university bullying due to our context. Currently, there are two dominant definitions of bullying corresponding to different paradigms [12]. The oldest one, introduced by Olweus, which we will call the "Olweus definition", defines bullying as the "negative actions" of one or multiple persons towards one person, "repeated" over time, when the actor(s) and the receptor of the behavior have an "asymmetric power relationship" [13]. To build our first model version, we selected a recently evolved definition, which we will call the "Schott and Søndergaard definition", that views bullying as "... an intensification of the processes of marginalisation that occur in the context of the dynamics of inclusion/exclusion, which shape groups. Bullying happens when physical, social, or symbolic exclusion becomes extreme, regardless of whether such exclusion is experienced and/or intended." [12]. The second definition reflects a new paradigm of viewing bullying that puts more focus on the social dynamics component. This choice was the outcome of discussions with our main bullying subject matter expert.
We decided to use the agent-based approach for our simulation model as it allows the observation of emergent phenomena. This approach organises the subject matter knowledge around three main components: agents, attributes (environment, agent, general), and behaviors (including events). Such an approach is very close to the modes of thought of interventionists and adapts well to the needs of the jurisdictional system. Drawn from the selected bullying definition, the behavior we chose to model is the exclusion of university students from dyadic interactions, and negative experiences in dyadic interactions during leisure time at the university. We considered a student to be bullied when the proportion of negative interaction and exclusion experiences relative to all interactions is high.
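For illustration only, the following is a minimal sketch of how such an operationalisation could be expressed in code; it is not the authors' implementation, and the class name, probabilities, and classification threshold are hypothetical assumptions.

```python
import random

NEGATIVE, POSITIVE, EXCLUDED = "negative", "positive", "excluded"

class Student:
    def __init__(self, student_id, p_excluded=0.2, p_negative=0.1):
        self.id = student_id
        self.p_excluded = p_excluded   # chance of being excluded from a dyadic interaction
        self.p_negative = p_negative   # chance that a dyadic interaction is experienced as negative
        self.outcomes = []             # history of interaction outcomes

    def interact(self):
        # One leisure-time opportunity for a dyadic interaction.
        if random.random() < self.p_excluded:
            self.outcomes.append(EXCLUDED)
        elif random.random() < self.p_negative:
            self.outcomes.append(NEGATIVE)
        else:
            self.outcomes.append(POSITIVE)

    def is_bullied(self, threshold=0.5):
        # A student counts as bullied when the share of negative and exclusion
        # experiences among all interaction opportunities is high.
        if not self.outcomes:
            return False
        bad = sum(o in (NEGATIVE, EXCLUDED) for o in self.outcomes)
        return bad / len(self.outcomes) >= threshold

students = [Student(i) for i in range(30)]
for _ in range(100):                   # 100 leisure-time steps
    for s in students:
        s.interact()
print(sum(s.is_bullied() for s in students), "students classified as bullied")
```

In the actual model the interaction probabilities would of course depend on agent attributes and social dynamics rather than being fixed constants; the sketch only illustrates the classification rule.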
3.2 Diverse Feedback

To invite feedback from subject matter experts, we presented our work at the Anti-Bullying Forum [14]. We enquired whether our first version seemed intuitive and what additions we would need to make to proceed with a more intuitive model. The model presentation was in the form of a poster with a simplified explanation of how the model works as well as discussions stemming from the poster presentation. Most of the participants at the Anti-Bullying Forum were not familiar with computer modelling. Bullying experts and practitioners maintained a neutral attitude towards the first model version during the feedback sessions. They did not react negatively to our model with its agent characteristics and rules, but they did not endorse it either. An exception was our selected definition, which raised some questions due to the conflict among the different paradigms. Our primary goal during our interactions with conference participants was to invite their suggestions for improvement based on their individual understandings and research agenda. The input we received was very diverse, a fact that might not be so surprising since bullying experts come from such diverse disciplinary backgrounds. Each recommendation translated into one or more model variables, and one or more agent behaviors. Our initial model included 23 parameters, some of which are mathematical with the potential to contain up to 100 elements. Table 1 presents the input we received, divided into five categories.
3.3 Reflection on the Modelling Process and Further Literature Research

The next step after the Anti-Bullying Forum was the correction and extension of the bullying model, which proved a challenging task. Gaining some distance from the modelling process and moving to a period of reflection enabled us to acknowledge this
Table 1 Input grouped in categories. The column headings recoverable from the original layout are Personality, Personal skills, Context, and Social; the listed items include: psychological needs; personal tendency to isolate; resistance to belonging; aggressiveness; general acceptance; self-efficacy; acceptance from friends; sociometric status and perceived popularity; belonging; numb blindness; connection and disconnection from self and others; questioning personal perception; social and emotional learning; filtering information; goal setting; friendship networks; social capital; teacher as agents; social impact; social influence; peer influence; social norms; effect of role models; effect of leaders
difficulty and to distinguish it from the usual challenges of the modelling process. Our challenge was to select the aspect which would be added to the next model version. The diversity of the feedback shows that bullying researchers hold different views on what is considered crucial in determining bullying behaviors. This realisation is intensified by the fact that we received feedback from a limited number of people. Upon attending different sessions at the Anti-Bullying Forum, we discovered even more perspectives that might be included in our model. To make the most of the reflection process, we decided to supplement our input with additional literature searches. The next subsections present our findings.

Different Understandings of Bullying. Apart from the different academic definitions and thus understandings of what bullying is [12], we found that perceptions of bullying differ between academics and non-academics. In some studies [10, 15], researchers have noticed that views of what constitutes bullying did not coincide with the dominant definition at the time (the Olweus definition). To be more specific, in the study with teachers [10], teachers did not consider parts of the definition important for classifying a behavior as bullying, and one teacher changed what she considered bullying after hearing the definition from the researchers. In addition, in the study mentioned in [15], students changed their answers in a bullying survey after being given the definition. Interestingly enough, teachers did not judge only the characteristics
of the behavior itself to evaluate whether an observed behavior is bullying, but also factors such as a student's fitness to be called a victim, their judgement on whether the student "deserved" the behavior, the "normalcy" of the behavior, and the student's perception (as suggested by the second paradigm) [10]. The study mentioned in [16] also showed that the emotional effect of the behavior on the student at the receiving end of it counted as a factor for whether other students classified the behavior as bullying.

Measuring Bullying. One way to assess bullying is by using surveys. Cornell and Bandyopadhyay [17] point out that some surveys employ definitions to clarify what they mean by bullying while others use simpler versions of one of the definitions, which include ambiguous elements. Furthermore, the Juvenile Victimization Questionnaire avoided definitions altogether and asked two questions for the categories "bullying" and "teasing and emotional bullying". The categories were specified with the behaviors "chasing, grabbing hair or clothes, making you do something you did not want to" for the bullying category, and "feeling bad or scared because of calling names, saying mean things, or saying they did not want you around" for teasing or emotional bullying. The Olweus Bully/Victim Questionnaire (referring to the cited version [18]) uses the combination of definitions and behavior lists to achieve concept clarification. It includes more behavior categories than the Juvenile Victimization Questionnaire, such as intentional exclusion from a group of friends. In addition, it presents more behavior examples, including the general description of "other hurtful things like that", which is open to interpretation. As the questionnaire proceeds, it introduces the Olweus definition to restrict what counts as bullying and adds a note that says not to include playful teasing and behaviors that are not repetitive or behaviors between individuals without a power imbalance. Apart from surveys that ask people whether they have been bullied, there are surveys that ask others to nominate who has been bullied [17]. These surveys seem to operate under the same methods of concept clarification. Another method to measure bullying instances is naturalistic observation. In one example, observers counted as bullying the "aggressive events" in which there is a power relationship between the aggressor and the receiver of the aggressive behavior [19]. Finally, apart from the survey-based interviews or interviews that followed surveys [17], researchers have utilized interviews to explore bullying. It is not clear how interviewers measure bullying. In study [20], the authors mention that they based their assessment on the description of experiences but do not state whether they exposed interviewees to their own definitions of bullying or to examples of bullying behaviors. Similarly, in study [21], the authors mention that they asked parents and children whether their children or they themselves had been bullied, but they do not explicitly state whether they provided definitions or examples of bullying behaviors to interviewees to understand their perception of bullying.

Evolution of the Bullying Concept. It turns out that "bullying" evolved into an umbrella concept that accommodates various and quite diverse behaviors [22].
According to Schott and Søndergaard [12], the history of the concept traces back to the term "mobbing", understood as the attack on one person by a group of people. Later, with the help of the media, the term bullying started to convey behaviors of varying intensity and effect. Cohen et al. give the examples of non-physical behaviors such as social exclusion, criminal behaviors such as predatory sex crimes, mutual teasing, and rough-housing to account for behaviors that were given the label of bullying. At the same time, they mention that the concepts of "bully" and "victim" changed in such a way that most children can be categorized as either the former or the latter [22]. Possibly connected to the evolving character of the bullying concept, researchers have discovered perceptual differences between different stakeholders and researchers when it comes to what is categorized as bullying [10, 16, 23, 24].

Informal Interviews. We conducted interviews to further investigate the concept of bullying. We discussed bullying with several colleagues to test the variance even within one institution, namely, our university. The University of Agder has established a report system that is visible immediately when one visits the main university website [25]. The link for the report system contains information about how to use the system alongside information about bullying. However, it does not list the behaviors that fall into the category of "bullying". We chose to interview colleagues from different departments, in different positions (organisational or research related), and at various levels of decision making regarding bullying issues. We asked them to describe what bullying constitutes for them, to point to specific behaviors, and whether the behavior we had included in our model was registered under the bullying concept from their perspective. Most of our interviewees faced difficulty when trying to identify bullying behaviors. Interestingly enough, considering the small number of interviewees, they did not agree on whether the behavior we modelled was a bullying behavior. The hypothesis that there is a shared understanding of the bullying concept at our university was falsified.
4 Discussion

From Sect. 3.3, we can extract the following:

– Academics and non-academics do not agree on what is important to define bullying. In general, people have diverse views on what criteria to use to define a behavior as bullying.
– The categorization of bullying in practice does not involve only the assessment of the action itself but also subjective factors such as the effect on the "victim", how much the observer likes the "victim", etc. This might explain why one observed behavior might be interpreted differently by different observers.
– Bullying is evolving to include more and more behaviors. Nevertheless, we cannot be certain that everyone has the same access to the new aspects of the concept.
The disagreement on characterization criteria, the subjectiveness in evaluation, the inclusion of more and more behaviors, and the different access to the concept evolution make the concept of bullying unmanageable. In this section, we start by explaining the issues behind modelling "bullying" based on our findings. We then continue with an evaluation of the different ways we talk about bullying, characterized by different abstraction levels, in our search for a modelling alternative.
4.1 Expectations Behind Modelled Concepts

Before we go into the analysis, we should refer to concept fuzziness and ambiguity. Concept imprecision or fuzziness implies that there are grey zones in concepts. When we encounter a grey zone, we are not sure whether the concept can be applied to describe our observation. A "fuzzy" concept is something that cannot be avoided. Concerns over concept fuzziness have been addressed before, such as in the case of the social model "The Status-Power Arena" and the concept of "Rough and Tumble" [26]. In essence, modelling helps with concept precision since it exposes aspects of each concept's ontology. Term ambiguity implies that a term is used to describe two different concepts. An example is the term "crane", used to describe a type of bird or a machine. Apart from the distinction between fuzzy concepts and ambiguous terms, there is the moral judgement of an observed action. Due to the subjective nature of morality, moral judgements of the same observation vary. We can hypothesize the following scenario: bullying starts with the meaning of "all against one physical violence". This might have been a manageable use of the concept, but then the media extend the meaning of the term. In search of an explanation for deeply shocking events, such as student suicides and mass shootings [22], journalists tie more behaviors to the term. Researchers contribute to the trend by adding more dimensions to bullying with the development of definitions. Depending on individual media access and other sources, people develop the concept differently. On a collective level, bullying points to different concepts of actions ranging in intensity, context, etc. It is very easy to identify "crane" as an ambiguous term since the two concepts involved do not have any similarities. It is much harder to identify bullying as ambiguous since the concepts involved are all interactions of some kind. We believe that a more accurate characterization of bullying is to say that it is a moral judgement. You will rarely hear someone endorsing "bullying" behaviors (maybe only in cases of intended revenge). We propose that the term developed to include, essentially, negatively judged behaviors in interactions among people. This explains why teachers and students evaluate the effect of the behavior on the "victim", and the personality of the "victim", to assess whether a behavior is bullying. It is because these factors affect their moral sensitivities. Such a term is very useful in assisting the Anti-Bullying movement. While this is perfectly in line with the progress of a social movement, it is not helpful when it comes to being a modelled concept of actions. To successfully use an explanatory model of bullying, it is crucial to agree on what actions constitute bullying. Without a shared understanding, it is not possible
for interested stakeholders, including decision makers, to know how and where model insights apply. However, the state of the term "bullying" and the status of model communication methods cannot promise clarity over what is modelled and what is not. Model communication is a set of techniques, such as model documentation, to help us illustrate what the model does and what it includes. Nevertheless, in practice, even if documentation is available, model specifics are not always understood [27]. Consequently, model documentation is not the best way to clarify our definition of bullying to people without modelling experience, such as bullying experts. They will typically assume that we follow their definition. In the following sections, we try to identify an alternative to the use of the term bullying for modelling purposes.
4.2 Exploration of Alternatives

In Fig. 1, we have mapped different items related to the word bullying and ordered them by their abstraction level in an effort to find alternative modelling content. A lower level of abstraction means that the item is more closely connected to the object, in this case the observed action. The first items correspond to the different definitions given by researchers. Different definitions point to different concepts. For example, the behavior we used in our first model version [28] would be categorized as bullying under the Schott and Søndergaard definition but would not be under the Olweus definition. Definitions are still unfit to serve as modelling content, as non-academics do not use their elements to categorize behaviors as bullying. Consequently, were we to employ definitions, we would face issues with validating our models, as measurements do not typically include observations from academics. In addition, it would still be hard to communicate clearly the insights and limitations of the model to interested stakeholders. Bullying modes are on a similar level of abstraction as definitions. Bullying modes specify behaviors by limiting them to a specific context. For example, physical bullying needs to include physical behaviors. Bullying modes are still ambiguous since they inherit the same properties as the term bullying in a more specified context. When we move on to the level of behaviors, we notice that it is easier to distinguish among different behaviors. For example, it is easy to say which behavior is a "name-calling" behavior and which is "hitting". Modelling behaviors might imply that we depart from the term bullying, since not all behaviors are unanimously categorized as bullying. In addition, as with Rough and Tumble, we may still face grey areas and confusion over judgement. Nevertheless, it is much clearer to distinguish between two types of behaviors, and thus for theories and explanations to emerge.
Fig. 1 Bullying at different levels of abstraction
4.3 Proposed Solution

We argue that the solution is to model concrete behaviors. A model of "hitting" or "name-calling" gives less space for speculation and does not introduce uncertainty about the results. Even though there is less confusion about what the model includes, these behaviors are still complex in nature. Some anticipated implications of following the proposed solution are:

– The feedback from literature and experts used in the model construction process will be more straightforward.
– Stakeholders will be forced to clarify what issues they need to solve instead of mentioning abstract notions such as "bullying".
– Stakeholders can readily apply insights from models without worrying about misinterpretations.
– Multiple models will need to be used in combination to achieve the resolution of a variety of behaviors.
– On the negative side, we expect fewer people to be interested in these models since bullying has become a catchphrase.
5 Conclusions and Word of Warning

The feedback we received from stakeholders on our explanatory model of bullying led us into a deeper investigation of the concept using literature search and interviews. We found out that bullying is evolving and expanding, matches more than one concept, and fits the form of a moral judgement better than that of an action concept. The ambiguity of the term leads to inconsistent measurements. Considering the term "bullying" as a moral judgement might explain why there is a big range of understandings of what bullying is, and why factors such as the relationship between observer and behavior recipient, and the result of the behavior, play a role in characterizing the behavior as "bullying". Bullying definitions and bullying modes are less abstract ways to talk about the same issues. Nevertheless, even at this level of abstraction, we face similar discrepancies. We propose to model concrete behaviors and to move away from "bullying" so as to avoid misunderstandings regarding model usage. This conclusion is reinforced by the limited effectiveness of current model communication methods. More work needs to be done to identify possible implications of using concepts of concrete behaviors to test for interventions. Our study concerns bullying, but the same issues appear whenever concepts are not only fuzzy, but ambiguous. The bullying community does not seem to be aware of the issue, and the simulation community has not established procedures to assess the fitness of concepts. We encourage modellers to consider whether their modelled concepts might raise similar issues to the ones we faced. In that case, we suggest the further specification of models that target specific behaviors, and we encourage modellers to avoid ambiguity that hinders theoretical clarity and successful practical interventions.
References

1. Shults, F.L., Gore, R., Wildman, W.J., Lynch, C., Lane, J.E., Toft, M.: A generative model of the mutual escalation of anxiety between religious groups. J. Artif. Soc. Soc. Simul. 21(4), 1–25 (2018)
2. Shults, F.L., Wildman, W.J.: Human simulation and sustainability: ontological, epistemological, and ethical reflections. Sustainability 12(23) (2020)
3. Mehryar, S., Sliuzas, R., Schwarz, N., Sharifi, A., van Maarseveen, M.: From individual fuzzy cognitive maps to agent based models: modeling multi-factorial and multi-stakeholder decision-making for water scarcity. J. Environ. Manag. 250(109482) (2019)
4. Vreeman, R.C., Carroll, A.E.: A systematic review of school-based interventions to prevent bullying. Arch. Pediatr. Adolesc. Med. 161(1), 78–88 (2007)
5. Jeong, S., Lee, B.H.: A multilevel examination of peer victimization and bullying preventions in schools. J. Criminol. 2013 (2013)
6. Smith, P.K., Ananiadou, K., Cowie, H.: Interventions to reduce school bullying. Can. J. Psychiatry 48(9), 591–599 (2003)
7. Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola Sales, C., Ormerod, P., Root, H., Squazzoni, F.: Different modelling purposes. J. Artif. Soc. Soc. Simul. 22(3), 6 (2019)
8. Ligmann-Zielinska, A., Siebers, P.O., Magliocca, N., Parker, D.C., Grimm, V., Du, J., Cenek, M., Radchuk, V., Arbab, N.N., Li, S., Berger, U., Paudel, R., Robinson, D.T., Jankowski, P., An, L., Ye, X.: "One Size Does Not Fit All": a roadmap of purpose-driven mixed-method pathways for sensitivity analysis of agent-based models. J. Artif. Soc. Soc. Simul. 23(1), 6 (2020)
9. Xanthopoulou, T.D., Prinz, A., Shults, F.L.: Generating executable code from high-level social or socio-ecological model descriptions. In: i Casas, P.F., Sancho, M.R., Sherratt, E. (eds.) System Analysis and Modeling. Languages, Methods, and Tools for Industry 4.0, vol. 11753, pp. 150–162. Springer International Publishing, Cham (2019). https://doi.org/10.1007/978-3-030-30690-8_9
10. Mishna, F., Scarcello, I., Pepler, D., Wiener, J.: Teachers' understanding of bullying. Can. J. Educ./Rev. Can. l'éducation 28(4), 718–738 (2005)
11. Moyo, D., Ally, A.K., Brennan, A., Norman, P., Purshouse, R.C., Strong, M.: Agile development of an attitude-behaviour driven simulation of alcohol consumption dynamics. J. Artif. Soc. Soc. Simul. 18(3), 10 (2015)
12. Schott, R.M., Søndergaard, D.M. (eds.): School Bullying: New Theories in Context. Cambridge University Press, Cambridge (2014)
13. Olweus, D.: Aggressive Behavior: Current Perspectives. Springer, Boston (1994)
14. World anti-bullying forum (2019). https://worldantibullyingforum.com/previous-forums/wabf-2019/. Accessed 21 May 2021
15. Vaillancourt, T., McDougall, P., Hymel, S., Krygsman, A., Miller, J., Stiver, K., Davis, C.: Bullying: are researchers and children/youth talking about the same thing? Int. J. Behav. Dev. 32(6), 486–495 (2008)
16. Guerin, S., Hennessy, E.: Pupils' definitions of bullying. Eur. J. Psychol. Educ. 17(3), 249–261 (2002)
17. Cornell, D.G., Bandyopadhyay, S.: Handbook of Bullying in Schools: An International Perspective, 1st edn, p. 12. Routledge (2009)
18. Olweus, D.: The Olweus bully/victim questionnaire. https://www.researchgate.net/publication/247979482. Accessed May 2021
19. Hawkins, D.L., Pepler, D.J., Craig, W.M.: Naturalistic observations of peer interventions in bullying. Soc. Dev. 10(4), 512–527 (2001)
20. Lund, I., Ertesvåg, S., Roland, E.: Listening to shy voices: shy adolescents' experiences with being bullied at school. J. Child Adolesc. Trauma 3(3), 205–223 (2010)
21. Christensen, L.L., Fraynt, R.J., Neece, C.L., Baker, B.L.: Bullying adolescents with intellectual disability. J. Ment. Health Res. Intellect. Disabil. 5(1), 49–65 (2012)
22. Cohen, J.W., Brooks, R.A.: The Cambridge Handbook of Social Problems. Cambridge University Press, Cambridge (2018)
23. Boulton, M.J., Trueman, M., Flemington, I.: Associations between secondary school pupils' definitions of bullying, attitudes towards bullying, and tendencies to engage in bullying: age and sex differences. Educ. Stud. 28(4), 353–370 (2002)
24. Naylor, P., Cowie, H., Cossin, F., de Bettencourt, R., Lemme, F.: Teachers' and pupils' definitions of bullying. Br. J. Educ. Psychol. 76(3), 553–576 (2006)
25. Speak up. https://www.uia.no/en/about-uia/speak-up. Accessed 23 May 2021
26. Hofstede, G.J., Student, J., Kramer, M.R.: The status-power arena: a comprehensive agent-based model of social status dynamics and gender in groups of children. AI Soc. (2018)
27. Kolkman, D.: The (in)credibility of algorithmic models to non-experts. Inf. Commun. Soc. 1–17 (2020)
28. Xanthopoulou, T.D., Puga-Gonzalez, I., Shults, F.L., Prinz, A.: Modeling marginalization: emergence, social physics, and social ethics of bullying. In: 2020 Spring Simulation Conference (SpringSim), pp. 1–12. IEEE (2020)
On the Impact of Misvaluation on Bilateral Trading

Sacha Bourgeois-Gironde and Marcin Czupryna
Abstract Subjective biases and errors systematically affect market equilibria, whether at the population level or in bilateral trading. Here, we consider the possibility that an agent engaged in bilateral trading is mistaken about her own value of the good she expects to trade. Although it may sound paradoxical that a subjective private valuation is something an agent can be mistaken about, as it is up to her to fix it, we consider the case in which that agent, seller or buyer, given the structure of a market, a type of goods, and a temporary lack of information, may, more or less consciously, state an erroneous valuation. The typical context in which this possibility may arise is in relation to so-called experience goods, which are sold while all their intrinsic qualities are still unknown (like, e.g. untasted bottled fine wines). We model that "private misvaluation" phenomenon. The agents can also be mistaken about how their exchange counterparts are themselves mistaken. We analyse and simulate the consequences of first-order and second-order private misvaluation on market equilibria and bubbles, and notably focus on the context where the second-order expectations about the other agent's misvaluation are not met.

Keywords Bilateral trading · Private value · Misvaluation · Second order misvaluation · Stubbornness
1 Introduction

It is well known, since the seminal work of Chatterjee and Samuelson (1983) [3] (but see also [6]), that bilateral trade under incomplete information cannot yield all the benefits from trade at equilibrium. We keep up with this basic mechanism to study the efficiency effect of misvaluation.
We model the situation where sellers or buyers in a bilateral trading setting can be mistaken about the value of the good they aim to exchange on the market. At first sight it can be counter-intuitive to allow for misvaluation in this context, because agents can freely decide the value they assign to their goods. This self-assigned value will determine the equilibrium price. Value, being a purely subjective factor, is in principle regarded as immune to misidentification. We consider the cases in which private valuation can nevertheless be mistaken in certain ways and in which such misvaluation can also affect exchange equilibria. A class of instances is provided by so-called experience goods, i.e. goods whose evaluation presupposes some direct experience in order to be subjectively appreciated and valued but for which, let's admit, the circumstances of trade could temporarily prevent such direct evaluation. We can think of wine trading and suppose that at the moment of the exchange the objective quality of the wine is still partially unknown and no subjective appraisal has occurred (the bottle has not been opened and the wine tasted). The private value that a buyer (to a producer) or a seller (who generally knows the wine he produced but has not tasted this particular sample) assigns can then differ from the objective value on which the private value should be anchored. It means that if the buyer and the seller had further information on the real quality of the wine they would agree to change their valuation, which entails that in the present circumstances of trading they can, consciously or not, misvalue the wine. In such a case, two subsequent phenomena can occur. Firstly, the traders can also estimate that the partner in trading actually misvalues the good in different directions (under- or over-valuing it). Each of them may then attribute to the other an interval of first-order misvaluation, which attribution defines what we label second-order misvaluation. Secondly, when trading attempts occur, the traders can react in different ways to the signal they receive and to the extent to which that signal differs from their own private value. Even if they think they are liable to misvaluation, they have no particular reason to think, due to the possibility of second-order misvaluation, that they should fully take the partner's signal into account in the revision of their own private value. They most likely distribute, in the revision process, relative weights between their initial valuation and the signal they receive. We model this revision process under first-order and second-order misvaluation. In particular, in our model and subsequent numerical simulations of this phenomenon, we crucially envision the case where one of the agents receives, from the price quoted by the other agent, a signal that lies beyond what he expected even when taking into account the possibility of that other agent's misvaluation.
The extent to which agents update their own beliefs when receiving a signal that lies beyond their valuation interval defines what we call stubbornness on the part of the agents, see [1]. Full stubbornness consists in considering that a signal outside my predicted interval of the other's valuation is not informative at all, and that I should
keep to my own beliefs. Total absence of stubbornness means the opposite: it could be said that, in the absence of stubbornness, I also lack confidence in my own valuation and my own beliefs in general, and decide to update them as a function of the received signal, however discrepant it is with my initial expectations. Most likely, a typical agent will display a degree of stubbornness. For instance, if that agent expected that the quoted price would not exceed 20 but it is actually 30, perhaps that agent will revise his expectation about the true value of the item to 25, showing thereby a certain level of revision flexibility, or, likewise, of partial stubbornness. In sum, private valuation can be mistaken due to intrinsic features of the good, the structure of the market, and the psychological state of the agent, when a private value is held sincerely but could be revised, at least partially, under more information.1 The misvaluation phenomenon should therefore be distinguished from overconfidence, in the sense that with the latter bias, the temptation not to revise one's initial private value under better informational circumstances prevails. However, even misvaluation in the sense in which we define it can lead to equilibrium failures and market bubbles. This is due to the fact that common knowledge of a potential lack of information on both sides entails the phenomenon of second-order misvaluation, and to the possibility that received signals fall outside the scope of these second-order misvaluation intervals.
2 Bilateral Trading

We use the same framework as in [3]. Let us consider the special case and assume that the value (the reservation price in the original paper) for a seller $v_s$ is uniformly distributed over the interval $[\underline{v}_s, \overline{v}_s]$. Analogously, the value for a buyer $v_b$ is uniformly distributed over the interval $[\underline{v}_b, \overline{v}_b]$. Furthermore, we assume that the maximum possible value for a buyer is higher than the minimum possible value for a seller, $\overline{v}_b > \underline{v}_s$; otherwise no trading would be possible. We also assume that the final price $p$ is set based on the buyer bid price $b$ and the seller offer price $s$ using the formula $p = kb + (1-k)s$, for $k \in [0, 1]$. The parameter $k$ can be interpreted as bargaining power.
2.1 Over- and Under-Misvaluation

We now assume that both buyers and sellers may experience some private misvaluation of the good to be exchanged. We therefore consider that the value for a seller experiencing private misvaluation is modified by adding the constant $\delta_s$, which represents the level of over-valuation (if positive) or under-valuation (if negative). As a consequence,

1 Following Squintani in [7], we consider two situations in our paper depending on whether or not the agents try to rationalize the signals they receive.
$v_s + \delta_s$ is uniformly distributed over the interval $[\underline{v}_s + \delta_s, \overline{v}_s + \delta_s]$. Analogously, the value for a misvaluing buyer $v_b + \delta_b$ is uniformly distributed over the interval $[\underline{v}_b + \delta_b, \overline{v}_b + \delta_b]$. As explained above, we also consider that sellers and buyers can make predictions about the degree to which their partner succumbs to private misvaluation. In doing so they are liable to amplify or underestimate that phenomenon. This possibility defines what we label second-order misvaluation. Formally, we introduce the constant $\hat{\delta}_s$ as the parameter that represents how a seller estimates the buyer's misvaluation. In particular, a seller thinks that the buyer's value is uniformly distributed over the interval $[\underline{v}_b + \hat{\delta}_s, \overline{v}_b + \hat{\delta}_s]$. Analogously, the buyer thinks that the seller's value is uniformly distributed over the interval $[\underline{v}_s + \hat{\delta}_b, \overline{v}_s + \hat{\delta}_b]$. We may thus define the following measures:

1. $\delta_s$, which is a seller's misvaluation level;
2. $\delta_b$, which is a buyer's misvaluation level;
3. $\hat{\delta}_s - \delta_b$, which is the difference between a buyer's misvaluation level as estimated by a seller and the actual misvaluation level of the buyer; this can be defined as the seller's objective second-order misvaluation level;
4. $\hat{\delta}_b - \delta_s$, which, reciprocally, is the difference between a seller's misvaluation level as estimated by a buyer and the actual misvaluation level of the seller; this refers to the buyer's objective second-order misvaluation level;
5. $\hat{\delta}_s - \delta_s$, which is the difference between the misvaluation level of a buyer as estimated by a seller and the seller's own misvaluation level; this, in our terms, is the seller's subjective second-order misvaluation level;
6. $\hat{\delta}_b - \delta_b$, which is the difference between the misvaluation level of a seller as estimated by a buyer and the buyer's own misvaluation level; in other terms, the buyer's subjective second-order misvaluation level.

If the above-defined measures assume positive values we may observe over-valuation, and accordingly under-valuation for negative values. Inspired by Squintani's paper, see [7], we firstly assume that both seller and buyer may be mistaken but do not realise it. Therefore each of them will play according to the strategy defined in Sect. 2.2. The players play according to their beliefs about their own and the trading partner's value distributions. We also consider two kinds of equilibria: naive and sophisticated. In the first equilibrium class, it is assumed that the players play rationally by simply adapting to the observed strategy of the trading partners but try not to infer consequences from these observations about their true valuation levels and beliefs. The deduction and the subsequent revision of the players' beliefs is considered in the sophisticated equilibrium class.2
2 In the simple case discussed in this paper, when the updating process keeps the difference between first- and second-order misvaluation levels constant, there will be no difference in bid and ask prices for these two updating strategies.
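To fix intuitions about the measures defined above, consider a purely illustrative numerical example (the numbers are ours, not taken from the paper): suppose the seller over-values by $\delta_s = 2$ and estimates that buyers over-value by $\hat{\delta}_s = 1$, while the buyer under-values by $\delta_b = -1$ and estimates that sellers are unbiased, $\hat{\delta}_b = 0$. The seller's objective second-order misvaluation level is then $\hat{\delta}_s - \delta_b = 2$, the buyer's is $\hat{\delta}_b - \delta_s = -2$, the seller's subjective second-order misvaluation level is $\hat{\delta}_s - \delta_s = -1$, and the buyer's is $\hat{\delta}_b - \delta_b = 1$.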
2.2 Trading Equilibrium

The equilibrium offer strategy of the seller is defined as follows. For $v_s$ satisfying the condition given in Eq. (5),

$v_s \ge \frac{2-k}{2}\,\overline{v}_b + \frac{k}{2}\,\underline{v}_s + \frac{2-k}{2}\left(\hat{\delta}_s - \delta_s\right)$ (5)

the offer is given in Eq. (6):

$s(v_s) \ge \frac{1}{2-k}\,v_s + \frac{1-k}{2}\,\overline{v}_b + \frac{k(1-k)}{2(2-k)}\,\underline{v}_s + \frac{1+k}{2}\,\delta_s + \frac{1-k}{2}\,\hat{\delta}_s$ (6)
Analogously, the strategy for a buyer is presented below. For $v_b$ satisfying the condition given in Eq. (7),

$v_b \le \frac{1+k}{2}\,\underline{v}_s + \frac{1-k}{2}\,\overline{v}_b + \frac{1+k}{2}\left(\hat{\delta}_b - \delta_b\right)$ (7)

the bid is given in Eq. (8):

$b(v_b) \le \frac{1}{1+k}\,v_b + \frac{k}{2}\,\underline{v}_s + \frac{k(1-k)}{2(1+k)}\,\overline{v}_b + \frac{2-k}{2}\,\delta_b + \frac{k}{2}\,\hat{\delta}_b$ (8)
For $v_b$ satisfying the condition given in Eq. (9),

$\frac{1+k}{2}\,\underline{v}_s + \frac{1-k}{2}\,\overline{v}_b + \frac{1+k}{2}\left(\hat{\delta}_b - \delta_b\right) \le v_b \le \frac{1+k}{2-k}\,\overline{v}_s + \frac{1-k}{2}\,\overline{v}_b - \frac{k(k+1)}{2(2-k)}\,\underline{v}_s + \frac{1+k}{2}\left(\hat{\delta}_b - \delta_b\right)$ (9)

the value of the bid is given in Eq. (10):

$b(v_b) = \frac{1}{1+k}\,v_b + \frac{k}{2}\,\underline{v}_s + \frac{k(1-k)}{2(1+k)}\,\overline{v}_b + \frac{2-k}{2}\,\delta_b + \frac{k}{2}\,\hat{\delta}_b$ (10)

And finally, for $v_b$ satisfying the condition given in Eq. (11),

$v_b > \frac{1+k}{2-k}\,\overline{v}_s + \frac{1-k}{2}\,\overline{v}_b - \frac{k(k+1)}{2(2-k)}\,\underline{v}_s + \frac{1+k}{2}\left(\hat{\delta}_b - \delta_b\right)$ (11)

the bid is given in Eq. (12):

$b(v_b) = \frac{1}{2-k}\,\overline{v}_s + \frac{1-k}{2}\,\overline{v}_b + \frac{k(1-k)}{2(2-k)}\,\underline{v}_s + \frac{1+k}{2}\,\hat{\delta}_b + \frac{1-k}{2}\,\delta_b$ (12)
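To make these strategies easier to experiment with, the following is a minimal numerical sketch (ours, not the authors' code) of the linear strategies and of the pricing rule $p = kb + (1-k)s$; the function names, default bounds, and parameter values are illustrative assumptions, and the boundary regions of Eqs. (5)–(8) and (11)–(12) are not handled.

```python
# Illustrative sketch of the linear strategies with misvaluation.
# d_s, d_b are first-order misvaluation levels; d_s_hat, d_b_hat are the
# corresponding estimates of the partner's misvaluation.

def seller_offer(vs, k, vb_hi, vs_lo, d_s, d_s_hat):
    """Seller's linear offer (the expression on the right-hand side of Eq. 6)."""
    return ((vs + d_s) / (2 - k)
            + (1 - k) / 2 * (vb_hi + d_s_hat)
            + k * (1 - k) / (2 * (2 - k)) * (vs_lo + d_s))

def buyer_bid(vb, k, vb_hi, vs_lo, d_b, d_b_hat):
    """Buyer's linear bid (Eq. 10)."""
    return ((vb + d_b) / (1 + k)
            + k / 2 * (vs_lo + d_b_hat)
            + k * (1 - k) / (2 * (1 + k)) * (vb_hi + d_b))

def trade(vs, vb, k=0.5, vb_hi=1.0, vs_lo=0.0,
          d_s=0.0, d_s_hat=0.0, d_b=0.0, d_b_hat=0.0):
    s = seller_offer(vs, k, vb_hi, vs_lo, d_s, d_s_hat)
    b = buyer_bid(vb, k, vb_hi, vs_lo, d_b, d_b_hat)
    if b >= s:                        # trade happens only if the bid covers the offer
        return k * b + (1 - k) * s    # price rule p = kb + (1 - k)s
    return None                       # no trade

# Without misvaluation, with k = 1/2 and values on [0, 1], this reproduces the
# familiar linear strategies s(vs) = 2/3 vs + 1/4 and b(vb) = 2/3 vb + 1/12.
print(trade(vs=0.2, vb=0.9))
```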
Let us now consider as an example, the differences between expected and quoted price, first by a seller and then by a buyer. The seller will trade the following price, see Eq. 13: s(vs ) =
k (1 − k) 1 1−k v + δs v b + δs + (vs + δs ) + 2−k 2 2 (2 − k) s
(13)
However the buyer thinks that a trader would be playing, see Eq. 14: b (vs ) =
1−k 1 k (1 − k) vs + δb + v s + δb (v b + δb ) + 2−k 2 2 (2 − k)
(14)
Thus the difference between the seller's real offer and the buyer's expectation can be expressed using the formula in Eq. 15 and is a weighted sum of the seller's and buyer's second order confidence levels:

$$s(v_s) - \hat{b}(v_s) \;=\; \frac{k+1}{2}\left(\delta_s - \overline{\delta}_b\right) + \frac{1-k}{2}\left(\overline{\delta}_s - \delta_b\right) \qquad (15)$$
Similarly, the difference between the buyer's real bid and the seller's expectation can be expressed using the formula in Eq. 16 and is a weighted sum of the seller's and buyer's second order confidence levels:

$$b(v_b) - \hat{s}(v_b) \;=\; \frac{k}{2}\left(\overline{\delta}_b - \delta_s\right) + \frac{2-k}{2}\left(\delta_b - \overline{\delta}_s\right) \qquad (16)$$
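As a quick numerical illustration of Eqs. 13–15, the snippet below evaluates the seller's offer and the buyer's expectation of it for one arbitrary parameter set and compares their difference with the weighted sum of second order terms. It is only a sketch of the algebra above, written in Python for illustration, and is not part of the authors' simulation code.

```python
# Illustrative check of Eqs. 13-15 for one arbitrary parameter set.
k = 0.5
v_s_lo, v_b_hi = 0.0, 1.0               # underline v_s and overline v_b
delta_s, delta_b = 0.10, 0.05           # first order misvaluation levels
delta_s_bar, delta_b_bar = 0.02, 0.08   # second order estimates

def seller_offer(v_s):
    # Eq. 13: the seller's actual offer
    return (1 / (2 - k)) * (v_s + delta_s) \
        + ((1 - k) / 2) * (v_b_hi + delta_s_bar) \
        + (k * (1 - k) / (2 * (2 - k))) * (v_s_lo + delta_s)

def expected_offer(v_s):
    # Eq. 14: the offer the buyer expects from a seller with value v_s
    return (1 / (2 - k)) * (v_s + delta_b_bar) \
        + ((1 - k) / 2) * (v_b_hi + delta_b) \
        + (k * (1 - k) / (2 * (2 - k))) * (v_s_lo + delta_b_bar)

v_s = 0.4
lhs = seller_offer(v_s) - expected_offer(v_s)
rhs = ((k + 1) / 2) * (delta_s - delta_b_bar) + ((1 - k) / 2) * (delta_s_bar - delta_b)  # Eq. 15
print(lhs, rhs)  # the two values coincide (up to floating point error)
```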
2.3 Trading Dynamics

We have already assumed that both a seller and a buyer will play according to the strategy defined in Eqs. 1–12, according to their beliefs about their own and the trading partner's value distributions. We now analyse when agents can realise that assumptions about their own value or their expectations about the value of the trading partner may be wrong. We will consider four cases:

– a buyer quotes a higher price than the price expected by a seller
– a buyer quotes a lower price than the price expected by a seller
– a seller quotes a lower price than the price expected by a buyer
– a seller quotes a higher price than the price expected by a buyer.
Let us also assume that buyers and sellers believe that the ranges containing the potential values for buyers and sellers intersect. This assumption means that the following conditions 17 and 18 are satisfied:

$$\overline{v}_b - \underline{v}_s \;>\; \delta_s - \overline{\delta}_s \;>\; \underline{v}_b - \overline{v}_s \qquad (17)$$

$$\overline{v}_b - \underline{v}_s \;>\; \overline{\delta}_b - \delta_b \;>\; \underline{v}_b - \overline{v}_s \qquad (18)$$
These assumptions are not excessively restrictive and simplify the analysis of the dynamics.

Buyer quotes a higher price than the price expected by the seller. Sellers first consider which of the two potential prices, all buyers' or all sellers' maximum price, is higher based on their beliefs. The former is satisfied when the condition given in Eq. 19 is met:

$$\overline{v}_b - \frac{2}{2-k}\,\overline{v}_s + \frac{k}{2-k}\,\underline{v}_s \;\ge\; \delta_s - \overline{\delta}_s \qquad (19)$$
In such a case, a seller believes that a buyer should not quote a price higher than what the buyer considers as the possible maximum sell price quoted by any seller.3 Let us now consider the situation when the condition given in Eq. 19 is not met. In such a case a seller believes that a buyer should not quote a price higher than the possible maximum buy price quoted by any buyer.4 Let Δ denote the difference between the actual bid price and the bid price expected by a seller. Sellers will increase their offers by Δ × β5 without changing the values of $\delta_s$ and

3 This is the price, from a seller's perspective, that would be quoted by a seller having the maximum possible private value. The value can be calculated by substituting the value $\overline{v}_s$ for the variable $v_s$ in Eq. 4.
4 This is the price, from a seller's perspective, that would be quoted by a buyer having the maximum possible private value. The value can be calculated by substituting the value $\overline{v}_b$ for the variable $v_b$ in Eq. 10 and replacing $\delta_b$ with $\overline{\delta}_s$ and $\overline{\delta}_b$ with $\delta_s$.
5 The parameter β represents the stubbornness of a seller.
$\overline{\delta}_s$ when they use the naive updating strategy. On the other hand, sellers will increase the values of $\delta_s$ and $\overline{\delta}_s$ by Δ × β when they use the sophisticated strategy. It is also worth mentioning that both updating strategies, the naive and the sophisticated, lead to the sellers' ask prices increasing by Δ, but to differences in the situations when the border cases discussed in Sect. 2.2 apply. The updating process in the next paragraphs is analogous to the one described here.

Buyer quotes a lower price than the price expected by the seller. Sellers first consider which of the two potential prices, all buyers' or all sellers' minimum price, is lower based on their beliefs. The former is satisfied when the condition given in Eq. 20 is met:

$$\underline{v}_s + \frac{1-k}{1+k}\,\overline{v}_b - \frac{2}{1+k}\,\underline{v}_b \;>\; \overline{\delta}_s - \delta_s \qquad (20)$$
In such a case a seller knows that a buyer may bid any price (see Eq. 8) and does not update. Let us now consider the situation when the condition given in Eq. 20 is not met. In such a case a seller knows that a buyer should not quote a price lower than the possible minimum buy price quoted by any buyer.6

Seller quotes a higher price than the price expected by the buyer. Buyers first consider which of the two potential prices, all buyers' or all sellers' maximum price, is higher based on their beliefs. The former is satisfied when the condition given in Eq. 21 is met:

$$\overline{v}_b - \frac{2}{2-k}\,\overline{v}_s + \frac{k}{2-k}\,\underline{v}_s \;\ge\; \overline{\delta}_b - \delta_b \qquad (21)$$
In such a case a buyer knows that a seller should not quote a price higher than the possible maximum sell price quoted by any seller. Let us now consider the situation when the condition given in Eq. 21 is not met. A buyer then knows that a seller may ask any price (see Eq. 6) and does not update.

Seller quotes a lower price than the price expected by the buyer. Buyers first consider which of the two potential prices, all buyers' or all sellers' minimum price, is lower based on their beliefs. The former is satisfied when the condition given in Eq. 22 is met:

$$\underline{v}_s + \frac{1-k}{1+k}\,\overline{v}_b - \frac{2}{1+k}\,\underline{v}_b \;>\; \delta_b - \overline{\delta}_b \qquad (22)$$
In such a case, a buyer believes that a seller should not quote a price lower than the possible minimum sell price quoted by any seller. Let us now consider the situation when the condition given in Eq. 22 is not met. In such a case a buyer believes that a seller should not quote a price lower than what a seller considers as the possible minimum buy price quoted by any buyer.

6 This is the price, from a seller's perspective, that would be quoted by a buyer having the minimum possible private value. The value can be calculated by substituting the value $\underline{v}_b$ for the variable $v_b$ in Eq. 10 and replacing $\delta_b$ with $\overline{\delta}_s$ and $\overline{\delta}_b$ with $\delta_s$.
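The two belief-updating modes described at the beginning of this subsection can be summarised in a few lines. The following Python fragment is only an illustrative sketch (the paper's model is implemented in Java with MASON); here `delta` stands for the gap Δ between the observed and the expected quote and `beta` for the stubbornness parameter.

```python
class Seller:
    def __init__(self, delta_s, delta_s_bar):
        self.delta_s = delta_s          # seller's own misvaluation level
        self.delta_s_bar = delta_s_bar  # seller's estimate of the buyer's misvaluation
        self.ask_shift = 0.0            # additive adjustment of the quoted ask

def naive_update(seller, delta, beta):
    # Naive mode: shift the quoted ask directly; the belief parameters stay unchanged.
    seller.ask_shift += delta * beta

def sophisticated_update(seller, delta, beta):
    # Sophisticated mode: revise delta_s and its second order counterpart by delta * beta;
    # the ask then changes implicitly through the offer strategy (Eq. 13).
    seller.delta_s += delta * beta
    seller.delta_s_bar += delta * beta
```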
3 Agent Based Approach

3.1 Model

We analyse the effects of the biases represented by the parameters $\delta_s$, $\overline{\delta}_s$, $\delta_b$, $\overline{\delta}_b$ on the trading probability and the observed mean price by means of simulation. The results in some special cases may be straightforward. For example, when all parameters assume the same value, $\delta_s = \overline{\delta}_s = \delta_b = \overline{\delta}_b$, the trade probability remains unchanged. For more general results and a systematic numerical exploration of the effects of our different configurations of overconfidence on trading we used a simulation approach. For this purpose we assume that each of the parameters $\delta_s$, $\overline{\delta}_s$, $\delta_b$, $\overline{\delta}_b$ can vary in the range [0, 0.5], β in the range [0.25, 1] and k in the range [0.25, 0.75]. For each seller the value $v_s$ is drawn from the uniform distribution over the interval $[\gamma_s,\ 1 + \gamma_s]$, whereas for each buyer the value $v_b$ is drawn from the uniform distribution over the interval $[\gamma_s + \gamma_b,\ 1 + \gamma_s + \gamma_b]$. Both $\gamma_s$ and $\gamma_b$ are simulation parameters that may assume values in the ranges [0, 1] and [0, 0.5] respectively. Moreover, we consider the following two strategy updating modes, represented by the binary variable u: naive (u = 1) and sophisticated (u = 0). As defined in Eq. 6, in some special cases sellers may propose any price, provided it is high enough, and analogously in Eq. 8 buyers may propose any price, provided it is low enough. We model this by assuming that the differences to the prices that satisfy the relevant conditions with equality are drawn from a uniformly distributed random variable that assumes values in the range [0, η]. The parameter η may admit values within the range [0, 0.2]. The model is implemented in Java, using the MASON 19 framework. We represented 2000 agents in the model, divided into 1000 buyers and 1000 sellers. We also analysed 500 different parameter sets obtained by systematically searching the parameter space using Sobol numbers [2, 4] and two updating procedures, naive and sophisticated, applied by the agents. For each of these 1000 (2 × 500) parameter sets we additionally ran the base case scenario, with the values of the misvaluation-related parameters set to zero and all other parameters' values being the same. We ran 8 simulations for each parameter set, thus we have explored 16000 simulations in total. We ran 100 simulation steps for each parameter set. In each step buyers and sellers are randomly matched in pairs and make offers (bargain). If the bid price is higher than or equal to the sell price, the trade takes place. After the bargaining process the agents update their strategy and may trade again in the next simulation step.
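The overall simulation loop described above (random pairwise matching, trade whenever the bid is at least the ask, then strategy updating) can be pictured with the following simplified sketch. It is written in Python for illustration only; the published model is implemented in Java with MASON, so every name below is a placeholder.

```python
import random

def run_simulation(sellers, buyers, steps=100):
    """Repeatedly match sellers and buyers at random and let them bargain."""
    trades = []
    for step in range(steps):
        random.shuffle(buyers)                 # random matching in pairs
        for seller, buyer in zip(sellers, buyers):
            ask = seller.quote_ask()           # seller strategy of Sect. 2.2
            bid = buyer.quote_bid()            # buyer strategy of Sect. 2.2
            if bid >= ask:                     # trade takes place
                trades.append((step, ask, bid))
            # both agents revise their behaviour (naive or sophisticated mode)
            seller.update(observed_bid=bid)
            buyer.update(observed_ask=ask)
    return trades
```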
3.2 Results

We first analyse how different forms of misvaluation may influence the trading frequency by means of a linear regression. For this purpose we build the differences between the simulated mean price7 and trading frequency8 in the last simulation step of the scenarios that admit misvaluation and of the corresponding base case scenarios that admit no misvaluation.
Table 1 The effect of misvaluation on trading frequency

Parameter | Estimate | Std. error | t value | Pr(>|t|)
(Intercept) | −0.0508 | 0.0054 | −9.45 | 0.0000
$\gamma_s$ | 0.0080 | 0.0028 | 2.86 | 0.0042
$\gamma_b$ | 0.0833 | 0.0056 | 14.94 | 0.0000
$\delta_b$ | 0.4811 | 0.0056 | 86.51 | 0.0000
$\overline{\delta}_b$ | −0.2420 | 0.0056 | −43.42 | 0.0000
$\delta_s$ | −0.4521 | 0.0056 | −81.11 | 0.0000
$\overline{\delta}_s$ | 0.2116 | 0.0056 | 37.98 | 0.0000
η | 0.0401 | 0.0139 | 2.88 | 0.0040
k | 0.0157 | 0.0056 | 2.82 | 0.0049
β | −0.0259 | 0.0037 | −6.97 | 0.0000
u | 0.0126 | 0.0016 | 7.82 | 0.0000

Table 2 The effect of misvaluation on the mean price

Parameter | Estimate | Std. error | t value | Pr(>|t|)
(Intercept) | 0.0759 | 0.0230 | 3.30 | 0.0010
$\gamma_s$ | −0.0283 | 0.0119 | −2.38 | 0.0175
$\gamma_b$ | −0.0373 | 0.0239 | −1.56 | 0.1183
$\delta_b$ | 0.9470 | 0.0238 | 39.75 | 0.0000
$\overline{\delta}_b$ | −0.5950 | 0.0239 | −24.92 | 0.0000
$\delta_s$ | 1.0913 | 0.0239 | 45.69 | 0.0000
$\overline{\delta}_s$ | −0.5645 | 0.0239 | −23.64 | 0.0000
η | 0.0773 | 0.0596 | 1.30 | 0.1945
k | −0.1456 | 0.0239 | −6.09 | 0.0000
β | 0.1239 | 0.0159 | 7.78 | 0.0000
u | −0.0219 | 0.0069 | −3.19 | 0.0014
The results are presented in Tables 1 and 2. We can observe that for buyers, first order over-valuation increases, while second order over-valuation decreases, both the trading probability and the trading price. For the sellers, first order over-valuation decreases the trading probability but increases the trading price, whereas second order over-valuation has the contrary effect, namely it increases the trading frequency but decreases the mean trading price.
7 Mean price is calculated based on the prices of successful trades.
8 Trading frequency is defined as the ratio of the buyers (or sellers) that successfully traded to the total number of buyers (respectively sellers).
Fig. 1 Trading frequencies and mean price changes for different updating modes, from left to right, naive and sophisticated
Fig. 2 Misvaluation levels changes between first and last simulation step, for sellers and buyers for different updating modes, from left to right, naive and sophisticated
We also compared the results obtained in the last and in the first simulation step for each parameter set (excluding the base case scenarios). The differences between trading frequencies and mean transaction prices for the simulations with different parameter sets are shown in Fig. 1. We can observe (see Fig. 1) that for both the naive and the sophisticated mode the misvaluation and its subsequent updating may lead to significant changes in trading frequencies, but not in trading prices, in most of the scenarios considered. The differences between the misvaluation levels in the last and in the first simulation step for the simulations with different parameter sets are shown in Fig. 2. We can observe (see Fig. 2) that for the naive updating mode the misvaluation updating process is limited. On the other hand, the sophisticated updating of the misvaluation levels, which are measured here with the mean absolute value of the sellers' (respectively buyers') objective second order misvaluation levels ($|\overline{\delta}_s - \delta_b|$ and respectively $|\overline{\delta}_b - \delta_s|$), may lead to a significant decrease or increase of the misvaluation levels, depending on the scenario. In order to verify which parameters may lead to an increase, respectively a decrease, of the misvaluation levels we ran a linear regression analysis. The results are presented in Tables 3 and 4. We can observe for sellers that the higher the sellers' initial misvaluation level (measured here as the mean absolute value of the sellers' objective second order misvaluation level), the more effective is the learning process (the misvaluation level tends to decrease). On the other hand, the higher the buyers' initial misvaluation level, the less effective is the sellers' learning process (the misvaluation level tends to increase). An analogous effect can be observed for the buyers.
Table 3 The effects of parameters on misvaluation levels' change during the updating process for sellers

Parameter | Estimate | Std. error | t value | Pr(>|t|)
(Intercept) | 0.0198 | 0.0062 | 3.20 | 0.0014
$\gamma_s$ | −0.0074 | 0.0036 | −2.04 | 0.0415
$\gamma_b$ | −0.0339 | 0.0073 | −4.67 | 0.0000
$|\overline{\delta}_s - \delta_b|$ | −0.4745 | 0.0088 | −53.65 | 0.0000
$|\overline{\delta}_b - \delta_s|$ | 0.3225 | 0.0088 | 36.52 | 0.0000
η | 0.0448 | 0.0181 | 2.47 | 0.0135
k | −0.0674 | 0.0073 | −9.28 | 0.0000
β | 0.0375 | 0.0048 | 7.76 | 0.0000

Table 4 The effects of parameters on misvaluation levels' change during the updating process for buyers

Parameter | Estimate | Std. error | t value | Pr(>|t|)
(Intercept) | −0.0214 | 0.0059 | −3.65 | 0.0003
$\gamma_s$ | 0.0094 | 0.0034 | 2.75 | 0.0060
$\gamma_b$ | −0.0279 | 0.0069 | −4.06 | 0.0000
$|\overline{\delta}_s - \delta_b|$ | 0.2681 | 0.0084 | 32.06 | 0.0000
$|\overline{\delta}_b - \delta_s|$ | −0.4945 | 0.0083 | −59.22 | 0.0000
η | 0.0365 | 0.0171 | 2.14 | 0.0328
k | 0.0462 | 0.0069 | 6.73 | 0.0000
β | 0.0093 | 0.0046 | 2.03 | 0.0423
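A regression of this kind can be reproduced with standard tooling. The sketch below (ordinary least squares via numpy on placeholder arrays) only illustrates the shape of the analysis behind Tables 3 and 4; it is not the authors' actual analysis script, and the data arrays here are random stand-ins.

```python
import numpy as np

# X: one row per parameter set, columns = (gamma_s, gamma_b, |d_s_bar - d_b|,
# |d_b_bar - d_s|, eta, k, beta); y: change of the sellers' misvaluation level
# between the first and the last simulation step.  Both arrays are placeholders.
X = np.random.rand(1000, 7)
y = np.random.rand(1000)

X1 = np.column_stack([np.ones(len(X)), X])      # add the intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)   # OLS estimates, as reported in Table 3
print(dict(zip(
    ["(Intercept)", "gamma_s", "gamma_b", "abs_ds_bar_db",
     "abs_db_bar_ds", "eta", "k", "beta"], coef.round(4))))
```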
4 Conclusions

It has been well known, since the seminal work of Chatterjee and Samuelson (1983) in [3], that bilateral trade under incomplete information cannot yield at equilibrium all the benefits from trade. We kept up with this basic framework to study the effects of private valuation on the probability of trade and trading prices when the two players, the seller and the buyer, can be erroneous about their own valuation of a good and about the way the other agent may commit a similar error. This phenomenon of subjective error can arise with respect to certain types of goods when all the intrinsic properties of the good cannot yet be known at the moment of trading. Our setting allowed for a handy definition of first-order and second-order misvaluation. In our model, agents can be biased about the ways the other agents are biased, which may count as a specific source of extended or contracted opportunities for trade. Gizatulina and Hellman (2019) in [5], discussing the limits to the no-trade theorem, have shown that if one of the agents puts some slight probability on the other agent being irrational, there appear conditions under which exchange becomes formally possible. Our agents accentuate this feature because they may also be mistaken in the way others can
be mistaken. And we indeed observed that in the case of the buyers being subject to first-order (respectively second-order) misvaluation the trading probability and price significantly increase (respectively decrease). For sellers, on the other side, the first order misvaluation tends to decrease the trading frequency but to increase the trading price in a statistically significant way. The second order misvaluation has the opposite effect.
References

1. Acemoğlu, D., Como, G., Fagnani, F., Ozdaglar, A.: Opinion fluctuations and disagreement in social networks. Math. Oper. Res. 38(1), 1–27 (2013)
2. Bratley, P., Fox, B.L.: Algorithm 659: implementing Sobol's quasirandom sequence generator. ACM Trans. Math. Softw. (TOMS) 14(1), 88–100 (1988)
3. Chatterjee, K., Samuelson, W.: Bargaining under incomplete information. Oper. Res. 31(5), 835–851 (1983)
4. Christophe, D., Petr, S.: randtoolbox: generating and testing random numbers. Vienna, Austria (2014)
5. Gizatulina, A., Hellman, Z.: No trade and yes trade theorems for heterogeneous priors. J. Econ. Theory 182, 161–184 (2019)
6. Myerson, R.B., Satterthwaite, M.A.: Efficient mechanisms for bilateral trading. J. Econ. Theory 29(2), 265–281 (1983)
7. Squintani, F.: Mistaken self-perception and equilibrium. Econ. Theory 27(3), 615–641 (2006)
Consumer Participation in Demand Response Programs: Development of a Consumat-Based Toy Model

Judith Schwarzer and Dominik Engel

Centre for Secure Energy Informatics, Salzburg University of Applied Sciences, Urstein Sued 1, A-5412 Puch, Salzburg, Austria
Abstract Modeling of the smart grid architecture and its subsystems is a basic requirement for the success of these new technologies to address climate change effects. For a comprehensive research especially on effects of demand response systems, the integration of consumers' decisions and interactions is essential. To model consumer participation in demand response programs this paper introduces an agent-based approach using the Consumat framework. The implementation in NetLogo provides high scalability and flexibility concerning input parameters and can easily interact with other simulation frameworks. It also forms a possible basis for an overall demand response consumer model. As a so-called toy model, simple correlations in this socio-technical scenario can already be explored.

Keywords Consumat · Demand response · Toy model · Agent
1 Introduction

The establishment of demand response systems as a key application of smart grid architectures represents one of the most important measures to address climate change effects. The corresponding technologies have to be enabled in the residential sector to meet the European targets for a reduction of greenhouse gas emissions by 2030 (40% compared to 1990) and a greater share of renewable energy of at least 27% [1]. Demand response in this context refers to "changes in electric usage by end-use customers from their normal consumption patterns in response to changes in the price of electricity over time, or to incentive payments designed to induce lower electricity use at times of high wholesale market prices or when system reliability is jeopardized" [2]. Based on data from the US energy market (2014), demand response in the residential sector contributes 20% of the total peak demand savings and 61% of the overall energy savings [3].
Fig. 1 Consumer decisions in the context of demand response
As shown in, e.g., [4, 5], the success of a demand response program essentially depends on the end consumers’ participation and their behavior when configuring and using a DR system. Based on an own comprehensive structural analysis of the corresponding complex socio-technical system [6] Fig. 1 gives an overview on relevant consumer decisions in this context. A simulation model that integrates all these aspects would be very helpful to support the deployment of a new energy infrastructure. Analyzing such sociotechnical systems is a major research field in social sciences and agent-based models can be considered as a preferred simulation tool (see, e.g., [7, 8, 9]). A general concept to model the consumers’ behavior was developed by our group and presented in [10] focusing on the long/mid-term decision concerning general participation in a demand response program (see Fig. 1). This work is based on the Consumat framework of Jager and Janssen first published in [11]. Several other publications already exist which use this approach to model sustainable behaviors but also other types of decision making like farmer crop choices (see, e.g., [12, 13]). Based on [14] also the perspective of innovation diffusion and transition theories (e.g., Rogers theory on innovation diffusion, as cited in [14]) can be considered with Consumat. The aim of this work is to extend and refine the existing approach and to implement it in a simulation environment. As a so-called toy model, it may support the finding of simple correlations in the complex socio-technical demand response system and provides the basis for further implementation in an overall socio-technical demand response consumer model. The following Sect. 2 first introduces relevant demand response knowledge especially concerning the role of consumers. After a brief general overview on human decisions in agent-based models, in Sect. 2.2 the Consumat framework is shortly described and some outcomes of relevant existing implementations are presented. The ODD+D based model description and results of the implementation can be found in the following two main Sects. 3 and 4. A summary and an outlook on future work is given in the final Sect. 5.
2 Related Work

The section on the theoretical background of this work is divided in two parts: (1) a short review on demand response models with focus on the role of consumers and (2) an introduction to the Consumat framework including some background knowledge on human decision making in agent-based models.
2.1 The Role of Consumers in Demand Response Models There exists a huge amount of publications about demand response systems and corresponding models to simulate their efficiency and their role in future energy systems. An overview is, e.g., given in [15]. The authors classify demand response programs into different categories based on classifiers like control mechanism and motivations offered to consumers. The latter one includes different pricing schemes (pricebased or incentive-based) as strategies to motivate consumers to the desired behavior related to demand response. Potential actions are reduction or time shifting of electricity usage, known as direct load control and reducing the peak to average ratio, respectively. Load management can be performed either in a multi-user scenario, where the schedules of energy consumption will be optimized for a group of users (see, e.g., [16]) or in a single-user scenarios (see, e.g., [17]). Considering the large number of contributions on demand response, the following algorithm classes are frequently employed: game theory, linear programming, particle swarm optimization, arrival processes and multi-agent based models. In our meta-analysis evaluating the data communication requirements of common demand response models a more detailed overview can be found [18]. Demand response decisions are not made by the consumers in a case-to-case-manner but usually an algorithm implemented in a technical demand response system optimizes the performed actions. Nevertheless, corresponding modeling and simulation approaches in most cases require presumptions concerning consumers’ decisions and behavior. There are some proposals, which explicitly integrate this perspective. As one of the relevant aspects, the preferences of optimal appliance scheduling are one focus of the approaches presented, e.g., in [19, 20]. In most of the considered studies related to consumers in demand response scenarios the main focus is on dynamic short-term behavior concerning load management itself. Regarding the relevance of the mid/long-term decision to even participate in such a program Miller et al. [5] show the high impact of humans’ decision to participate in a direct load control program. This finding could be confirmed by our own simulations where the role of user interaction and acceptance for a cloud-based DR model has been investigated [4]. It was found that the number of participating users has a strong effect on cost cutting for a certain load reduction. Within this setup the user acceptance did not increase with more configuration options and higher amount of possible user interactions. In order to avoid complex configuration of a demand
response system with autonomous appliance scheduling, as, e.g., proposed in [21], there is no need of user interaction. In this model, time of use probabilities of the appliances will be learned automatically from energy consumption patterns under varying weather conditions, day of week, etc. The method proposed in [22] also uses such a forecasting approach.
2.2 The Consumat Approach

Human decisions in agent-based models. Agent-based models address a wide range of simulation challenges in very different research areas. They are used both for social simulations and for models focusing on technical aspects. This is possible due to the generic characteristic of multi-agent systems: they are particularly suited for situations characterized by autonomous entities whose actions and interactions determine the overall system [23]. For simulating human systems with agent-based modeling, Bonabeau states the following three benefits in [8]: (1) it captures emergent phenomena, (2) provides a natural description of the system and (3) is flexible. In general, agent-based modeling has been considered as a promising methodology for social science research in the last two decades (see, e.g., [24]). Different frameworks exist to integrate the process of human decision making in agent-based models. They differ in aspects like level of complexity, research questions that may be answered with their help and psychological background. In [25] five main dimensions are distinguished to classify human agent architectures:

• Cognitive level (reactive, deliberative…)
• Affective level (representation of emotions)
• Social level (representation of complex social concepts, status…)
• Norm consideration (agents' ability to reason about social and formal norms)
• Support of learning.
Using these description categories, the Consumat approach, which is used in this project, simulates reactive/deliberative agents, who are able to consider values and morality on the affective level. Their social focus is on success comparison with others. Consumat agents are able to learn and norms may be represented as model input parameters. Background to Consumat and related implementations. Consumat is a sociopsychological framework which allows the agent-based simulation of human decision making in situations related to consumption of goods or opportunities such as doing a specific activity, deciding where to live, and others. Details of the model and its updates as well as the underlying theoretical background can be found in [11, 26, 27, 28]. In [12] different applications of the Consumat approach are discussed. Some results related to consumer behavior are briefly described below:
Household lighting. Based on a Consumat model the purchase decision concerning lighting technology were simulated in order to explore different policies for an increased market share of LED lamps. The observed behavior show that the pure appearance of a new product on the market does not strongly influence the consumers’ decision but additional incentives do. Diffusion of electric car. Similar simulations were made to investigate the diffusion of electric cars using policies, such as taxing fuel cars and subsidising electric cars. The results generally show the slowness of that process and indicate the high relevance of an optimal mixture and timing of different policies. The developers of the Consumat framework used their approach to model the diffusion of green products with low environmental impacts simulating the behavior of both, consumers and firms [14]. The results represent the high relevance of social interactions and also reproduce empirical data. Also in [29] scenarios of green consumption are modeled with Consumat. The authors mainly explored effects of increasing prices for non-green products and an increasing environmental awareness of the consumers. Studies like [13, 30] confirm the suitability of the Consumat framework to analyse and optimize policies and other measures to improve the market share of green products and services. In [10] the general concept of modeling demand response consumers as ‘consumats’ is already presented. Relevant details of the framework related to its application in this socio-technical environment will be considered within the model description (see Sect. 3.2).
3 Demand Response Consumers as 'Consumats': Model Description

In this section, the application of the Consumat model to simulate the decision of consumers concerning participation in a demand response program is presented. The model description is based on ODD+D, which extends the original Overview, Design Concepts and Detail (ODD) protocol with human decision-making aspects [31].
3.1 Overview

Purpose. This model aims to represent the decision-making of consumers to generally participate in demand response programs. Depending on the selected decision strategy, other agents' behavior may be integrated into the corresponding cognitive process. The model was created to prove the suitability of the Consumat framework within this context and to identify relevant dependencies of the variables for further research.
Entities, state variables and scales. The model includes agents representing demand response consumers and the human and natural environment. Agents can decide to generally participate in demand response programs or not. They are characterized by individual levels of need satisfaction concerning their financial, personal and social state. The discrete time steps of the model are called ticks. Typical time scales for one tick can be daily to weekly. Process overview and scheduling. With each tick, agents make their decision concerning participation and all attributes and parameters will be updated.
3.2 Design Concepts

Theoretical and empirical background. The agents' decision-making is based on the Consumat approach due to its ability to simulate consumers' behavior in different domains including social aspects (see Sect. 2.2). The simulated consumers have existential, social and personal needs and they are equipped with abilities and opportunities to satisfy these needs with a certain behavior. The corresponding decision strategies are:

• Optimization: maximize the level of need satisfaction (LNS) based on own calculations
• Inquiring: check behavior of peers, compare possible imitation of their decision with own calculations to decide for maximum LNS
• Repetition: repeat decision of last tick
• Imitation: copy last behavior of peers (peers: agents with similar attributes).

The Consumat approach also integrates uncertainty of an agent as a relevant factor for decision making, which leads to the following key rules for the engagement in a specific cognitive process [28]:

• with decreasing satisfaction, an agent accepts more effort to find the optimal behavioral option
• with increasing uncertainty, the behavior of other agents becomes more relevant.

Figure 2 illustrates the adaption of the underlying Consumat model (see [26, 28]) to the decision behavior of a demand response consumer. Based on own results published in [4, 10, 32, 33], the following driving forces on the micro level were identified and implemented:

• Needs: financial state, personal state (comfort and environment), social state
• Opportunities and abilities: participation in demand response program
• Uncertainty.

More detailed descriptions of individual decision making will follow in the subsection below.
Fig. 2 Conceptual demand response consumer model based on Consumat
Interactions and individual decision-making, sensing and prediction. In the original model, the engagement in one of the four decision rules depends on the current uncertainty of the agent and its level of need satisfaction (LNS). In [26] uncertainty is described as the difference between expectations and the real outcome of an action. The updated version of the Consumat framework [28] directly couples uncertainty to the existence and social needs. With Consumat II, different uncertainties concerning the several needs may have different weights within the overall uncertainty. However, the authors of [13] state that householders rather consider inconsistencies between needs and their satisfaction levels than perform a statistical evaluation of uncertainty. Due to the similarity of the research domain (residential energy efficiency), we transfer this criteria-based approach and define the following rules to select the suitable decision mode within the model:

• For each of the three needs (financial, social, personal) a threshold of the LNS is defined as a criterion which is met or not
• The overall satisfaction and uncertainty of an agent depend on the fulfillment of these criteria
• The selection of a decision strategy is based on satisfaction and uncertainty following the assumptions of the original Consumat approach (see also Fig. 2).

This leads to the following logic (see Table 1).

Heterogeneity, stochasticity and observations. For most of the agent's own parameters, both a global and an individual randomized configuration is possible. For details on the different performed simulation runs and the output data analysis, see Sect. 4.
Table 1 Criteria-based agent's decision logic

No. of fulfilled criteria | 0 | 1 | 2 | 3
Level of satisfaction | Unsatisfied | Unsatisfied | Satisfied | Satisfied
Level of uncertainty | Certain | Uncertain | Uncertain | Certain
Decision strategy | Optimization | Inquiry | Imitation | Repetition
3.3 Details

Implementation, initialisation and data input. The model is implemented in NetLogo version 6.0.4. The presented code is roughly based on an existing Consumat implementation [34] and available at https://www.en-trust.at/downloads/. An initial setup procedure activates all parameters. If heterogeneity/variability is activated, it calculates the individual variables within configurable ranges. Although the current version of the model does not integrate import of data from external files, this option could be easily included.

Submodels. Within this section model parameters and submodels are described.

Model parameters. Table 2 gives an overview on the relevant parameters used in the model.

Table 2 Model parameters

Variable | Scope | Range/condition | Explanation
Number of agents | Global | Natural number | Configurable during setup
Initial participants | Global | Percent of number of agents | Configurable during setup
$DR_{income}$ | Global | 0…1 | Profit of participation
Initial participation | Individual | 0 or 1 |
$\gamma_{need}$ | Individual | 0…1, with $\gamma_{comfort} + \gamma_{environ} = 1$, $\gamma_{similar} + \gamma_{superior} = 1$ | Weight of a certain need, randomly assigned during setup

Needs. At each tick an agent calculates its overall level of need satisfaction (LNS) as a sum of the existential (financial), personal and social need:

$$LNS = LNS_{fin} + LNS_{pers} + LNS_{soc} \qquad (1)$$

$$LNS_{fin} = participation \cdot DR_{income} \qquad (2)$$

with

$$LNS_{pers} = LNS_{environment} + LNS_{comfort} = \begin{cases} \gamma_{comfort}, & participation = 0 \\ \gamma_{environ}, & participation = 1 \end{cases} \qquad (3)$$

$$LNS_{soc} = \gamma_{similar} \cdot need_{similar} + \gamma_{superior} \cdot need_{superior} \qquad (4)$$
The level of social need satisfaction is composed of the agent's need of being similar respectively superior compared to its peers/other agents, balanced by the individual weighting factors $\gamma_{similar}$ and $\gamma_{superior}$:

$$need_{similar} = 1 - \left|\, participation_{own} - \mathrm{mean}(participation_{peers}) \,\right| \qquad (5)$$

$$need_{superior} = \left|\, participation_{own} - \mathrm{mean}(participation_{all}) \,\right| \qquad (6)$$
Agent’s behavior. In order to improve its individual level of need satisfaction, an agent evaluates its participation decision at each tick. The underlying strategy for the new decision is based on the fulfillment of three criteria concerning thresholds for the financial, personal and social need satisfaction (see Table 1). Each of these LNS values is always between 0 and 1. Due to the fact that individual preferences are already represented by the weighting factors the criteria are defined as fulfilled when the LNS value of the need is above or equal 0.5. Nevertheless, the model is also suitable for individual and variable threshold settings.
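A compact way to express Eqs. 1–6 together with the decision logic of Table 1 is sketched below. The actual model is written in NetLogo, so this Python fragment is only an illustration of the submodel; all names are chosen here for the sketch, and the threshold of 0.5 follows the default described above.

```python
def level_of_need_satisfaction(participation, dr_income, w, peers, everyone):
    """Eqs. 1-6: financial, personal and social need satisfaction of one agent."""
    lns_fin = participation * dr_income                                   # Eq. 2
    lns_pers = w["environ"] if participation == 1 else w["comfort"]       # Eq. 3
    need_similar = 1 - abs(participation - sum(peers) / len(peers))       # Eq. 5
    need_superior = abs(participation - sum(everyone) / len(everyone))    # Eq. 6
    lns_soc = w["similar"] * need_similar + w["superior"] * need_superior # Eq. 4
    return lns_fin, lns_pers, lns_soc

def decision_strategy(lns_fin, lns_pers, lns_soc, threshold=0.5):
    """Table 1: the number of fulfilled criteria selects the Consumat strategy."""
    fulfilled = sum(v >= threshold for v in (lns_fin, lns_pers, lns_soc))
    return ["optimization", "inquiry", "imitation", "repetition"][fulfilled]

# Example: a participating agent with DR income 0.2, four peers and ten agents in total
w = {"comfort": 0.5, "environ": 0.5, "similar": 0.6, "superior": 0.4}
lns = level_of_need_satisfaction(1, 0.2, w, [1, 0, 1, 1], [1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
print(decision_strategy(*lns))  # two criteria fulfilled -> "imitation"
```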
4 Results

To check the general suitability and logical correctness of the model, two main aspects were investigated: (1) variation of the agents' general behavior in time and (2) influence of varying input parameters on the participation decision.
4.1 Variation in Time

Figure 3 exemplarily shows the participation behavior of 500 consumers over 30 time steps as provided by the NetLogo interface tab. The DR incentive was set to DRincome = 0.2, during initialization 25% of agents were configured with participation = YES, and the individual weights of the needs were randomly distributed (0…1). With this parameter setup, the percentage of agent participation and the related decision strategies are already stable after five ticks of the simulation run. Other parameter configurations also show this short warm-up period.
Fig. 3 Variation of behavior in time
4.2 Varying Input Parameters

The NetLogo tool "BehaviorSpace" allows running a model systematically with varying parameter settings and reporting selected variables after each run. Using this tool, a broad range of value combinations were simulated and analyzed. Figure 4 exemplarily shows the percentage of agent participation depending on the DR income under several conditions (varying weights of the needs and different initial participation distributions). Each simulation run was performed five times with identical settings and the measured value reported after 30 ticks (see warm-up period in Fig. 3). The graphs visualize logical effects of increasing income (higher participation rates) but also the importance of different weights of the agent's needs. In the example, a high importance of comfort needs (compared to environmental needs) has a high influence on participation, especially when the initial participation is low (see the two lower graphs: no participation when income and/or initial participation is low). All graphs show a broad distribution of the results for an initial participation of 50%. Additional simulation runs in this parameter range confirmed a very high sensitivity of the final participation to the randomized initial settings.

Fig. 4 Participation of 500 agents depending on DRincome, initial participation and weights
5 Conclusion

This work presents the development and implementation of an agent-based model of consumer participation in demand response programs based on the Consumat approach. At the current state of the project, the model provides first reproducible results for a large variety of parameter settings. As a so-called toy model, it can already be used to find relevant correlations. Due to the simple and scalable parameter definitions, the model can easily be calibrated and validated based on empirical data. Additionally, considering the general participation in DR programs as a diffusion process offers the application of corresponding innovation and transition theories (for an overview see, e.g., [35]). The model itself is scalable and can be extended by an additional logic considering the short-term aspects of consumers' interactions in the context of demand response (see Fig. 1). The underlying NetLogo tool allows interaction with other simulation frameworks like, e.g., mosaik. Future work will focus on two aspects: (1) improve, refine and validate the Consumat approach to model consumer participation in demand response programs and (2) integrate it in an overall model of consumer decisions in the demand response context. In a final state the model should provide quantifiable results for the optimal adjustment of incentives both for general participation and short-term energy price adaptions.
References 1. European Commission: COM(2014) 15 final: A policy framework for climate and energy in the period from 2020 to 2030, no. 2014, pp. 1–18 (2012) 2. Federal Energy Regulatory Commission: Assessment of Demand Response & Advanced Metering (2006) 3. Comstock, O.: Demand response saves electricity during times of high demand (2016). https:// www.eia.gov/todayinenergy/detail.php?id=24872. Accessed 10 Jun 2020 4. Schwarzer, J., Kiefel, A., Engel, D.: The role of user interaction and acceptance in a cloudbased demand response model. In: IECON Proceedings (Industrial Electronics Conference) (2013) 5. Miller, M.Z., Griendling, K., Mavris, D.N.: Exploring human factors effects in the Smart Grid system of systems Demand Response. In: 2012 7th International Conference on System of Systems Engineering (SoSE), pp. 1–6 (2012) 6. Schwarzer, J., Engel, D., Lehnhoff, S.: Conceptual design of an agent-based socio-technical demand response consumer model. In: International Conference on Industrial Informatics, pp. 680–685 (2018)
7. Moglia, M., Cook, S., McGregor, J.: A review of agent-based modelling of technology diffusion with special reference to residential energy efficiency. Sustain. Cities Soc. 31, 173–182 (2017) 8. Bonabeau, E.: Agent-Based Modeling: Methods and Techniques for Simulating Human Systems, vol. 99, pp. 7280–7287 (2002) 9. Le Page, C., Bazile, D., Becu, N., Bommel, P.: Agent-based modelling and simulation applied to environmental management. In: Edmonds, B., Meyer, R. (Eds.) Simulating Social Complexity. Springer (2013) 10. Schwarzer, J., Engel, D.: Agent-based modeling of consumer participation in demand response programs with the consumat framework. In: Abstracts from the 9th DACH+ Conference on Energy Informatics, vol. 3, no. 27, pp. 13–15 (2020) 11. Jager, W., Janssen, M.A., Vlek, C.A.J.: Consumats in a commons dilemma: testing the behavioural rules of simulated consumers (1999) 12. Schaat, S., Jager, W., Dickert, S.: Psychologically plausible models in agent-based simulations of sustainable behavior. In: Alonso-Betanzos, A., Sánchez-Maroño, N., Fontenla-Romero, O., Polhill, J.G., Craig, T., Bajo, J., Corchado, J.M. (eds.) Agent-Based Modeling of Sustainable Behaviors, pp. 1–25. Springer International Publishing, Cham (2017) 13. Moglia, M., Podkalicka, A., Mcgregor, J.: An agent-based model of residential energy efficiency adoption. J. Artif. Soc. Soc. Simul. 21(3), 26 (2018) 14. Janssen, M.A., Jager, W.: Stimulating diffusion of green products—co-evolution between firms and consumers. J. Evol. Econ. 12(3), 283–306 (2002) 15. Vardakas, J.S., Zorba, N., Verikoukis, C.V.: A survey on demand response programs in smart grids: pricing methods and optimization algorithms. IEEE Commun. Surv. Tutorials 17(1), 152–178 (2015) 16. Kim, H., Kim, Y.J., Yang, K., Thottan, M.: Cloud-based demand response for smart grid: Architecture and distributed algorithms. In: 2011 IEEE International Conference on Smart Grid Communication, pp. 398–403 (2011) 17. Barbato, A., Capone, A., Carello, G., Delfanti, M., Merlo, M., Zaminga, A.: House energy demand optimization in single and multi-user scenarios. In: 2011 IEEE International Conference on Smart Grid Communication 2011, pp. 345–350 (2011) 18. Schwarzer, J., Engel, D.: Evaluation of data communication requirements for common demand response models. Proc. IEEE Int. Conf. Ind. Technol. (ICIT) 2015, 1311–1316 (2015) 19. Li, N., Chen, L., Low, S.H.: Optimal demand response based on utility maximization in power networks. IEEE Power Energy Society General Meeting (2011) 20. Seetharam, D., Bapat, T., Sengupta, N., Ghai, S.K., Shrinivasan, Y.B., Arya, V.: User-sensitive scheduling of home appliances, p. 43 (2011) 21. Adika, C.W.L.: Autonomous appliance scheduling for household energy management. IEEE Trans. Smart Grid 5(2), 673–682 (2014) 22. Barbato, A., Capone, A., Rodolfi, M., Tagliaferri, D.: Forecasting the usage of household appliances through power meter sensors for demand management in the smart grid. In: 2011 IEEE International Conference on Smart Grid Communication, pp. 404–409 (2011) 23. Bandini, S., Manzoni, S., Vizzari, G.: Agent based modeling and simulation: an informatics perspective. J. Artif. Soc. Soc. Simul. 12(4), 4 (2009) 24. Janssen, M., Ostrom, E.: Empirically based, agent-based models. Ecol. Soc. 11(2) (2006) 25. Balke, T., Gilbert, N.: How do agents make decisions ? A survey introduction : purpose & goals dimensions of comparison production rule systems, vol. 17, no. 2014, pp. 1–30 (2014) 26. Jager, W.: Modelling consumer behavior (2000) 27. 
Jager, W., Janssen, M.A., De Vries, H.J.M., De Greef, J., Vlek, C.A.J.: Behaviour in commons dilemmas: homo economicus and Homo psychologicus in an ecological-economic model. Ecol. Econ. 35(3), 357–379 (2000) 28. Jager, W., Janssen, M.: An updated conceptual framework for integrated modeling of human decision making: the Consumat II. ECCS 2012, 10 (2012) 29. Bravo, G., Vallino, E., Cerutti, A.K., Pairotti, M.B.: Alternative scenarios of green consumption in Italy: an empirically grounded model. Environ. Model. Softw. 47(256), 225–234 (2013)
30. Natalini, D., Bravo, G.: Encouraging sustainable transport choices in American households: results from an empirically grounded agent-based model. Sustain. 6(1), 50–69 (2014) 31. Müller, B., et al.: Describing human decisions in agent-based models—ODD+D, an extension of the ODD protocol. Environ. Model. Softw. 48, 37–48 (2013) 32. Schwarzer, J., Engel, D., Lehnhoff, S.: Conceptual design of an agent-based socio-technical demand response consumer model. In: Proceedings—IEEE 16th International Conference on Industrial Informatics, INDIN 2018 (2018) 33. Fredersdorf, F., Schwarzer, J., Engel, D.: Die Sicht der Endanwender im Smart Meter Datenschutz. Datenschutz und Datensicherheit - DuD 39(10), 682–686 (2015) 34. Janssen, M., Jager, W.: Lakeland 2 (2017). https://www.comses.net/codebases/5793/releases/ 1.0.0/%0A 35. Bergman, N., Haxeltine, A., Whitmarsh, L., Köhler, J., Schilperoord, M., Rotmans, J.: Modelling socio-technical transition patterns and pathways. J. Artif. Soc. Soc. Simul. 11(3), 7 (2008)
Using MBTI Agents to Simulate Human Behavior in a Work Context

Luiz Fernando Braz and Jaime Simão Sichman

Laboratório de Técnicas Inteligentes (LTI), Escola Politécnica (EP), Universidade de São Paulo (USP), São Paulo, Brazil
Abstract The use of agent-based simulations to study human behavior has provided a significant advance in a better understanding of the impact of the human factor in different contexts, including work. In this sense, instruments such as the Myers–Briggs Type Indicator (MBTI), which allows categorizing different individuals' personality types following their characteristics and behavioral preferences, can provide a significant advance in these studies. In this work, we intend to explore MBTI to simulate different psychological types described in the theory, in order to evolve the comprehension of human factors in organizations.

Keywords Multiagent based simulation · MABS · BDI agents · MBTI · Human behavior
1 Introduction

The study of human behavior has been the topic of different research areas over the years. In particular, some studies proposed methods and models to identify similarity patterns in order to categorize behavioral preferences of the human being [1]. In this sense, Organizational and Work Psychology have been highlighted in studies of human relations with work, allowing the understanding of the influence of the human factor in organizations: this was done by studying problems such as performance, health at work, work-life quality, the impact of employment and working conditions on human life, among others [2]. The Myers–Briggs Type Indicator (MBTI) is an instrument developed in 1942 by Isabel Myers and her mother Katherine Briggs that seeks, based on Jungian psychology, to characterize individuals following certain preferences and behavioral char-
acteristics. Being a dynamic method not associated with a single knowledge area, it has been used in different research fields, for example in Psychotherapy, education, and career counseling. Among them, its widespread use by organizations stands out being generally applied to different work situations such as improving communication, dealing with conflict, enhancing problem-solving and decision making, and many others [3]. In this work, we intend to explore MBTI to simulate different psychological types described in the theory; for this purpose, we developed different agent-based simulations, where different behaviors can be observed in a particular work context. The rest of the paper is structured as follows: Sect. 2 analyzes human behavior in a organizational environment, shows how MBTI can support the categorization of certain behavioral patterns and introduces a multi-attribute decision making (MADM) method to support the decision-making process [4]. Section 3 presents a model for representing behaviors in a work context and explores how MBTI and MADM could be used jointly in an agent-based simulation to improve decision making. In the sequence, Sect. 4 shows our preliminary experiments and the results obtained. Finally, Sect. 5 presents our conclusions and future work.
2 Human Behavior in a Work Context Psychology indicates that each individual is unique, having particularities, characteristics, and preferences that distinguish each one from other individuals. The emotions, perceptions, and motivations lead people to take particular paths, considering experiences lived throughout their lives thus building a complex network of relationships that make an individual probably undecipherable in its essence. Despite this, individuals also share many similarities, are generally excited when achieving a goal, and are sad when feeling the loss of a loved one, among other particularities [5]. Individuals also often have their lives linked to work and organizations. In order to achieve their own goals, they need to contribute so that others also can achieve their goals, thus integrating an environment of mutual interest [5], in which attitudes and behaviors can have a high impact on the achieved outcomes. In this sense, it is noted that organizational performance is intrinsically linked to the performance of its employees, who are influenced by factors associated with the environment, their skills and motivations, expressing their actions in the work environment [6]. Currently, business needs are also becoming increasingly complex, making companies look for more efficient ways to increase their competitive potential. Modern organizations have sought to improve their organizational processes based on good practices in order to be competitive [7]. In this sense, research results have concluded that work teams, made up of people with unique characteristics, behaviors, and particularities, are the primary unit of the performance impact in organizations [6] and knowing how to interpret and respect the needs of each one, accepting differences, and trying
to ensure that these differences can be respected, is one of the most important factors for building high-performance teams [8].
2.1 MBTI and Its Application in a Work Context MBTI has been one of the most used instruments over the last decades to bring a better understanding of factors associated with the personality characteristics of individuals. Its wide use in several areas has allowed the constant evolution of the instrument that aims to categorize the differences in personality types of individuals identified through behavioral preferences [9]. These preferences, composing opposite poles, form dichotomies, which make it possible to analyze four different opposing domains that describe the way individuals perceive and react with the environment around them [3]. MBTI dichotomies The four dichotomies present in the model are the following: • Extraversion-Introversion (E-I): The first dichotomy seeks to describe how individuals who prefer Extraversion or Introversion tend to react to the environment. Individuals with a preference for extraversion like to interact with other people and take actions, directing their energies and attention to the outside world [9]. They are generally characterized by people who prefer to be sociable and expressive, like to communicate through speech and have the initiative in relationships and at work. Individuals with a preference for introversion are the opposite, they prefer to direct their energies internally and are often characterized by liking privacy, reflection, and communicating through other means [10]. • Sensing-Intuition (S-N): The second dichotomy, Sensing-Intuition explains how individuals seek to find information through their interactions with the environment. Sensing individuals prefer to interact with the world around them seeking to collect information that is real and tangible, that is, they tend to be practical, focus on what is real and current [9], and in other words, seek short-term gains. Intuition individuals tend to build a more abstract view of the world around them [10], focusing on less obvious paths and possibilities that can be reflected in long-term gains [3]. • Thinking-Feeling (T-F): In this dichotomy, it is explained two different and opposite ways in making judgments by individuals [3], that is, two opposite ways in making a decision [11]. Thinking Individuals seek to make decisions based on logical factors, in which they analyze the consequences of these choices or actions. They are generally characterized by having an analytical and rational profile [9], making impersonal decisions based on logical reasoning [3]. On the other hand, feeling individuals tend to also consider what is, or could be important for other individuals as well, analyzing possible impacts that their own decisions may have on other people’s goals [9]. • Judging-Perceiving (J-P): The last dichotomy seeks to analyze how individuals tend to deal with the outside world [3]. People with a preference for Judging like
to have a well-defined and under-control plan, tend to be methodical and like to have things decided and without last-minute changes [9]. On the other hand, perceiving individuals tend to be more spontaneous, looking for flexible paths and open to change, identifying opportunities that may arise, and trying not to control the world around them [10].

Personality types. By using these dichotomies, individuals are then categorized into personality types derived from the combinations, thus identifying a preference for each dichotomy and thereby forming 16 distinct personality types [3], as shown in Table 1. As an example, an INTJ agent (third row, third column) represents an introverted, intuitive, thinking and judging agent.

Table 1 All personality types from MBTI (adapted from [9])

 | Sensing (S), Thinking (T) | Sensing (S), Feeling (F) | iNtuitive (N), Thinking (T) | iNtuitive (N), Feeling (F)
Extraverts (E), Judging (J) | ESTJ | ESFJ | ENTJ | ENFJ
Extraverts (E), Perceiving (P) | ESTP | ESFP | ENTP | ENFP
Introverts (I), Judging (J) | ISTJ | ISFJ | INTJ | INFJ
Introverts (I), Perceiving (P) | ISTP | ISFP | INTP | INFP
2.2 MBTI and BDI Agents

In [10, 12] a study is developed that seeks to extend the MBTI to agent-based simulations. Through concepts derived from the personality types described in the MBTI, the authors propose a framework that allows representing characteristics, behaviors, and the decision-making process of individuals in computational agents. The proposed model is based on an extension of the BDI architecture (Belief-Desire-Intention), which has been widely used for human modeling [13, 14] and social simulation to represent complex reasoning [15]. It is then possible to incorporate these particularities into the agents, thus defining behavioral characteristics that will be derived from the beliefs, desires, and intentions present in the framework. The context used by the authors is based on a scenario with simple tasks, in which agents distributed in the environment collect spread food items and have the task of stacking these items at predetermined locations. For each step, the agents' behavior is based on the BDI architecture, where the agents sense the environment, decide what to do, and then execute their decision. The proposed scenario, although simple, allows observing different paths taken by agents and different decisions made according to their personalities. In the model, the authors represent the particularities of each dichotomy following aspects described in the MBTI. The E-I dichotomy, for instance, is represented by the process of interaction of agents with other agents, in which extraverted agents
gain energy when they are close to other agents and lose energy when they are alone. The opposite occurs with introverted agents, who lose energy when they are close to other agents and gain when they are alone. For the S-N dichotomy, aspects related to short and long-term gains are considered. Sensing agents focus on the short term, for example considering the shortest possible distance to its target, while intuitive agents focus on the big picture, sometimes looking for more distant targets that could represent long-term gains. The T-F dichotomy is represented as the component to evaluate the sensor input, that is, for example, a feeling agent will consider the locations of other agents and will try to predict what they might do, so it will consider the priorities of others and not only its own. On the other hand, thinking agents will ignore inputs from other agents. Finally, the model describes J-P dichotomy being used to assess an agents’ level of commitment to its current plan and intention. A judging agent will be highly committed to its intentions, however, a perceiving agent will be able to continually re-analyze its intentions, making it open to new perspectives and alternatives that may come in its way. In our work, we seek to apply these concepts, and adapt the framework to be used in a scenario that also considers organizational aspects, as presented in Sect. 3. It is essential to mention that this work does not aim to assess whether a personality type is better or worse for performing tasks and that it should not be used as a selection tool, which would even represent a misuse of MBTI [16]. The objective is to observe how artificial agents, modeled with some characteristics derived from the MBTI, behave within a simple scenario developed strictly for the experiments conducted, in which the interpretation of the results must be restricted only within the scope of the model developed in this work.
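One way to encode the four dichotomies summarised above as agent parameters is sketched below. This is not the framework of [10, 12] itself, only an illustrative Python rendering of the behavioural mappings just described; every name is chosen here for the sketch.

```python
from dataclasses import dataclass

@dataclass
class MBTIProfile:
    extraverted: bool  # E-I: gain energy near others (E) or when alone (I)
    sensing: bool      # S-N: prefer nearby, short-term targets (S) or distant ones (N)
    thinking: bool     # T-F: ignore (T) or anticipate (F) other agents' priorities
    judging: bool      # J-P: stay committed to the current intention (J) or re-plan (P)

    def energy_change(self, neighbours_nearby: bool) -> float:
        # E-I dichotomy: extraverts gain energy close to others, introverts lose it.
        gain = 1.0 if neighbours_nearby else -1.0
        return gain if self.extraverted else -gain

    def keeps_current_intention(self) -> bool:
        # J-P dichotomy: judging agents stick to the plan, perceiving agents reconsider.
        return self.judging

agent = MBTIProfile(extraverted=False, sensing=False, thinking=True, judging=True)  # an INTJ-like profile
print(agent.energy_change(neighbours_nearby=True))  # -1.0: an introvert loses energy in a crowd
```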
2.3 MADM Method
We used a multi-attribute decision making (MADM) method to help the agents evaluate a set of different alternatives and rank the possible actions to be taken according to prioritization criteria [4]. Basically, the method expresses these choices in a matrix where possible alternatives are compared along different attributes that affect their utility. The method defines two kinds of attributes and a normalization procedure:
• Cost and benefit attributes: In a MADM method, attributes are generally classified into two categories: cost attributes and benefit attributes. For benefit attributes, the higher the value, the higher the performance score; for cost attributes, on the other hand, the lower the cost, the higher the score.
• Normalization procedure: A MADM method can be used with several attributes, regardless of their numerical scale. In this case, procedures should be applied to normalize performance scores so that units of measure at different scales can be compared together. In this work, we used the Linear Scale Transformation-Max (LST-Max) technique, which provides a simple normalization procedure [4]. Four
steps are provided in the method in order to define the ranking of the alternatives: 1. Define the preferred performance rating for each attribute; 2. Normalize performance ratings; 3. Define the overall ranking index for each alternative; and 4. Select the most acceptable alternative.
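As a rough illustration of these four steps, the following Python sketch applies the LST-Max normalization and a weighted ranking to a toy decision matrix. The function names, the example attribute values and the equal weights are our own assumptions for illustration, not code from the authors' implementation.

```python
import numpy as np

def lst_max_normalize(values, kind):
    """LST-Max normalization: benefit -> a_i / max(A); cost -> min(A) / a_i."""
    values = np.asarray(values, dtype=float)
    if kind == "benefit":
        return values / values.max()
    if kind == "cost":
        return values.min() / values
    raise ValueError("kind must be 'benefit' or 'cost'")

def madm_rank(decision_matrix, kinds, weights):
    """Steps 1-4: rate, normalize, build the overall index, rank alternatives."""
    matrix = np.asarray(decision_matrix, dtype=float)
    normalized = np.column_stack(
        [lst_max_normalize(matrix[:, j], kinds[j]) for j in range(matrix.shape[1])]
    )
    scores = normalized @ np.asarray(weights, dtype=float)  # overall ranking index
    order = np.argsort(-scores)                              # best alternative first
    return scores, order

# Toy example: 3 alternatives rated on distance (cost) and size (benefit)
scores, order = madm_rank(
    decision_matrix=[[10.0, 5.0], [4.0, 20.0], [7.0, 12.0]],
    kinds=["cost", "benefit"],
    weights=[0.5, 0.5],
)
print(scores, order)  # the first index in 'order' is the most acceptable alternative
```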
3 MBTI-based Agent Model
By applying MADM, it is possible to rank and select the best alternatives based on the preferences of decision-makers, considering multiple attributes that can influence the decision made. In this work, we apply this methodology to observe human behavior in a work context, using agent-based simulations to study how the different personality types described in the MBTI influence the decision making of agents modeled after them.
3.1 Seller-Buyer Model
In order to consider a context closer to an organizational environment, we were inspired by the scenario proposed in [10, 12], where agents are distributed on a grid and have to reach certain cells to achieve their goals. However, instead of collecting food, we built a Seller-Buyer scenario, which has already been used in other agent-based simulations [17–19]. In our proposed scenario, seller agents, modeled after the MBTI personality types, have the objective of finding buyers for a future interaction that simulates a seller-buyer negotiation; to that end, they constantly move around the environment until all buyers have been visited. In each cycle, a seller agent wanders around the environment until it perceives a buyer agent; when this occurs, the decision-making process described in Sect. 3.4 is initiated, seeking to choose the best alternative (target buyer) among those present within its vision radius. Buyers, on the other hand, are distributed at fixed locations, with the aim of simulating companies waiting to be visited by sellers.
3.2 Decision Attributes
Five attributes were defined to represent behavioral preferences in the agents, considering some characteristics associated with the dichotomies described in the MBTI. The attribute selection took into account work already done on modeling agents using the MBTI [10, 12], and sought to adapt the approach proposed in these studies to our Seller-Buyer scenario. In the rest of this section, i represents a particular buyer.
Distance to the buyer (A1) The first attribute represents the Euclidean distance between a seller agent and a buyer agent. It is considered a cost attribute, normalized as $A_{1,norm}^{i} = \min(A_1)/a_1^{i}$, since the shorter the distance, the fewer movement cycles a seller will take to reach its destination, affecting the time necessary to complete the task. Sellers also have a limited perception of the environment: they have a perception radius that limits how much of the environment around them they can see. This limit influences all the dichotomies represented in the other attributes, since it prevents, for example, an agent from noticing other agents (sellers and buyers) outside its vision limit.
Buyer size (A2) The next attribute represents the number of people working at the buyer with whom the seller will interact. Compared with the real world, it could be translated as the number of people a salesperson could interact with in a meeting, event, lecture, etc. This attribute affects the representation of the Extraversion-Introversion dichotomy and expresses whether the agent has a tendency to be sociable (extraverted) or not (introverted). For extraverted agents, the buyer size is normalized as a benefit attribute (the more people, the better), i.e., $A_{2,norm}^{i} = a_2^{i}/\max(A_2)$, while for introverted agents it is a cost attribute (a preference for interacting with fewer people), i.e., $A_{2,norm}^{i} = \min(A_2)/a_2^{i}$.
Cluster density (A3) The third attribute represents the number of other buyers located close to the target buyer, and allows the belief of possible future gains to be abstracted. One can assume, for example, that it may be preferable for a seller to travel a greater distance to find a buyer who is located close to several other buyers, thus allowing its earnings to be maximized in the long term. This attribute has a great influence on the Sensing-Intuition dichotomy and is therefore considered a benefit attribute, normalized as $A_{3,norm}^{i} = a_3^{i}/\max(A_3)$.
Sellers close to the target buyer (A4) This attribute is related to the Thinking-Feeling dichotomy and seeks to consider how the seller agent deals with the fact that other sellers are close to the same target buyer. Feeling agents seek to make decisions considering also what may be important for the other agents around them, thus taking into account the possibility that other seller agents may want to visit the same buyer. Thinking agents, on the other hand, tend to be more rational and give less relevance to the goals of others. This is a cost attribute, normalized as $A_{4,norm}^{i} = \min(A_4)/a_4^{i}$, given that the fewer other agents close to the seller agent's target, the better.
Probability to recalculate the plan (A5) The last attribute is related to how committed the seller agent is to maintaining the plan to reach a certain target. It is relevant for simulating the Judging-Perceiving dichotomy, in which perceiving agents can constantly reconsider the decision made, remaining open to new options and alternatives; judging agents, on the other hand, will be highly committed to keeping the decision made until the objective is reached or becomes infeasible. This is the only dichotomy that is not considered in the MADM matrix; its logic is implemented as a probability applied in each execution cycle of the agent.
3.3 Ranking the Alternatives
With the attributes defined and normalized, it is possible to rank the best alternatives (buyers to visit) given the sellers' behavioral preferences. For this, each of the normalized values $A_{j,norm}^{i}$ must be represented in a MADM matrix to calculate the scores for each dichotomy. For the Extraversion-Introversion (E-I) dichotomy score we use the sum of $A_{1,norm}^{i}$ and $A_{2,norm}^{i}$ for each i, resulting in $S_{E\text{-}I}^{i} = A_{1,norm}^{i} + A_{2,norm}^{i}$. On the other hand, to calculate the Sensing-Intuition (S-N) dichotomy score it is necessary to combine a density weight with the normalized values: this must be done because the same benefit attribute is applied to both personality types (S or N), but for intuition sellers the possibilities for long-term gains weigh more strongly than for sensing sellers:

$$DensityWeight = \begin{cases} 0.8, & \text{if } Preference = Intuition \\ 0.2, & \text{otherwise} \end{cases} \qquad (1)$$

The distance measure (used to calculate $A_{1,norm}^{i}$) composes the remaining part of the weight, $DistanceWeight_{S\text{-}N} = 1 - DensityWeight$, so that the opposite poles for sensing and intuition agents can be represented. Finally, the Sensing-Intuition dichotomy score is calculated as $S_{S\text{-}N}^{i} = (A_{1,norm}^{i} \cdot DistanceWeight_{S\text{-}N}) + (A_{3,norm}^{i} \cdot DensityWeight)$. To calculate the last dichotomy, Thinking-Feeling (T-F), a weight is applied in order to differentiate the behavior of feeling sellers, who consider more important the fact of having teammates close to possible target buyers, from thinking sellers, who consider their own distance a factor of greater relevance:

$$SellersCloseToBuyerWeight = \begin{cases} 0.8, & \text{if } Preference = Feeling \\ 0.2, & \text{otherwise} \end{cases} \qquad (2)$$

Similarly to S-N, the distance measure composes the remaining part of the weight, $DistanceWeight_{T\text{-}F} = 1 - SellersCloseToBuyerWeight$, so that the opposite poles can be represented. The Thinking-Feeling dichotomy score is then $S_{T\text{-}F}^{i} = (A_{1,norm}^{i} \cdot DistanceWeight_{T\text{-}F}) + (A_{4,norm}^{i} \cdot SellersCloseToBuyerWeight)$. With the scores of the three dichotomies for each alternative i, it is possible to calculate the final scores. To ensure equivalence with the MADM method, a weight for each dichotomy is used; in our scenario, since it is not clear whether a particular dichotomy has a greater impact on the decision, we opted to use the same value of 1/3 for all of them:

$$S_{Final}^{i} = (S_{E\text{-}I}^{i} \cdot Weight_{E\text{-}I}) + (S_{S\text{-}N}^{i} \cdot Weight_{S\text{-}N}) + (S_{T\text{-}F}^{i} \cdot Weight_{T\text{-}F}) \qquad (3)$$
According to these steps, $S_{Final}^{i}$ represents a ranking of the alternatives based on the preferred performance ratings, where the best alternative (the best buyer to visit) is the one with the highest value of $S_{Final}^{i}$ [4]:

$$S_{Best} = \max_i (S_{Final}^{i}) \qquad (4)$$
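To make Eqs. (1)–(4) concrete, the following Python sketch computes the dichotomy scores and the final score for candidate buyers whose attributes A1–A4 have already been normalized as in Sect. 3.2. It is an illustrative rendering of ours, not the authors' GAMA code, and the data structures (a dict of candidates, a four-letter personality string) are assumptions made for the example.

```python
def final_score(a1, a2, a3, a4, personality):
    """Combine normalized attributes A1-A4 into S_Final for one candidate buyer."""
    s_ei = a1 + a2                                            # Extraversion-Introversion score
    density_weight = 0.8 if personality[1] == "N" else 0.2    # Eq. (1)
    s_sn = a1 * (1.0 - density_weight) + a3 * density_weight
    close_weight = 0.8 if personality[2] == "F" else 0.2      # Eq. (2)
    s_tf = a1 * (1.0 - close_weight) + a4 * close_weight
    return (s_ei + s_sn + s_tf) / 3.0                         # Eq. (3), equal weights of 1/3

def choose_buyer(candidates, personality):
    """Eq. (4): return the id of the buyer with the highest final score.

    candidates maps buyer id -> (a1, a2, a3, a4) normalized attribute values.
    """
    return max(candidates, key=lambda b: final_score(*candidates[b], personality))

# Example: an INTJ seller choosing between two perceived buyers
print(choose_buyer({"b1": (0.9, 0.3, 0.2, 1.0), "b2": (0.5, 1.0, 0.8, 0.4)}, "INTJ"))
```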
3.4 Basic Algorithm
Algorithm 1 shows how the simulation is performed, i.e., how sellers with different personality types perceive the environment and decide which buyer to approach. In particular, attributes A1–A4 are calculated in lines 5, 6, 7, and 8 respectively, the dichotomy scores and final score are computed in lines 10 and 11, and A5 is taken into account in line 15, when the agent decides whether or not to maintain the current target buyer (an illustrative rendering of this decision loop follows the listing).

Algorithm 1 Sellers' decision process (pseudo-algorithm)
1: while Not visited buyers > 0 do
2:   if TargetBuyer not defined then
3:     See the buyers in my perception radius
4:     for all buyers do
5:       Calculate the distance to the buyer {A1 attribute}
6:       Get the buyer size {A2 attribute}
7:       Calculate cluster density {A3 attribute}
8:       Check if there are other sellers close to buyers {A4 attribute}
9:     end for
10:    Calculate Scores
11:    TargetBuyer ← Buyer with max(score)
12:    Go towards TargetBuyer
13:  else
14:    if My personality type is Perceiving then
15:      TargetBuyer ← Recalculate TargetBuyer {A5 attribute}
16:    end if
17:    Go towards TargetBuyer
18:  end if
19: end while
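A minimal Python sketch of this loop is given below. It reuses the choose_buyer() function from the previous sketch; the seller object, its recalc_probability field and the surrounding movement code are assumptions of ours for illustration, since the original model implements this logic with the GAMA simple_bdi architecture.

```python
import random

def seller_decide(seller, buyers_in_radius):
    """One decision cycle of Algorithm 1 (lines 2-17) for a single seller.

    buyers_in_radius maps buyer id -> (a1, a2, a3, a4) normalized attributes of
    the not-yet-visited buyers the seller currently perceives; returns the buyer
    id the seller should move towards this cycle (movement itself is handled
    elsewhere), or None if no buyer has been perceived yet.
    """
    if seller.target is None:
        if buyers_in_radius:                                   # lines 3-11: rank and commit
            seller.target = choose_buyer(buyers_in_radius, seller.personality)
    elif seller.personality[3] == "P":                         # lines 14-16: A5, perceiving only
        if buyers_in_radius and random.random() < seller.recalc_probability:
            seller.target = choose_buyer(buyers_in_radius, seller.personality)
    return seller.target                                       # lines 12 and 17
```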
4 Implementation and Preliminary Experiments
We used the GAMA platform (GIS & Agent-based Modeling Architecture), an open-source platform that allows the representation of complex agent-based models with generic multi-level capabilities [20]. To simplify the model development, the simple_bdi architecture [21] was used.
Fig. 1 Sellers’ performance by personality type
The architecture allows the implementation of perception functions (for example, making a seller aware of a buyer when it sees one within its perception radius), as well as the structure of beliefs, plans, and intentions described in the BDI model. The simulation scenario, represented as a two-axis (x, y) plane in a 160 × 160 grid, allows observing the behavior of the seller agents as they try to complete their objective, which is to visit as many buyers as possible within the defined maximum number of cycles,1 considering that each buyer can be visited only once per simulation. For modeling the seller agents, functions were developed to apply the selection attributes associated with each of the dichotomies, following the described MADM method. To measure the seller agents' performance, we used the number of visited buyers as the main metric. In each experiment, we considered 16 seller agents, each one representing a distinct MBTI personality type. For each simulation, different random seeds were used to define distinct initial locations for sellers and buyers. The buyer size in each simulation was also randomly defined, varying in a range from 0 to 30. For the perception radius, a fixed value of 50 was defined, and for the clustering function the GAMA package simple_clustering_by_distance was used with a maximum distance value of 30. The number of buyers for this first scenario was defined as 1280, that is, an occupation rate of 5% given the grid size 160 × 160 = 25,600. All these values were defined empirically; optimizations and comparative analyses should be carried out in more detail in further work. Three experiments, with 10, 50 and 500 simulations respectively, were performed; to analyze the results, the data were grouped for each experiment, making it possible to extract aggregate metrics such as the median performance within a given experiment. Figure 1 shows the performance for each of the experiments, in which the median number of buyers visited by each seller modeled with a distinct personality type can be analyzed. It can be seen that with few simulations carried out (left in the figure), the median within the group varies a lot according to the personality type. As an example we see
1 In our experiments, we have adopted 2500 cycles as the limit for each simulation.
Table 2 Sellers' performance—Experiment 3

Personality type   Min   Max   Median   Mean    Std
ENFJ                73   147   101      102.9   14.1
ENFP                79   162   102      104.2   14.9
ENTJ                70   140   100.5    100.8   12.7
ENTP                78   165   100      100.4   14.2
ESFJ                37   127    92       92.7   15.9
ESFP                49   138    94       93.5   16
ESTJ                73   142   106.5    106.8   14.1
ESTP                79   143   103      104.5   13.3
INFJ                72   130    97       99.7   12.5
INFP                75   144   101      102.8   14.6
INTJ                63   156   101      101.8   16.8
INTP                65   135    99.5     99.5   14.4
ISFJ                59   121    89       88.3   12.8
ISFP                56   135    91       90.5   14.3
ISTJ                78   142   104      105.6   13.7
ISTP                70   138   104.5    104.5   13.7
that sellers with the ENTP and ESTP personality types reached the highest performance, with a median of 98 visited buyers over the 10 simulations performed in this first experiment. On the other hand, the ISFP personality type obtained the worst result, with a median of 60 visited buyers. In the chart it is also possible to analyze the 95% confidence interval, represented as a narrow black bar, which demonstrates a wide variation between personality types in this first experiment. As we can see in Table 2, when analyzing Experiment 3 with 500 simulations the difference between the median performance of each personality type is reduced; that is, with more simulations performed on the same scenario, but with the positions of buyers and sellers randomly redistributed in each simulation, the performances converge and the different personality types perform similarly.
5 Conclusions
The preliminary experiments demonstrate that, even with scenarios and methods different from those used in other studies [10, 12], the agents' performance presents similar patterns of variation across distinct personality types; that is, agents with different personalities show significant variation in performance between each other. However, this behavior is observed only when few random simulations are analyzed. When analyzing several simulation rounds, with random positions of both sellers and buyers, the performances grow closer over time. This is probably an indication that different personality types may not have a significant long-term impact on overall performance; however, further studies need to be carried out in this direction. The results also show, for example, that regardless of the personality type, when we consider multiple simulations the leadership of the performance ranking alternates; that is, the same agent that was the leader in a given simulation may be the last placed in another simulation, given that a random seed is used to randomly
generate the initial positions of sellers and buyers. This indicates that the agents' initial position ends up being a more significant factor than their personality type, and it opens perspectives for new studies to identify other behavioral factors or aspects that can influence performance more decisively. Our results demonstrate that having a particular personality type does not mean that an agent will perform the task better within the scenario we developed. Although this was not the objective of the work, the results seem to be in line with other research noting that the MBTI should not be used to unjustly stereotype individuals [16], which, as said earlier, would even represent a misuse of the instrument. In this way, this study seeks to better understand the influence of human behavior in a work context, providing a basis for new studies that explore other scenarios and situations and further advance this understanding. As future work, improvements to the current study can be explored, such as analyzing the impact that the variation of initial parameters (e.g., perception radius, buyer size, among others) can have on the final results. Studies considering work teams may also be conducted to analyze how collaborative factors impact the team's behavior and its consequent performance. Finally, the development of negotiation mechanisms between sellers and buyers would help to observe more complex aspects inherent to activities in an organizational and work context, thus providing a better understanding of the different factors that may influence the behavior of agents.
References
1. Braz, L.F., Sichman, J.S.: Workshop - Escola de Sistemas Agentes, seus Ambientes e aplicações de Agentes (2020). https://doi.org/10.5281/zenodo.4037413
2. Zanelli, J.C., Borges-Andrade, J.E., Bastos, A.B.: Psicologia, Organizações e Trabalho no Brasil. AMGH Editora (2014)
3. Myers, I.B., McCaulley, M.H., Quenk, N.L., Hammer, A.L.: MBTI Manual: A Guide to the Development and Use of the Myers-Briggs Type Indicator. Consulting Psychologists Press, Palo Alto (1998)
4. Stanujkic, D., Magdalinovic, N., Jovanovic, R.: A multi-attribute decision making model based on distance from decision maker's preferences. Informatica 24(1), 103–118 (2013)
5. Newstrom, J.W.: Organizational Behavior: Human Behavior at Work. McGraw-Hill Irwin, New York (2007)
6. Castka, P., Bamber, C.J., Sharp, J.M., Belohoubek, P.: Factors affecting successful implementation of high performance teams. Team Perform. Manag.: Int. J. (MCB UP Ltd.) (2001)
7. Sharp, J.M., Hides, M.T., Bamber, C.J.: Continuous organisational learning through the development of high performance teams. In: ICSTM (2000)
8. Williams, K.: Developing High Performance Teams. CMI Open Learning Programme. Elsevier, Oxford (2004)
9. Myers, I.B.: Introduction to Type: A Guide to Understanding Your Results on the Myers-Briggs Type Indicator. Mountain View (1998)
10. Salvit, J.: Extending BDI with agent personality type. The City University of New York, New York (2012)
11. Briggs-Myers, I., Myers, P.B.: Gifts Differing: Understanding Personality Type (1995)
12. Salvit, J., Sklar, E.: Modulating agent behavior using human personality type. In: Proceedings of the Workshop on Human-Agent Interaction Design and Models (HAIDM) at Autonomous Agents and MultiAgent Systems (AAMAS), pp. 145–160 (2012)
13. Norling, E., Sonenberg, L.: Creating interactive characters with BDI agents. In: Proceedings of the Australian Workshop on Interactive Entertainment (IE'04), pp. 69–76 (2004)
14. Pereira, D., Oliveira, E., Moreira, N., Sarmento, L.: Towards an architecture for emotional BDI agents. In: 2005 Portuguese Conference on Artificial Intelligence, pp. 40–46. IEEE (2005)
15. Taillandier, P., Bourgais, M., Caillou, P., Adam, C., Gaudou, B.: A BDI agent architecture for the GAMA modeling and simulation platform. In: International Workshop on Multi-Agent Systems and Agent-Based Simulation, pp. 3–23. Springer, Cham (2016)
16. Coe, C.K.: The MBTI: potential uses and misuses in personnel administration. Public Pers. Manag. 21(4), 511–522 (1992)
17. DeLoach, S.A.: Modeling organizational rules in the multi-agent systems engineering methodology. In: Conference of the Canadian Society for Computational Studies of Intelligence, pp. 1–15. Springer, Berlin (2002)
18. Tran, T., Cohen, R.: A learning algorithm for buying and selling agents in electronic marketplaces. In: Conference of the Canadian Society for Computational Studies of Intelligence, pp. 31–43. Springer, Berlin (2002)
19. Xu, H., Shatz, S.M.: An agent-based Petri net model with application to seller/buyer design in electronic commerce. In: Proceedings 5th International Symposium on Autonomous Decentralized Systems, pp. 11–18. IEEE (2001)
20. Drogoul, A., Amouroux, E., Caillou, P., Gaudou, B., Grignard, A., Marilleau, N., Taillandier, P., Vavasseur, M., Vo, D.A., Zucker, J.D.: GAMA: a spatially explicit, multi-level, agent-based modeling and simulation platform. In: International Conference on Practical Applications of Agents and Multi-Agent Systems, pp. 271–274. Springer, Berlin (2013)
21. Nijkamp, P., van Delft, A.: A BDI agent architecture for the GAMA modeling and simulation platform. In: International Workshop on Multi-Agent Systems and Agent-Based Simulation, pp. 3–23. Springer, Cham (2016)
Exposure to Non-exhaust Emission in Central Seoul Using an Agent-based Framework Hyesop Shin
and Mike Bithell
Abstract Non-exhaust emission (NEE) from brake and tyre wear causes deleterious effects on human health, but its relationship with mobility has not been thoroughly examined. We construct an in silico agent-based traffic simulator for Central Seoul to illustrate the coupled problems of emissions, behaviour, and the estimated exposure to PM10 (particles less than 10 microns in size) for groups of drivers and subway commuters. The results show that significant extra particulates relative to the background exist along roadways, where NEEs contributed some 40% of the roadside PM10. In terms of health risk, 88% of resident drivers had an acute health effect in late March, but that kind of emergence rarely happened. By contrast, subway commuters' health risk peaked at a maximum of 30%, with frequent oscillations whenever air pollution episodes occurred. A 90% vehicle restriction scenario reduced PM10 by 18–24% and reduced resident drivers' risk by a factor of 2, but it was not effective for subway commuters, as this group generally walked through background areas rather than along major roadways. Using an agent-based traffic simulator in a health context can give insights into how exposure and health outcomes can depend on the time of exposure and the mode of transport.

Keywords Agent-based traffic simulation · Non-exhaust emission · Exposure and health loss · NetLogo
H. Shin (B)
School of Geographical and Earth Sciences, University of Glasgow, Glasgow G12 8QQ, UK
e-mail: [email protected]
M. Bithell
Department of Geography, University of Cambridge, Cambridge CB2 3EN, UK
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Czupryna and B. Kamiński (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_26

1 Introduction
Traffic-related air pollution (TRAP) has long been associated with adverse health outcomes. The book Non-exhaust emissions (NEE): an urban air quality problem for public health compiled recent scientific findings that addressed non-exhaust particles, which are formed of metallic, rubber, carbon black, and other organic
substances by combustion, wear, road abrasion, and particle resuspension [2]. All of these substances are as harmful as exhaust particles. NEE can be affected by traffic queues, driving behaviour, and weather. In traffic congestion, the 'stop-and-go' pattern of the traffic generates more wear on brake pads and discs, which adds to surges of ambient particulates during rush hours [1, 4]. Brake wear emissions are also spatially heterogeneous, because vehicles are expected to slow down when reaching a junction or going downhill [1, 17]. In addition, harsh braking and acceleration can generate more particles from tyres, brake discs, and linings. The Transport for London expert group pointed out a strong relationship between aggressive driving behaviour and greater dispersion of particulate emissions towards the sidewalks, which can possibly cause adverse health consequences. Although NEE has not yet been regulated, the UK's Air Quality Expert Group [1] and the European Environmental Agency [6] have both reported the severity of NEE for human health and are calling for more evidence. To link the challenges between NEE and the mobility of vehicles and humans, agent-based modelling (ABM) is one of the key methods that can simulate urban traffic and air quality at an individual level [19]. ABM can not only simulate the movement of heterogeneous vehicles and individuals but also measure the exposure level based on the path on which the agent is situated and the estimated local pollution value [5, 8, 21]. A promising example is the integrated model of [8], which combined NOx emission, dispersion, activity patterns of the population, vehicle movement, and the exposure to ambient NOx based on the time spent in each location—a combination that had not been attempted previously. Other agent-based traffic models have also simulated vehicle emissions caused by urban car traffic using a general-purpose programming language [9], SUMO [3, 11], or MATSim [10]. This paper examines the exposure and possible health effects of NEE on commuters' health based on a traffic simulation. The specific questions are as follows:
– What is the difference in health effects between walking commuters and vehicle commuters?
– How did air quality improve as a result of the simulation of policy scenarios, and what were the characteristics of any improvements?
Given the limited resources available to mimic the agents' attributes and their behavioural patterns, we built an in silico agent-based traffic model.
2 Methods
2.1 Overview
Figure 1 illustrates the overall procedure of this study. This study retrieved hourly pollution data, Seoul population data, origin–destination data at the sub-district level, and traffic
observation from the census and Seoul Institute. The remainder of this section describes a summarised ODD protocol, sensitivity analysis, and calibration.
2.2 A Summarised ODD Protocol
A complete, detailed model description following the ODD (Overview, Design concepts, Details) protocol [7] is provided in the supplementary material [15]. The purpose of this model is to understand commuters' exposure to non-exhaust PM10 emissions, and to make a preliminary estimate of their health effects. We use the following patterns: the 'at-risk' population by transport mode, traffic volume by road, and pollution levels by road, in a context that is representative of realistic conditions in the Seoul CBD. The model includes three types of mobile agents: (1) resident cars with drivers, (2) non-resident cars,1 and (3) subway commuters; and two types of fixed agents: (1) traffic signals, and (2) entry points where the vehicles are fed into the study area. The state variables and attributes characterising these entities are listed in the repository [15]. The spatial and temporal resolution of the study area is 30 m × 30 m and 1 min respectively. Our study area is the CBD of Seoul (16.7 km2), which in the NetLogo model consists of 155 horizontal and 192 vertical patches. The model is implemented for the 3 months between January and March 2018 (approx. 130,000 ticks).
Fig. 1 A flowchart of the methodological procedure of the model (left), the agent types, and the vision of software (right)
1 We clarify that non-resident vehicles are those for which the origin is outside the model domain, for example delivery vehicles; routing these non-randomly would require knowing both intermediate and final destination data. For the present, we treat these as random, as this is better than omitting them completely, but the model might be improved with knowledge of where they were headed.
The most important processes of the model, which are repeated every time step, are the update of particulates on roads and background areas, the journeys of vehicles and pedestrians, and the exposure and health loss in response to mobility patterns and non-exhaust PM10 emissions. The agents are assumed to have a healthy medical profile at the beginning of the simulation, but are expected to have their health decreased when they are exposed to over 100 µg/m3 of PM10. Subway commuting agents are assumed to be exposed to the ambient PM10 level between early morning and late in the evening even if they do not appear on the interface. If the health of an agent drops to a third of its initial state, the model recognises the individual as 'at-risk'. The cumulative updates of the 'at-risk' population and the PM10 concentration by road are exported to a spreadsheet at the end of the simulation. The most important concepts of the model are emergence and stochasticity. The emergence of the 'at-risk' population (i.e. those with health under a third of the initial health status) arises from a balance between exposure to a PM10 threshold of 100 µg/m3 and recovery. This threshold refers to the hourly standard controlled by the South Korean government. Stochasticity is another crucial component because (1) the number of non-resident vehicles within the study area can cause traffic congestion at any junction, given that these vehicles move randomly, and (2) the infiltration ratio (indoor-to-outdoor concentration ratio) varies by microenvironment and the time spent there. This study estimates the infiltration from the ambient PM10 of the current patch to indoor spaces such as houses at 0.2–0.7 [12, 13], workplaces at 0.2 [12], and transit—i.e. subway and in-vehicle environments—at 0.7 [12]. The model is initialised with a 1% sample of the 69,806 resident vehicles with unleaded, diesel, and LPG fuel tanks, and a 1% sample of the 193,200 subway commuters. The resident drivers who commuted beyond our study area were removed, which resulted in 399 driving agents and 1,932 subway agents. During weekdays, trips are made along the shortest path and do not change throughout the simulation, except at weekends, when the agents are free to choose their trips and return before the working week starts. Since the aggregated OD information cannot provide the exact location of agents, we allow some randomness in allocating the origins and destinations for each simulation run. Key processes in the model are the pathfinding algorithms, and the generation and dispersion of NEEs. For vehicle pathfinding (see Fig. 2), we initially retrieved road networks and removed minor streets, then employed an A* algorithm for each vehicle to find the shortest path while avoiding dead ends and obstacles [22]. As explained above, LSA is used for pedestrians to navigate the shortest distance from origin to goal, which in this case is a straight line. The LSA algorithm was compared with a different set of code that asked the agents to bypass buildings; however, the alternative code took a much longer execution time while the exposure levels hardly showed any difference. To generate NEE from each vehicle, we transform the EEA's trip-based NEE equation (g/km) into a unit-based (patch-by-patch) emission, so that the distribution of emission sources matches the spatial resolution of the model environment [6].
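As a minimal illustration of the exposure-and-recovery logic just described (implemented in NetLogo in the original model), the following Python sketch applies the 100 µg/m3 threshold, the microenvironment infiltration ratios and the 'at-risk' rule. The linear damage and recovery rates, and the single 0.45 value chosen from the 0.2–0.7 range for houses, are placeholders of ours rather than the authors' parameterisation.

```python
def exposure_step(agent, ambient_pm10, microenvironment,
                  threshold=100.0, damage=1.0, recovery=0.5):
    """One time-step update of an agent's health from PM10 exposure.

    The hourly threshold, the infiltration ratios and the 'at-risk' rule follow
    the description above; the damage/recovery rates are illustrative only.
    """
    infiltration = {"outdoor": 1.0, "house": 0.45, "workplace": 0.2, "transit": 0.7}
    experienced = ambient_pm10 * infiltration[microenvironment]

    if experienced > threshold:
        agent["health"] -= damage          # exposure above the hourly standard
    else:
        agent["health"] = min(agent["health"] + recovery, agent["initial_health"])

    # an agent is flagged 'at risk' once health falls to a third of its initial value
    agent["at_risk"] = agent["health"] <= agent["initial_health"] / 3.0
    return agent

# Example: a subway commuter in transit during a pollution episode
commuter = {"health": 100.0, "initial_health": 100.0, "at_risk": False}
exposure_step(commuter, ambient_pm10=180.0, microenvironment="transit")
```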
Fig. 2 a shows a test of an agent finding the shortest path from the origin (red patch) to its destination (light green patch) based on A*, and b shows the application of A* for a sample vehicle used in the study site
Fig. 3 Illustrations of dispersion parameters (left) and dilution parameters (right)
3 Sensitivity Analysis
This study uses the one-factor-at-a-time (OFAT) method to examine the sensitivity of each of the parameters. Ideally, a full-factorial parameterisation would be considered, but this was beyond our computational limitations. However, we did not find any noticeable interaction effects in the outcome after testing the combinations of emission, dispersion, and dilution over a selected period. Each parameter is analysed from an average of 20 iterations to reduce possible stochastic effects.
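A generic sketch of this OFAT procedure is given below, with run_model standing in for a single simulation run of the traffic model and the averaging over 20 stochastic replicates. The function and parameter names are our own illustration rather than the study's actual experiment code.

```python
import statistics

def ofat_sensitivity(run_model, baseline, candidate_values, iterations=20):
    """One-factor-at-a-time sweep: vary each parameter in turn, keep the rest at baseline.

    run_model(params) -> a scalar output of interest; baseline is a dict of
    parameter values; candidate_values maps parameter name -> list of values to test.
    Each setting is averaged over 'iterations' replicates to damp stochastic effects.
    """
    results = {}
    for name, values in candidate_values.items():
        results[name] = {}
        for value in values:
            params = dict(baseline, **{name: value})
            runs = [run_model(params) for _ in range(iterations)]
            results[name][value] = statistics.mean(runs)
    return results
```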
3.1 Dispersion and Dilution
We parameterise the angle of dispersion from each vehicle, to understand whether a wider spread of non-exhaust PM10 to the neighbouring patches poses more risk to people, and we adjust the 'time until this dispersion dilutes' (see Fig. 3). The baseline parameters enable vehicles to (1) disperse particles in a cone behind the vehicle with an angle of 60◦ and (2) dilute them within 0–3 min, at which point the particles effectively settle out. 60◦ accounts for approximately 5–7 patches of PM10. By controlling the dilution parameter by 0) and a second one from EQ > 0 to an oscillating mobilization (OSC). If repression is low/absent (r = 0) both changes occur at a lower level of frustration F than if repression r is high (r = 1). The exploration of the analogous cases r = 0.25 and r = 0.75 in Fig. 6 reveals that the model dynamics for r = 0 and r = 1 also hold for these neighboring values. Thus there is no structural break between r = 0 and r = 0.25 on the one hand and r = 1 and r = 0.75 on the other. However, a further exploration of analogous cases in Fig. 7 displays, for the repression r = 0.50, a mix of the patterns observed for r = 0.25 and r = 0.75: if frustration is rather low (F ≤ 0.50) there is a similarity of the model behavior between r = 0.50 and r = 0.25. However, if frustration is higher (F > 0.50) the similarity of the model outcome is rather between r = 0.50 and r = 0.75. The sample presented in the parameter map of Fig. 7 is probably theoretically saturated: for each type of the initially postulated model behavior, i.e. EQ = 0, EQ > 0, and OSC, there exists at least one related parameter configuration. Moreover, an increase of the granularity of the map is unlikely to lead to new theoretical insights. Thus Fig. 7 could be used for empirical model validation, which for reasons of space is not shown in detail: for each virtual case of Fig. 7, we should try to find in the immediate vicinity a real case. If this is not possible, we may compare existing real
4 For illustrative purposes the author uses simulation for exploring the behavior of the model, although in the present case analytical solutions are available.
data with interpolations between virtual cases, which display in Fig. 7 the same model behavior.5 On the grounds of Fig. 7 we expect for both types of real cases the following mobilization patterns (see the sketch after this list):
(a) for very high frustration (F = 1) (and any level of repression r) oscillating mobilization OSC;
(b) for very low frustration (F = 0) (and any level of repression r) asymptotic zero-mobilization EQ = 0;
(c) for frustration above the average (F > 0.5) and repression below the average (r < 0.5) oscillating mobilization OSC;
(d) for frustration below the average (F < 0.5) and repression above the average (r > 0.5) asymptotic zero-mobilization EQ = 0;
(e) for other situations, not covered by the preceding rules (a)–(d), asymptotic positive mobilization EQ > 0.
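The rules (a)–(e) can be written as a small decision function. The Python below is our illustrative rendering of this classification (the function name and return labels are ours), intended only to make the expected-pattern mapping explicit; it is not code from the original study.

```python
def expected_pattern(frustration, repression):
    """Classify the expected mobilization regime from rules (a)-(e).

    frustration F and repression r are both taken to lie in [0, 1]; the rule
    order mirrors (a)-(e), so the extreme cases F = 1 and F = 0 are checked first.
    """
    if frustration == 1.0:                      # rule (a)
        return "OSC"
    if frustration == 0.0:                      # rule (b)
        return "EQ = 0"
    if frustration > 0.5 and repression < 0.5:  # rule (c)
        return "OSC"
    if frustration < 0.5 and repression > 0.5:  # rule (d)
        return "EQ = 0"
    return "EQ > 0"                             # rule (e)

# Example: high frustration with weak repression is expected to oscillate
print(expected_pattern(frustration=0.75, repression=0.25))  # -> 'OSC'
```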
The outlined model validation requires country data about F, r, c, and the dynamics of mobilization M. Due to the "fuzziness" of the previous rules (a)–(e), the required data may, however, be qualitative and coarse rather than quantitative and precise. In a field of research with a lot of "messy" data, this is one of the advantages of the presented methodology. For the dynamics of mobilization M we propose to consider protest intensity, i.e. the number of protest events per unit of time, as described e.g. by Taylor and Jodice [16]. For the operationalization of the repression r and the contagiousness c, the Freedom House indicators "civil liberties" and "freedom on the net" [17] may be useful. Finally, the operationalization of the frustration F could be based on inverse life satisfaction, as published e.g. by the OECD [18].
4 Summary
This paper is an attempt to validate quantitative simulation models with methods that were originally developed for qualitative social research. Ideally such models should be tested by comparing simulation outcomes with data from a representative random sample with a big N of cases. However, de facto N is often small or even equal to 1 and the related sample is a convenience rather than a random sample. Thus, quantitative testing of simulation models is often based on case studies and is consequently less systematic and less valid than it pretends to be. As a possible alternative the author proposes to use methods from grounded theory research. Hence, the proposed sampling of cases is not a random procedure but rather a purposive, parsimonious, and theoretically well-justified process, intended to find the most interesting cases for testing a theory and the related model. Contrary to the mentioned quantitative analyses, this search for crucial cases is at a first stage not
5 If there are doubts about the validity of the interpolated model behavior, it may be recalculated for the observed real case.
based on observational data but on the parameter-related output of the analyzed simulation model. Only in a subsequent second step is a sample of real data compiled, whose explanatory parameter values should be as close as possible to the theoretical sample. If the correspondence between these observational data and the model simulations of the dependent variable is sufficiently high, the model has a higher validity than by the use of a traditional quantitative test with only a small N: the imprecision of the match between the theoretical sample and the observational data is compensated by the strategic choice of critical cases with a real potential to falsify the tested model.
References
1. Lorscheid, I., et al.: Opening the "black box" of simulations: increased transparency and effective communication through the systematic design of experiments. Comput. Math. Organ. Theory 18, 22–62 (2012)
2. Law, A.M.: Simulation Modeling and Analysis, chap. 12, 4th edn. McGraw-Hill, New York (2007)
3. Glaser, B., Strauss, A.: The Discovery of Grounded Theory: Strategies for Qualitative Research, chap. 3. Aldine Publishing, Chicago (1975)
4. Strauss, A., Corbin, J.: Basics of Qualitative Research, chap. 13, 2nd edn. Sage Publications, Thousand Oaks (1998)
5. Neumann, M.: Grounded simulation. JASSS 18(1), 9 (2015)
6. Arnold, E.: Validation of computer simulations from a Kuhnian perspective. In: Beisbart, C., Saam, N.J. (Eds.) Computer Simulation Validation, chap. 8. Springer, Cham (2019)
7. Troitzsch, K.: Using empirical data for designing, calibrating and validating simulation models. In: Jager, W., et al. (Eds.) Advances in Social Simulation 2015, pp. 413–427. Springer, Cham (2017)
8. Waldherr, A., Wijermans, N.: Communicating social simulation models to sceptical minds. JASSS 16(4), 13 (2013)
9. Everitt, B.S.: The Cambridge Dictionary of Statistics, 3rd edn, pp. 119–120. Cambridge University Press, Cambridge (2006)
10. Robinson, A.P.: Testing simulation models using frequentist statistics. In: Beisbart, C., Saam, N.J. (Eds.) Computer Simulation Validation, chap. 19. Springer, Cham (2019)
11. Murray-Smith, D.J.: Verification and validation principles from a systems perspective. In: Beisbart, C., Saam, N.J. (Eds.) Computer Simulation Validation, chap. 4. Springer, Cham (2019)
12. Patton, M.Q.: Qualitative Research and Evaluation Methods, pp. 270, 309, 4th edn. Sage, Los Angeles (2015)
13. Mueller, G.P.: Getting order out of chaos: a mathematical model of political conflict. Russ. Sociol. Rev. 16(4), 37–52 (2017)
14. Dixon, R.: The logistic family of discrete dynamic models. In: Creedy, J., Vance, M. (Eds.) Chaos and Non-linear Models in Economics, chap. 4. Edward Elgar, Aldershot (1994)
15. Creedy, J., Martin, V.: The strange attraction of chaos in economics. In: Creedy, J., Vance, M. (Eds.) Chaos and Non-linear Models in Economics, chap. 2. Edward Elgar, Aldershot (1994)
16. Taylor, Ch., Jodice, D.: World Handbook of Political and Social Indicators, vol. 2, tab. 2.1, 3rd edn. Yale University Press, New Haven (1983)
17. Freedom House: Freedom in the World. Washington DC (2020). https://freedomhouse.org/reports
18. OECD: Better Life Index, edition 2017. OECD Publications, Paris (2017). https://stats.oecd.org/index.Aspx?DataSetCode=BLI2017 (col. "life satisfaction")
The Large-Scale, Systematic and Iterated Comparison of Agent-Based Policy Models
Mike Bithell, Edmund Chattoe-Brown, and Bruce Edmonds
Abstract Vital to the increased rigour (and hence reliability) of Agent-based modelling are various kinds of model comparison. The reproduction of simulations is an essential check that models are as they are described. Here we argue that we need to go further and carry out large-scale, systematic and persistent model comparison—where different models of the same phenomena are compared against standardised data sets and each other. Lessons for this programme can be gained from the Model Intercomparison Projects (MIP) in the Climate Community and elsewhere. The benefits, lessons and particular difficulties of implementing a similar project in social simulation are discussed, before sketching what such a project might look like. It is time we got our act together! Keywords Model inter-comparison · Policy modelling · Replication · Reproduction · IPCC · ABM · Verification · COVID-19
M. Bithell (B)
Department of Geography, University of Cambridge, Cambridge, England
e-mail: [email protected]
E. Chattoe-Brown
School of Media, Communication and Sociology, University of Leicester, Leicester, England
e-mail: [email protected]
B. Edmonds
Centre for Policy Modelling, Manchester Metropolitan University, Manchester, England
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Czupryna and B. Kamiński (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_28

1 Model Comparison, Alignment, Replication and Reproduction
The replicability of real-world experiments has long been a cornerstone of science. However, the recent "replication crisis" in psychology (for an account see [1]) has brought this issue to the fore. In that context, replication means that if you followed the whole reported process of an experiment—selecting subjects, doing the experiment, analysing the results, etc.—you would come to the same conclusions as the original. A
set of overlapping ideas has been imported into the world of Agent-Based Modelling (ABM) [2–4], but the introduction of an artefact (the simulation model) adds a new stage to the sequence linking ideas to evidence (which implies some new distinctions). In computer science there are two sides to checking any code: (a) the verification of the code—that the code complies with its specification and (b) the validation of the code—that the code achieves its goals when implemented and used in practice. In ABM, the code is the computer simulation, so these translate as two processes: (a) checking the code complies to its description including whether there are any hidden assumptions, bugs, etc. and (b) checking that the model relates to observed data in a way that supports the conclusions drawn from the modelling exercise. Since the replication crisis in psychology, there has been more focus on these kinds of processes, resulting in new terminology. These days we distinguish between reproduction which means re-coding the ABM from its description and checking one gets essentially the same results (to check (a)) and replication which also checks the validation (b). Thus, whilst [2] talks about ‘aligning’ simulation models and [3] talks about ‘replicating’ simulation models, under modern terminology these would both be ‘reproducing’ simulations. Axtell et al. [2] is the first reported case of model reproduction we know of, showing that reproducing even simple models uncovers assumptions and differences. Edmonds and Hales [3] independently reproduced a simple published model [5] into two, very different, programming languages and then checked these two reproductions against each other (as well as the reported results) and showed that the original authors had a flawed understanding of their own model, changing its interpretation. Hales [6] argues for a system of “replication-first” publication, whereby the model reported on is independently reproduced before it is published in order to ensure the completeness and reliability of reports on ABM research. Chattoe-Brown et al. [4] argues that the reproduction of models is an essential check before any policies should be based on their results. One important reason for these terminological challenges, which suggests that further analysis will be needed, is that models (and science generally) differ in the extent to which publication can serve as an effective summary of the actual research process. A publication, like a model, is to some extent an artefact to which the issues of verification and validation apply. Psychological experiments and “toy” ABM are such that they might be accurately reported in a standard article for the purposes of model duplication (though there is also a problem about whether this reporting is adequately performed in practice—see [7]). If code is made available (which it often isn’t) then a replication attempt can be directly checked. But for complicated models involving large scale data it may simply not be realistic to “make the model again” and other methods of establishing correspondence may be necessary [3]. Model alignment, replication and reproduction are important examples of a wider class of model-to-model analysis [8]. This includes two important cases: metamodelling and model comparison. 
Meta-modelling is where one models an existing model using another—for example, one might model a complex, descriptive ABM with a much simpler individual-based model in order to determine what is and is not essential for producing the same dynamics (e.g. [9]). Model comparison occurs when we relate the outputs of intentionally different models—e.g. in their ability to
fit certain target sets of data. In this paper we are talking about the latter category of model comparison.
2 The Need for Model Intercomparison The recent COVID-19 crisis has led to a surge of new model development and a renewed interest in the use of models as policy tools. While this is in some senses welcome, the sudden appearance of many new models presents a problem in terms of their assessment, the appropriateness of their application and reconciliation of any differences in outcome. Even if they appear similar, their underlying assumptions may differ, their initial data might not be the same, policy options may be conceptualised in different ways, stochastic effects explored to varying extents, and model outputs presented in any number of different forms. Modelled processes may be disjoint, with some considering social processes, but others focussed purely on epidemiology, for example, and reported outputs may not even be of the same type, with no obvious way to compare them. As a result, it can be unclear what aspects of variations in output between models are the results of mechanistic, parameter or data differences. Any comparison between models is rendered difficult by differences in experimental design and selection of output measures. If we wish to do better, we suggest that a more formal approach to making comparisons between models would be helpful. However, it appears that this is not commonly undertaken in most fields in a systematic and persistent way, except for the field of climate change, and closely related fields such as pollution transport or economic impact modelling (although efforts are underway to extend such systematic comparison to ecosystem models [10, 11]). Examining the way in which this is done for climate models may therefore prove instructive.
3 Some Existing Model Comparison Projects 3.1 Model Intercomparison Projects (MIP) in the Climate Community Formal intercomparison of atmospheric models goes back at least to 1989 [12], with the first atmospheric model inter-comparison project (AMIP), initiated by the World Climate Research Programme. By 1999 this had contributions from all significant atmospheric modelling groups, providing standardised time-series of over 30 model variables for one particular historical decade of simulation, with a standard experimental setup. Comparisons of model mean values with available data helped to reveal overall model strengths and weaknesses: no single model was best at simulating all aspects of the atmosphere, with accuracy varying greatly between simulations. The
model outputs also formed a reference base for further inter-comparison experiments, including targets for model improvement and reduction of systematic errors, as well as a starting point for improved experimental design, software and data management standards, and protocols for communication and model intercomparison. This led to AMIP II and, subsequently, to a series of Climate model inter-comparison projects (CMIP) beginning with CMIP I in 1996. The latest iteration (CMIP 6) is a collection of 23 separate model intercomparison experiments covering atmosphere, ocean, land surface, geo-engineering, and the paleoclimate. This collection is aimed at the upcoming 2021 IPCC process (AR6). Participating projects go through an endorsement process for inclusion (a process agreed with modelling groups), based on 10 criteria designed to ensure some degree of coherence between the various models—a further 18 MIPs are also listed as currently active [13]. Groups contribute to a central set of common experiments covering the period 1850 to the near-present. An overview of the process can be found in [14]. The current structure includes a set of three overarching questions covering the dynamics of the earth system, model systematic biases and understanding possible future change under uncertainty. Individual MIPs may build on this to address one or more of a set of 7 "grand science challenges" associated with the climate. Modelling groups agree to provide outputs in a standard form, obtained from a specified set of experiments under the same design, and to provide standardised documentation to go with their models. Originally (up to CMIP 5), outputs were then added to a central public repository for further analysis; however, the output grew so large under CMIP6 that the data is now held dispersed over repositories maintained by separate groups.
3.2 Other Examples Firstly, an informal network collating models across more than 50 research groups has already been generated as a result of the COVID-19 crisis—the Covid Forecast Hub [15]. This is run by a small number of research groups collaborating with the US Centre for Disease Control and is strongly focussed on epidemiological aspects. Participants are encouraged to submit weekly forecasts, and these are integrated into a data repository and can be visualised on the website—viewers can look at forward projections, along with associated confidence intervals and model evaluation scores, including those for an ensemble of all models. The focus on forecasts in this case arises out of the strong policy drivers for the current crisis, but the main point is that it is possible to immediately view measures of model performance and to compare the different model types: one clear message that rapidly becomes apparent is that many of the forward projections have 95% (and at some times, even 50%) confidence intervals for incident deaths that more than span the full range of the past historic data. The benefit of comparing many different models in this case is apparent, as many of the historic single-model projections diverge strongly from the data (and
the models most in error are not consistently the same ones over time), although the ensemble mean tends to be better. As a second example, one could consider the Psychological Science Accelerator (PSA) [16, 17]. This is a collaborative network set up with the aim of addressing the “replication crisis” in psychology: many previously published results in psychology have proved problematic to replicate as a result of small or non-representative sampling or use of experimental designs that do not generalise well or have not been used consistently either within or across studies. The PSA seeks to ensure accumulation of reliable and generalisable evidence in psychological science, based on principles of inclusion, decentralisation, openness, transparency and rigour. The existence of this network has, for example, enabled the reinvestigation of previous experiments but with much larger and less nationally biased samples (e.g. [18]).
4 The Benefits of the Intercomparison Exercises and Collaborative Model Building
More specifically, long-term intercomparison projects help to achieve the following.
• Build on past effort. Rather than modellers re-inventing the wheel (or building a new framework) with each new model project, libraries of well-tested and documented models, with data archives, including code and experimental design, would allow researchers to more efficiently work on new problems, building on previous coding effort
• Aid replication. Focussed long term intercomparison projects centred on model results with consistent standardised data formats would allow new versions of code to be quickly tested against historical archives to check whether expected results could be recovered and where differences might arise, particularly if different modelling languages and approaches (compartmental, system dynamics, ABM) were being used
• Help to formalise. While informal code archives can help to illustrate the methods or theoretical foundations of a model, intercomparison projects help to understand which kinds of formal model might be good for particular applications, and which can be expected to produce helpful results for given desired output measures
• Build credibility. A continuously updated set of model implementations and assessment of their areas of competence and lack thereof (as compared with available datasets) would help to demonstrate the usefulness (or otherwise) of ABM as a way to represent social systems
• Influence Policy (where appropriate). Formal international policy organisations such as the IPCC or the more recently formed IPBES are effective partly through an underpinning of well tested and consistently updated models. As yet it is difficult to see whether such a body would be appropriate or effective for social systems, as we lack the background of demonstrable accumulated and well tested model results.
5 Lessons for ABM?
What might we be able to learn from the above, if we attempted to use a similar process to compare ABM policy models?
1. The projects started small and grew over time: it would not be necessary, for example, to cover all possible ABM applications at the outset. On the other hand, the latest CMIP iterations include a wide range of different types of model covering many different aspects of the earth system, so that the breadth of possible model types need not be seen as a barrier. There are several good arguments (current interest, policy relevance, intellectual challenge, plentiful "raw material") for using pandemic policy as a demonstrator for this approach which could then—or in parallel—be expanded to other "significant" areas of ABM like opinion dynamics and land use modelling.
2. The climate inter-comparison project has persisted for about 30 years—over this time many models have come and gone, but the history of inter-comparisons allows for an overview of how well these models have performed over time—data from the original AMIP I models is still available on request, supporting assessments concerning long-term model improvement.
3. Although climate models are complex—implementing a variety of different mechanisms in different ways—they can still be compared by use of standardised outputs, and at least some (although not necessarily all) have been capable of direct comparison with empirical data. Thus the approach proposed here (unlike strict replication) is not limited to the analysis of models simple enough to be well described in a single article.
4. An agreed experimental design and public archive for documentation and output that is stable over time is needed; this needs to be done via a collective agreement among the modelling groups involved so as to ensure a long-term buy-in from the community as a whole, so that there is a consistent basis for long-term model development, building on past experience. This may mean a degree of compromise between groups on what models include and report, so as to avoid problems with models that are in fact conceptually incompatible, or with outputs that are incommensurable.
The community has already established a standardised form of documentation in the ODD protocol and is working on further standardisation as methodology develops, in empirical modelling for example [7]. Sharing of model code is also becoming routine, and can be easily achieved through COMSES, Github or similar. The sharing of data in a long-term archive may require more investigation. As a starting project COVID-19 provides an ideal opportunity for setting up such a model inter-comparison project—multiple groups already have running examples, and a shared set of outputs and experiments should be straightforward to agree on. (There will also be huge amounts of data, for example on policy across countries, from which to devise effective comparisons. This also suggests novel research designs. Can a model fitted on one country predict what happened in another for example?)
This would potentially form a basis for forward looking experiments designed to assist with possible future pandemic problems (including the need for novel forms of data about things like dynamic contact), and a basis on which to build further features into the existing disease-focussed modelling, such as the effects of economic, social and psychological issues.
6 Additional Challenges for ABMs of Social Phenomena

Nobody supposes that modelling social phenomena is going to have the same set of challenges that climate change models face. Some of the differences include:
• The availability of good data. Social science is bedevilled by a paucity of the right kind of data. Although an increasing amount of relevant data is being produced, there are commercial, ethical and data protection barriers to accessing it, and the data rarely concerns the same set of actors or events.
• The understanding of micro-level behaviour. Whilst the micro-level understanding of our atmosphere is very well established, that of the behaviour of the most important actors (humans) is not. However, it may be that better data (or more attention to the appropriate use of qualitative methods) might partially substitute for a generic behavioural model of decision-making.
• Agreement upon the goals of modelling. Although there will always be considerable variation in terms of what is wanted from a model of any particular social phenomenon, a common core of agreed objectives will help focus any comparison and give confidence via ensembles of projections. Although the MIPs and Covid Forecast Hub are focussed on prediction, it may be that empirical explanation is more important in other areas. An additional challenge here is a "higher level" agreement that, whatever aims a model has, these should be subject to scientific assessment. Not all ABM should necessarily be empirical, but empirical ABM do have a clear methodology which some other approaches seem to lack.
• The available resources. ABM projects tend to be add-ons to larger endeavours and based around short-term grant funding. The funding for big ABM projects is yet to be established, not having the equivalent of weather forecasting to piggyback on. In fact, there may be a Catch-22 here. In order to show what ABM can achieve (and render it suitable for significant ongoing funding by interested policy makers) it may have to self-organise the sort of project that would normally require significant ongoing funding.
• Persistence of modelling teams/projects. ABM tends to be quite short-term, with each project developing a new model. This has made it hard to keep good modelling teams together.
• Deep uncertainty. Whilst the set of possible factors and processes involved in a climate change model are well established, the basis on which particular mechanisms should feature in a model of any particular social phenomenon is currently unclear (but see [19] for a preliminary attempt to investigate this issue). To take
a relevant example from COVID-19, network approaches are clearly important to many social interactions (visiting the houses of others) but not to all (whether you catch COVID-19 in a shop). While the "compartmental" approach is good at representing the transitions of individuals through disease states, it fails to allow for the fundamentally social (not physiological) role of agency and policy in changing some of these transitions. Unfortunately, deep disagreements about the assumptions which models require are often bundled with the agendas of different research methods and modelling approaches, and the arbitrary exclusion of mechanisms on theoretical or technical grounds can lead to sharp divergences in outcome. Whilst uncertainty in known mechanisms can be quantified, assessing the impact of deep uncertainty and the risk of this kind of mis-specification is much harder.
• The sensitivity of the political context. Even in the case of Climate Change, where the assumptions made are relatively well understood and can be justified on objective bases, the modelling exercise and its outcomes can be politically contested. In other areas, where the representation of people's behaviour might be key to model outcomes, this challenge will need even more attention [20].
However, some of these problems were solved in the case of Climate Change as a result of the CMIP exercise itself and the reports it ultimately resulted in. Over time the development of the models also allowed for a broadening and updating of modelling goals, starting from a relatively narrow initial set of experiments. Ensuring the persistence of individual modelling teams is easier in the context of an internationally recognised comparison project, because resources may be easier to obtain, and there is a consistent central focus. The modelling projects became longer-term as individual researchers could establish a career just doing climate change modelling, and the importance of this work was increasingly recognised as it became more obviously successful. An ABM comparison project might help solve some of these problems as the importance of its work is established, but it would require us to stop telling people that ABM is important and show them it is.
7 Towards an Initial Proposal

Clearly, there are a number of things that could increase the rigour, and hence the reliability, of agent-based modelling. These include better (standardised and more comprehensive) documentation of different aspects of models [21–23], free access to the source code of simulations [24, 25], the organised reproduction of important models [3, 6], being clearer about how data is used [7] as well as the purpose of a model [26], and a "reproduction first" system [6]. However, as argued above, we also need coordinated, large-scale, systematic and persistent model comparison projects. In this section we sketch what this might look like. The topic chosen for this project should be something where (a) there is potentially enough public interest to justify the effort, and (b) a number of models with a similar purpose in mind are being developed. At the current stage, this suggests
dynamic models of COVID-19 spread, although this may not be a topic of interest for much longer. Whether COVID-19 is a "short-term" issue is debatable given the current course of the virus, long covid, vaccine escape, the evolution of new variants and the high level of infection world-wide (and arguably it still requires careful long-term modelling). However, the level of current interest among policy makers does not alleviate the responsibility to make models that are capable of representing events that might become policy relevant: it has been suggested for years, for example, that a pandemic was coming (see e.g. [27]), but when it appeared, little was ready to go by way of models that could deal with the social or economic aspects (and indeed this still seems to be the case, even after 2 years of pandemic). Policy makers continue to show little interest in addressing issues that could cause further global crises (global poverty, the biodiversity crisis, chemical pollution, planetary boundaries in general, or whether exponential economic growth is possible or sensible on a finite planet), but this makes it more, rather than less, urgent to have credible, well-tested models available. Even without considering global crises, there are other possibilities including transport models (where people go and who they meet) or criminological models (where and when crimes happen).
Whichever ensemble of models is chosen, these models should be compared using a core of standards:
• The same start and end dates (but not necessarily the same temporal granularity)
• Covering the same set of regions or cases
• Using the same population data (though possibly enhanced with extra data and maybe scaled population sizes)
• With the same initial conditions in terms of the population
• Outputting a core of agreed measures (but maybe others as well)
• Checked against their agreement with a core set of cases (with agreed data sets)
• Reported in a standard format (though with a discussion section for further/other observations)
• Well documented and with code that is open access
• Run a credible number of times with different random seeds
Any modeller/team that had a suitable model and was willing to adhere to the rules would be welcome to participate (commercial, government or academic), and these teams would collectively decide the rules and development of the exercise (along with writing any reports on the resulting comparisons). Other interested stakeholder groups could be involved, including professional/academic associations, NGOs and government departments, but in a consultative role providing wider critique. It is important that the framework and reports from the exercise be independent of any particular interest or authority.
8 Conclusion We call upon those who think ABMs have the potential to usefully inform policy decisions to work together, in order that the transparency and rigour of our modelling matches our ambition. Whilst model comparison exercises of various kinds are important for any simulation work, particular care needs to be taken when the outcomes can affect people’s lives. Let us get our act together! Acknowledgements This paper is an expanded version of [28].
References 1. Maxwell, S.E., Lau, M.Y., Howard, G.S.: Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? Am. Psychol. 70(6), 487–498 (2015). https://doi. org/10.1037/a0039400 2. Axtell, R., Axelrod, R., Epstein, J. M., Cohen, M. D.: Aligning simulation models: a case study and results. Computat. Math. Organiz. Theory 1(2), 123–141 (1996). https://link.springer.com/ article/10.1007%2FBF01299065 3. Chattoe-Brown, E., Gilbert, N., Robertson, D. A., Watts, C. J.: Reproduction as a means of evaluating policy models: a case study of a COVID-19 simulation. medRxiv 01.29.21250743 (2021). https://doi.org/10.1101/2021.01.29.21250743 4. Edmonds, B., Hales, D.: Replication, replication and replication: some hard lessons from model alignment. J.Artif. Societ. Soc. Simul. 6(4), 11 (2003). http://jasss.soc.surrey.ac.uk/6/4/11.html 5. Riolo, R.L., Cohen, M.D., Axelrod, R.: Evolution of cooperation without reciprocity. Nature 414(6862), 441–443 (2001). https://doi.org/10.1038/35106555 6. Hales, D.: Vision for a more rigorous “replication first” modelling journal. Rev. Artif. Societ. Soc. Simul, 5 Nov (2018). https://rofasss.org/2018/11/05/dh/ 7. Siebers, P.O., Achter, S., Bernardo, P., Cristiane, B., Melania and Chattoe-Brown, E.: First Steps Towards RAT: a protocol for documenting data use in the agent-based modeling process (Extended Abstract), in Ahrweiler, Petra and Neumann, Martin (eds.) Advances in Social Simulation: ESSA 2019, Springer Proceedings in Complexity (Cham: Springer), pp. 257–261 (2021). https://doi.org/10.1007/978-3-030-61503-1_24 8. Hales, D., Rouchier, J., Edmonds, B.: Model-to-model analysis. J. Artif. Societ. Soc. Simul. 6(4), 5 (2003). http://jasss.soc.surrey.ac.uk/6/4/5.html 9. Lafuerza, L.F., Dyson, L., Edmonds, B., McKane, A.J.: Staged models for interdisciplinary research. PLoS ONE 11(6), e0157261 (2016). https://doi.org/10.1371/journal.pone.0157261 10. Tittensor, D. P., Eddy, T. D., Lotze, H. K., Galbraith, E. D., Cheung, W., Barange, M., Blanchard, J. L., Bopp, L., Bryndum-Buchholz, A., Büchner, M., Bulman, C., Carozza, D. A., Christensen, V., Coll, M., Dunne, J. P., Fernandes, J. A., Fulton, E. A., Hobday, A. J., Huber, V., … Walker, N. D.: A protocol for the intercomparison of marine fishery and ecosystem models: Fish-MIP v1.0. Geoscient. Model Dev. 11(4), 1421–1442 (2018). https://doi.org/10.5194/gmd-11-14212018 11. Wei, Y., Liu, S., Huntzinger, D.N., Michalak, A.M., Viovy, N., Post, W.M., Schwalm, C.R., Schaefer, K., Jacobson, A.R., Lu, C., Tian, H., Ricciuto, D.M., Cook, R.B., Mao, J., Shi, X.: The north american carbon program multi-scale synthesis and terrestrial model intercomparison project - Part 2: environmental driver data. Geoscient. Model Dev. 7(6), 2875–2893 (2014). https://doi.org/10.5194/gmd-7-2875-2014
12. Gates, W.L., Boyle, J.S., Covey, C., Dease, C.G., Doutriaux, C.M., Drach, R.S., Fiorino, M., Gleckler, P.J., Hnilo, J.J., Marlais, S.M., Phillips, T.J., Potter, G.L., Santer, B.D., Sperber, K.R., Taylor, K.E., Williams, D.N.: An overview of the results of the atmospheric model intercomparison project (AMIP I). In Bull. Am. Meteorol. Soc. 80(1), 29–55 (1999). https:// doi.org/10.1175/1520-0477(1999)080%3c0029:AOOTRO%3e2.0.CO;2 13. Climate model inter-comparison project 6. https://www.wcrp-climate.org/wgcm-cmip/wgcmcmip6. Last Accessed 19 May 2021 14. Eyring, V., Bony, S., Meehl, G.A., Senior, C.A., Stevens, B., Stouffer, R.J., Taylor, K.E.: Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geoscient. Model Dev. 9(5), 1937–1958 (2016). https://doi.org/10.5194/ gmd-9-1937-2016 15. Covid Forecast Hub. https://covid19forecasthub.org. Last Accessed 19 May 2021 16. Moshontz, H. + 85 others. The psychological science accelerator: advancing psychology through a distributed collaborative network 1(4) 501–515 (2018). https://doi.org/10.1177/251 5245918797607 17. Psychological Science Accelerator. https://psysciacc.org/. Last Accessed 19 May 2021 18. Jones, B.C., DeBruine, L.M., Flake, J.K., et al.: To which world regions does the valence– dominance model of social perception apply? Nat. Human Behav. 5, 159–169 (2021). https:// doi.org/10.1038/s41562-020-01007-2 19. Chattoe-Brown, E.: ‘Why questions like “Do Networks Matter?” Matter to methodology: how agent-based modelling makes it possible to answer them’, Int. J. Soc. Res. Methodol. (online first 2020). https://doi.org/10.1080/13645579.2020.1801602 20. Aodha, L., Edmonds, B.: Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B., Meyer, R. (eds.) Simulating Social Complexity—A handbook, 2nd edition. Springer, pp. 801–822 (2017). https://doi.org/10.1007/978-3-319-66948-9_29 21. Grimm, V., Berger, U., Bastiansen, F., Eliassen, S., Ginot, V., Giske, J., Goss-Custard, J., Grand, T., Heinz, S.K., Huse, G., Huth, A., Jepsen, J.U., Jørgensen, C., Mooij, W.M., Müller, B., Pe’er, G., Piou, C., Railsback, S.F., Robbins, A.M., Robbins, M.M., Rossmanith, E., Rüger, N., Strand, E., Souissi, S., Stillman, R.A., Vabø, R., Visser, U., DeAngelis, D.L.: A standard protocol for describing individual-based and agent-based models. Ecol. Model. 198(1–2), 115–126 (2006). https://doi.org/10.1016/j.ecolmodel.2006.04.023 22. Grimm, V., Augusiak, J., Focks, A., Frank, B. M., Gabsi, F., Johnston, A. S., ... Railsback, S. F.: Towards better modelling and decision support: documenting model development, testing, and analysis using TRACE. Ecol. Modell. 280, 129–139 (2014). https://doi.org/10.1016/j.eco lmodel.2014.01.018 23. Grimm, V., Railsback, S. F., Vincenot, C. E., Berger, U., Gallagher, C., DeAngelis, D. L., Edmonds, B., Ge, J., Giske, J., Groeneveld, J„ Johnston, A.S.A., Milles, A., Nabe-Nielsen, J., Polhill, J. G., Radchuk, V., Rohwäder, M-S., Stillman, R. A., Thiele, J. C., Ayllón, D. The ODD protocol for describing agent-based and other simulation models: a second update to improve clarity, replication, and structural realism. J. Artif. Societ. Soc. Simul. 23(2), 7 (2020). http:// jasss.soc.surrey.ac.uk/23/2/7.html. https://doi.org/10.18564/jasss.4259 24. Polhill, J. G., Edmonds, B.: Open access for social simulation. J. Artif. Societ. Soc. Simul. 10(3), 10 (2007). http://jasss.soc.surrey.ac.uk/10/3/10.html 25. Squazzoni, F., Polhill, J. 
G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., ... Gilbert, N.: Computational models that matter during a global pandemic outbreak: a call to action. J. Artif. Societ. Soc. Simul. 23(2), 10 (2020). http://jasss.soc.surrey.ac.uk/23/2/10.html. https://doi.org/ 10.18564/jasss.4298 26. Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., MontañolaSales, C., Ormerod, P., Root, H., Squazzoni, F.: Different modelling purposes. J. Artif. Societ. Soc. Simul. 22(3), 6 (2019). http://jasss.soc.surrey.ac.uk/22/3/6.html. https://doi.org/10.18564/ jasss.3993
27. Patterson, M. M.: The coming influenza pandemic: lessons from the past for the future. J. Osteopathic Med. 105(11), 498–500 (2005). https://doi.org/10.7556/jom_2005_11.0001 28. Bithell, M., Edmonds, B.: The systematic comparison of agent-based policy models—It’s time we got our act together! Review of Artificial Societies and Social Simulation, 11th May 2021 (2020). https://rofasss.org/2021/05/11/SystComp/
Simulating Delay in Seeking Treatment for Stroke Due to COVID-19 Concerns with a Hybrid Agent-Based and Equation-Based Model Elizabeth Hunter , Bryony L. McGarry , and John D. Kelleher
Abstract COVID-19 has caused strain on healthcare systems worldwide and concern within the population over this strain and the chances of becoming infected has reduced the likelihood of people seeking medical treatment for other health events. Stroke is a medical emergency and swift treatment can make a difference in outcomes. Understanding how concern over the COVID-19 pandemic impacts the time delay in seeking treatment after a stroke can help understand both the long-term cost implications and how to target individuals to remind them of the importance of seeking treatment. We present an agent-based model to simulate the delay in seeking treatment for stroke due to concerns over COVID-19 and show that small changes in behaviour impact the average delay in seeking treatment. We find that introducing control measures and having multiple smaller peaks of the pandemic results in less delay in seeking treatment compared to a scenario with one large peak. Keywords Agent-based model · Hybrid model · Stroke · COVID-19
1 Introduction

The COVID-19 pandemic has a wider impact than just those infected by the virus. All parts of society have been affected, and it has been shown in many countries that there are additional excess deaths during the pandemic that are not directly explained by the deaths due to COVID-19 infection [2, 26]. Thus, the full impact of the pandemic on the health care system is yet to be determined. There is speculation that excess deaths may be attributable to a delay in seeking treatment for many non-communicable diseases due to concerns over hospital overcrowding and being exposed to the virus
leading to a 'watch-and-wait' approach [10, 17]. Here we focus on the impact of COVID-19 on treatment-seeking behaviour of stroke sufferers, as it is one disease where minimising the time from symptom onset to treatment is particularly crucial to the patient's outcome [19]. Worldwide, stroke is the second leading cause of death, a major cause of adult disability [13] and one of the most expensive neurological conditions [23]. Ischaemic strokes are the most common [13] and are defined as an episode of neurological dysfunction caused by focal cerebral infarction [24]. Early treatment has been associated with better outcomes [9, 16]. Stroke caused by intracerebral haemorrhage (ICH) is an episode of neurological dysfunction due to a non-traumatic bleed within the brain or ventricular system [24]. Rapid care has been associated with improved case-fatality rates [21, 22]. Both ischaemic stroke and stroke caused by ICH are considered neurological emergencies and require fast diagnosis and treatment to minimise the short- and long-term health impacts [21]. Estimating the time delay in seeking treatment across stroke patients caused by concerns about COVID-19 highlights the cost of COVID-19 on stroke treatment. These costs include COVID-19's effect on direct costs, such as acute stroke care, long-term hospitalisation and treatment (medication, physiotherapy), rehabilitation, reintegration and quality of life, and indirect costs, such as productivity loss from unemployment, informal caregiving and premature mortality [7, 14]. In this paper, we use a hybrid agent-based and equation-based model to simulate the delays in seeking medical care for a stroke due to concern about the COVID-19 pandemic. In the following sections, we describe the model, then discuss the experiments run with the model and, finally, discuss the results.
2 Model

At its core, the model is an agent-based model that simulates stroke incidence within a population to determine the delay in seeking stroke treatment. Agent-based models for infectious disease spread have four main components: Society, Environment, Disease and Transportation [12]. Although stroke is not an infectious disease, our model also has those four main components. The next sections describe the components in more detail.
2.1 Society

The society component uses Irish census data [4] to accurately simulate the age and sex distribution of those 50 and older in Ireland (those aged 50 and older make up approximately 30.4% of the Irish population). We choose those 50 and older as the majority of strokes in Ireland occur in those over the age of 65 [20] and models
Table 1 Age and sex breakdown of agents in the model

Age group | Male | Female
Under 65 | 400,768 | 408,125
65 to 79 | 238,579 | 250,396
Over 80 | 58,258 | 90,334
predicting stroke and cardiovascular risk often start at the ages of 40 or later [3, 5]. Each agent has both an age and a sex. Table 1 shows the number of agents in three age groups (under 65, 65 to 79, and over 80) by sex.
2.2 Environment

The environment component of our model is an equation-based component. Typically, the environment component of an agent-based model for infectious disease spread represents a geography that agents move through. However, instead of representing the geography of a region, here we consider the environment to be cases of COVID-19. We do not consider the COVID-19 disease status of the agents in the model, but each agent knows the number of people who have recently tested positive for COVID-19 in the country. As the number of cases increases, so too does the agent's concern regarding the pandemic and their hesitancy in seeking medical treatment post stroke. Concern is discussed more in Sect. 2.4.
To determine the number of COVID-19 cases, we use a difference equation model based on the Irish SEIR population-level model [8]. We use difference equations instead of differential equations because the difference equations use discrete time steps and are thus more analogous to the agent-based model. Each time step in the model is a day, and for each time step the difference equations determine the number of agents who have tested positive. This number informs the agents of the current state of the pandemic in Ireland and factors into their decision on waiting to go to the hospital after having a stroke. Although the agents only represent the population in Ireland over age 50, the COVID-19 SEIR model is run for all of Ireland. The initial conditions are set to roughly mimic the start of the pandemic when there are only a few agents infected in the country. The model starts with 4,937,769 susceptible agents; 6 exposed agents; 12 infectious agents and 0 recovered agents. Of the 12 infectious, 1 is pre-symptomatic, 5 are asymptomatic, 1 is isolating, 2 are waiting for tests and isolating, 2 have tested positive and 1 is not isolating.
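The sketch below illustrates the kind of discrete-time (difference-equation) SEIR update described here. It is not the Irish population-level model itself: the transition rates, the single aggregated infectious compartment and the use of new infections as a rough proxy for positive tests are all simplifying assumptions made for illustration.

```python
# Minimal illustrative sketch of one daily difference-equation SEIR step.
# Rates (beta, sigma, gamma) are assumed values, not those of the model in [8],
# and the detailed infectious sub-compartments are collapsed into one.
def seir_step(S, E, I, R, beta=0.25, sigma=1 / 5, gamma=1 / 7):
    N = S + E + I + R
    new_exposed = beta * S * I / N      # susceptible -> exposed
    new_infectious = sigma * E          # exposed -> infectious
    new_recovered = gamma * I           # infectious -> recovered
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_recovered,
            R + new_recovered,
            new_infectious)             # rough stand-in for newly detected cases

state = (4_937_769, 6, 12, 0)           # initial conditions from the text
daily_cases = []
for day in range(365):                  # one step per day, as in the model
    *state, new_cases = seir_step(*state)
    daily_cases.append(new_cases)
```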
Table 2 Absolute risk of stroke by age and sex

Age group | Male (%) | Female (%)
Under 65 | 0.1 | 0.05
65 to 79 | 0.4 | 0.2
Over 80 | 0.9 | 0.7
2.3 Disease

The disease component of the model determines the stroke incidence in the population. This component determines whether an agent has a stroke on a given day within the model and has been created so that the incidence of stroke in the agent population matches stroke incidence in the actual population. We use an estimate of absolute stroke risk for agents. This risk is calculated for the six different groups of agents outlined in Table 1. Absolute risk is defined as the number of events, in this case strokes, in the group divided by the total number of people in the group. To get the total number of people in each of our six groups, we use Irish Census data, and to get the number of strokes in each group, we use statistics from the National Stroke Register Report 2018 [20], which provides the total number of strokes in Ireland in 2018 broken down by sex, together with the percentage of strokes in the three age groups (less than 65, 65 to 79 and over 80). Table 2 shows the absolute risk for the six age and sex categories within a year. To determine the risk of stroke on a given day, we divide the yearly risk by 365. Then at each time step, each agent samples a number from a uniform probability distribution between 0 and 1. If the number sampled is less than the daily risk of stroke for their demographic, the agent will have a stroke.
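A minimal sketch of this daily Bernoulli draw, using the yearly risks of Table 2, might look as follows; the group keys and function names are illustrative assumptions, not the authors' implementation.

```python
import random

# Yearly absolute stroke risk per (sex, age group), taken from Table 2.
YEARLY_RISK = {
    ("male", "under65"): 0.001,  ("female", "under65"): 0.0005,
    ("male", "65to79"):  0.004,  ("female", "65to79"):  0.002,
    ("male", "over80"):  0.009,  ("female", "over80"):  0.007,
}

def has_stroke_today(sex, age_group, rng=random):
    daily_risk = YEARLY_RISK[(sex, age_group)] / 365  # yearly risk spread over days
    return rng.random() < daily_risk                  # uniform draw in [0, 1)
```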
2.4 Transportation

The transportation component of the model determines the delay between an agent having a stroke and arriving at the hospital for treatment. After a stroke, an agent chooses one of four behaviours (Self Treat, Wait and See, Seek Advice, Seek Medical Advice), and each behaviour results in a different average time to hospital arrival. The behaviours, the resulting time delays in seeking treatment, and the percent of strokes that follow each behaviour are taken from [18]. Table 3 shows the baseline percent chance that an agent who had a stroke will follow each of the four behaviours and the resulting time to reach the hospital. After a stroke, an agent decides which behaviour they will take, and this determines their personal delay to treatment time. To make this decision, an agent samples a number from a uniform probability distribution between 0 and 1. They then use thresholds based on the percents in
Table 3 Post stroke behaviours and time to reach the hospital [18]

Behaviour | Percent | Time (hrs)
Self treat | 6 | 11.5
Wait and see | 32 | 13.25
Seek advice | 12 | 4.25
Seek medical advice | 50 | 2
Table 4 Thresholds for different levels of agent concern

Number of cases | Concern
Less than 50 | 0
50 to 100 | 5
100 to 200 | 5.6
200 to 1000 | 6
1000 to 3000 | 7
3000 to 10000 | 7.2
10000 to 15000 | 7.3
15000 to 20000 | 7.4
20000 to 21000 | 7.5
21000 plus | 8
Table 3 to determine which behaviour they will choose. If the random number is less than 0.06 they will self-treat; if it is greater than 0.06 and less than 0.38 they will wait and see; if it is greater than 0.38 and less than 0.50 they will seek advice; and if it is greater than 0.50 they will seek medical advice. The probabilities that an agent will choose each of the behaviours presented here are for the baseline scenario and will change based on the level of concern an agent has over the number of COVID-19 cases in the environment. As the number of cases of COVID-19 increases, agents become more concerned about the pandemic. The concern levels of agents are based on the Amárach public health surveys in Ireland [1]. Concern levels from the survey show higher levels of worry occurring around the peaks of the different waves of the pandemic. Table 4 lists the thresholds for changing the mean concern level in the model. We do not assume that all agents have the same level of concern. Agents are assigned a concern level using a normal distribution with the mean concern level from Table 4 and a standard deviation of 0.5. The higher the concern level of the agent, the less likely they are to seek medical advice and the more likely they are to self-treat, wait and see, or seek non-medical advice. Agents' behaviours and concern are not related to demographics; all agents are equally likely to select a given behaviour or concern level.
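A possible sketch of this mechanism is given below, using the baseline cut-points of Table 3; in the concern-dependent scenarios the cut-points of Table 5 would replace these values. The data structure and function names are assumptions made for illustration.

```python
import random

# Baseline behaviour thresholds (from Table 3) and mean times to hospital (hours).
BEHAVIOURS = [
    ("self treat", 0.06, 11.5),
    ("wait and see", 0.38, 13.25),
    ("seek advice", 0.50, 4.25),
    ("seek medical advice", 1.00, 2.0),
]

def draw_concern(mean_concern, rng=random):
    """Agent-level concern: normal around the Table 4 mean with SD 0.5."""
    return rng.gauss(mean_concern, 0.5)

def choose_behaviour(rng=random):
    u = rng.random()                      # uniform sample in [0, 1)
    for name, threshold, hours in BEHAVIOURS:
        if u < threshold:                 # first interval the sample falls into
            return name, hours
```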
2.5 Schedule The model runs on discrete time steps with each time step equating to a single day in the model. At each time step the COVID-19 model determines the number of people
who have tested positive in the country; the agents then determine their level of concern, whether they had a stroke on that day and, if so, how long the delay was before they reached the hospital.
3 Experiment

To look at how the level of agent concern impacts the average delay time in seeking treatment after a stroke, we run a number of different scenarios:
1. A baseline scenario with no COVID-19 cases.
2. A scenario with no COVID-19 restrictions, resulting in one peak of cases.
3. A scenario with rolling lockdowns that results in multiple peaks of cases.
Scenarios 2 and 3 are run twice to account for different levels of behaviour change based on concern: one low level of behaviour change, where agents are only slightly less likely to seek medical advice, and one high level, where agents are much less likely to seek medical advice. In all three scenarios agents choose their behavioural response to a stroke using the method discussed in the transportation section. What varies between the scenarios is the probability of each behaviour. Table 5 shows the probability that an agent will choose a behaviour based on the scenario and their concern. In this table, each cell in the behaviour columns records the probability of the behaviour and (in brackets) the thresholds defining the interval that the random number an agent samples must fall within in order for the agent to adopt that behaviour. For each scenario, the model is run 25 times to account for stochasticity in the model. The number of runs was determined using the method in [11].
4 Results The following sections discuss the results from the different experiments. For each scenario, we look at the average delay in seeking treatment for stroke over the year. The average delay is examined in relation to the number of COVID-19 cases and the average concern across all agents in the model for a given day. Finally, we compare all of the scenarios to look at the potential increase in delay in treatment due to concern about the COVID-19 pandemic.
4.1 Baseline Scenario

The baseline scenario for our model is a situation where there are no background cases of COVID-19; as such, the agents do not have any concern about hospital overcrowding or COVID-19 infection at the hospital. Thus, agents follow the post-stroke
Table 5 Probability (and sample interval) for behaviour by scenario and concern Scenario Concern Self care Wait & see Non-medical Medical advice advice Low level of behavioural change with concern
High level of behavioural change with concern
7
0.09 (0, 0.09]
7
0.12 (0, 0.12]
0.14 (0.42, 0.56] 0.16 (0.46, 0.62] 0.18 (0.50, 0.68]
0.44 (0.56, 1)
6
0.34 (0.08, 0.42] 0.36 (0.10, 0.46] 0.38 (0.12, 0.50]
0.45 (0.55, 1) 0.41 (0.59, 1) 0.50 (0.50, 1)
0.38 (0.62, 1) 0.32 (0.68, 1)
behaviours and timing in Table 3. To look at the delay in seeking stroke treatment, we first look at the average delay across a whole model run or a whole year. For each run, we find the average delay across all agents who have had a stroke during the year and then take the average of that across the 25 model runs. Similarly, we find the median delay for all agents who have had a stroke during the year and then take the average of the medians across the 25 model runs. Table 6 shows the average median delay for seeking treatment and the average delay for seeking treatment across the 25 runs, as well as the maximum and minimum values and standard deviation. From the table, we can see that while there is a higher average delay, the median is lower, showing that 50% of agents have stroke treatment within 2.9 h. This makes sense based on the agents’ responses to stroke determined from Table 3 where 50% of agents should seek medical care upon having a stroke which would result in a delay of approximately 2 h to treatment.
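The aggregation over the 25 runs described above could be sketched roughly as follows; the data structure (a list of per-run delay lists) is a hypothetical illustration, not the authors' code.

```python
import statistics

def summarise(runs):
    """runs: a list of 25 lists, each holding the delay (hours) of every
    agent who had a stroke in one model run."""
    per_run_means = [statistics.mean(r) for r in runs]
    per_run_medians = [statistics.median(r) for r in runs]
    return {
        "average delay": statistics.mean(per_run_means),      # mean of run means
        "average median delay": statistics.mean(per_run_medians),
        "max": max(per_run_means),
        "min": min(per_run_means),
        "sd": statistics.stdev(per_run_means),
    }
```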
4.2 Single Peak The next scenario we look at is a scenario that has a single high peak of COVID-19 cases. This scenario would correspond to a real world situation where no intervention
Table 6 Delay in seeking stroke treatment in the baseline scenario

 | Average delay (hrs) | Median delay (hrs)
Average | 6.4 | 2.9
Maximum | 6.7 | 4.3
Minimum | 6.2 | 2.0
Standard deviation | 0.14 | 0.80
Fig. 1 Total cases tested positive by day and average concern by day
measures were taken to slow the spread of COVID-19. Figure 1a shows the total number of cases tested positive for COVID-19 in this scenario, and Fig. 1b shows the average level of concern across all agents due to COVID-19 in the scenario. The high number of cases and high concern in the first 100 days of the year results in agents adapting their behaviours upon having a stroke. To account for different levels of behaviour change, we look at both a high and low behaviour change scenario. The corresponding changes in behaviour for high and low impact were previously discussed in Table 5. Table 7 shows the average delay in seeking treatment and the median delay in seeking treatment across the 25 runs for both the high and low impact scenarios. From the table, we can see that there is a slight increase in the average delay across the runs and average of the median delay across the runs going from a scenario where concern over COVID-19 has a low impact on behaviours post-stroke versus a higher impact.
4.3 Multiple Peaks

The next scenario, which is more realistic in terms of COVID-19 cases, represents a set of rolling restrictions that lead to multiple peaks in cases of COVID-19 throughout the year. The timing and size of the peaks were set to mimic the cases that occurred in Ireland during 2020. Figure 2a shows the total number of cases tested positive for
Table 7 Delay in seeking stroke treatment with a single peak of COVID-19

Behaviour change | Average | Maximum | Minimum | Standard deviation
Average delay, High | 6.9 | 7.3 | 6.6 | 0.14
Average delay, Low | 6.7 | 6.9 | 6.5 | 0.10
Median delay, High | 4.2 | 4.5 | 4.0 | 0.12
Median delay, Low | 4.1 | 4.5 | 3.6 | 0.18
Fig. 2 Total cases tested positive by day and average concern by day

Table 8 Delay in seeking stroke treatment with multiple peaks of COVID-19

Behaviour change | Average | Maximum | Minimum | Standard deviation
Average delay, High | 6.8 | 7.3 | 6.6 | 0.17
Average delay, Low | 6.6 | 6.9 | 6.4 | 0.11
Median delay, High impact | 4.2 | 4.5 | 4.0 | 0.14
Median delay, Low impact | 4.2 | 4.4 | 3.8 | 0.15
COVID-19 in this scenario, and Fig. 2b shows the average level of concern across all agents due to COVID-19 in the scenario. With multiple infection peaks, we see a sustained high level of concern for most of the year. However, while the concern levels in the scenario with one peak reach a maximum of about 8 in Fig. 1b the concern in the scenario with multiple peaks has a maximum of just under 7. Similar to the scenario with a single peak, we look at both a high and low behaviour change scenario. Table 8 shows the average delay in seeking treatment and the median delay in seeking treatment across the 25 runs for both the high and low impact scenarios. From the table, we can see that the high and low impact on behaviours seems to have a larger impact on the average delay in seeking stroke treatment, whereas the median delay is not as impacted.
Table 9 Change in delay between the baseline and COVID-19 scenarios

Scenario | Behaviour change | Percent change | p-value
Single peak | High | 7.8 | 2.2e−16
Single peak | Low | 4.7 | 1.6e−10
Multiple peaks | High | 6.3 | 1.9e−13
Multiple peaks | Low | 3.1 | 5.0e−8
4.4 Comparison of Scenarios

To compare the scenarios, we look at the percent change in the delay of seeking stroke treatment between the baseline and each of the other scenarios and perform a one-sided t-test to determine if the average delay in treatment is greater in the COVID-19 scenarios compared to the baseline. Table 9 shows the percent change and p-value for each of the four scenarios discussed in the previous sections. Looking at the table, we can see that for all of the t-tests comparing the average delay to the baseline, we have p-values that are much less than 0.05, showing that the average delays in the scenarios where agents change their behaviour due to COVID-19 are significantly greater than the average delay in the baseline scenario with no COVID-19 cases. Additionally, we see that even though the single peak scenario does not have sustained concern throughout the year as the multiple peaks scenario does, the higher level of concern during the single peak leads to a higher percent change in delay from the baseline. As expected, for both the single peak and the multiple peaks scenarios, the versions where concern has a higher impact on agents' behaviour result in a larger difference from the baseline compared to when concern has a lower impact on behaviour.
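One possible way to compute such a comparison is sketched below; whether the authors used independent-sample tests on run-level averages is not stated, so the input lists and the test variant are assumptions for illustration.

```python
from scipy import stats

def compare(baseline, scenario):
    """baseline, scenario: hypothetical lists of 25 run-level average delays."""
    base_mean = sum(baseline) / len(baseline)
    scen_mean = sum(scenario) / len(scenario)
    pct_change = 100 * (scen_mean - base_mean) / base_mean
    # One-sided test, H1: scenario delays are greater than baseline delays.
    t_stat, p_value = stats.ttest_ind(scenario, baseline, alternative="greater")
    return pct_change, p_value
```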
5 Conclusion

Our model results show that if stroke patients change their behaviours in seeking treatment after stroke symptoms due to concern over hospital capacity or the possibility of COVID-19 infection, there could be a significant increase in the delay in patients arriving at the hospital and thus a delay in receiving the appropriate treatment. This is not to be confused with delay in acute care once patients have arrived at the hospital, which may vary according to the hospital site; a multi-centre study [10] found that time-to-treatment in acute stroke care during the pandemic did not differ from pre-pandemic timing. The results show that even though the single peak scenario results in no cases and low concern after about 100 days, the higher levels of concern due to higher cases at the peak result in more delay in seeking treatment. This suggests that introducing measures to control the pandemic will not only save lives lost to COVID-19 but might also save lives lost to stroke.
It is important to note that the model is not intended to predict the actual hours that individuals who have had a stroke are delayed in seeking treatment but to show the possible impact on delays. We see that even in scenarios with low behaviour change there is still an impact on the delay in seeking treatment. As it is essential to treat stroke as rapidly as possible to ensure the best outcomes, even a small delay in time when seeking treatment could have a significant influence on outcomes which may result in higher post-stroke medical costs. Our results highlight the importance of considering not just the impact of COVID-19 cases on the healthcare system but also on the long term impacts of the pandemic on stroke outcomes. In future scenarios where there are concerns about hospital capacity it may be necessary to target individuals at risk of stroke with information about the importance of seeking rapid medical care. Our model does not take into account any factors beyond a patient’s age, gender, and the cases of COVID-19 in the background. We see the model as a proof of concept and first step in simulating treatment delays. Future work on the model could include additional factors such as distance to a hospital and other risk factors, such as smoking or diabetes that impact an agent’s risk of stroke. Additionally, network effects and threshold effects could be included. If members of an agent’s network were infected with COVID-19 their concern may increase, and agent’s concern thresholds could vary based on an agent’s characteristics. Delays in seeking treatment may also result from social distancing measures introduced during the COVID pandemic [10]. In pre-pandemic times 96% of stroke emergency calls were activated by caregivers and witnesses [6, 25]. Instructions to isolate means there is less opportunity for the symptoms to be witnessed [10]. Before the COVID-19 pandemic, stroke sufferers’ delays in seeking treatment resulted in worse functional outcomes [19]. This may be especially so during a pandemic, as patients with severe stroke would require longer hospitalisation, potentially increasing their exposure to in-hospital pathogens and placing further constraints on hospital resources [15]. Funding This project received funding from the EU’s Horizon 2020 research and innovation programme under grant agreement No. 777107, and by the ADAPT Centre for Digital Content Technology funded under the SFI Research Centres Programme (Grant 13/RC/2106_P2) and cofunded under the European Regional Development Funds.
References 1. Amárach Research.: Public opinion tracking research: 10/05/21. Department of Health (2020). https://www.gov.ie/en/collection/6b4401-view-the-amarach-public-opinion-survey/. Accessed 13 May 2021 2. Brant, L.C.C., et al.: Excess of cardiovascular deaths during the covid-19 pandemic in Brazilian capital cities. Heart 106(24), 1898–1905 (2020). https://doi.org/10.1136/heartjnl-2020317663, https://heart.bmj.com/content/106/24/1898
3. Conroy, R., et al.: Estimation of ten-year risk of fatal cardiovascular disease in Europe: the SCORE project. Eur. Hear. J. 24(11), 987–1003 (2003). https://doi.org/10.1016/S0195668X(03)00114-3 4. CSO: Census 2011 boundary files (2014). http://www.cso.ie/en/census/ census2011boundaryfiles/. Accessed 26 May 2016 5. D’Agostino, R.B., Wolf, P.A., Belanger, A.J., Kannel, W.B.: Stroke risk profile: adjustment for antihypertensive medication. The Framingham Study. Stroke 25(1), 40–43 (1994). https://doi. org/10.1161/01.STR.25.1.40 6. Dhand, S., O’Connor, P., Hughes, C., Lin, S.P.: Acute ischemic stroke: acute management and selection for endovascular therapy. Semin. Interv. Radiol. 37(02), 109–118 (2020). https://doi. org/10.1055/s-0040-1709152 7. Girotra, T., Lekoubou, A., Bishu, K.G., Ovbiagele, B.: A contemporary and comprehensive analysis of the costs of stroke in the United States. J. Neurol. Sci. 410, 116643 (2020). https:// doi.org/10.1016/j.jns.2019.116643 8. Gleeson, J.P., Murphy, T.B., O’Brien, J.D., O’Sullivan, D.J.P.: A population-level SEIR model for covid-19 scenarios (updated). In: Irish Epidemiological Modelling Advisory Group to NPHET - Technical Notes (2020). https://www.gov.ie/en/publication/dc5711-irishepidemiology-modelling-advisory-group-to-nphet-technical-notes/. Accessed 13 May 2021 9. Gumbinger, C., et al.: Time to treatment with recombinant tissue plasminogen activator and outcome of stroke in clinical practice: retrospective analysis of hospital quality assurance data with comparison with results from randomised clinical trials. BMJ 348 (2014). https://doi.org/ 10.1136/bmj.g3429. BMJ Publishing Group Ltd 10. Hoyer, C., et al.: Acute stroke in times of the COVID-19 pandemic: a multicenter study. Stroke 51(7), 2224–2227 (2020). https://doi.org/10.1161/STROKEAHA.120.030395 11. Hunter, E., Kelleher, J.D.: A framework for validating and testing agent-based models: a case study from infectious diseases modelling. In: 34th annual European Simulation and Modelling Conference (2020). https://doi.org/10.21427/2xjb-cq79 12. Hunter, E., Mac Namee, B., Kelleher, J.D.: A taxonomy for agent-based models in human infectious disease epidemiology. J. Artif. Soc. Soc. Simul. 20(3), 2 (2017). https://doi.org/10. 18564/jasss.3414, http://jasss.soc.surrey.ac.uk/20/3/2.html 13. Johnson, C.O., et al.: Global, regional, and national burden of stroke, 1990–2016: a systematic analysis for the global burden of disease study 2016. Lancet Neurol. 18(5), 439–458 (2019). https://doi.org/10.1016/S1474-4422(19)30034-1. Elsevier 14. Joo, H., George, M.G., Fang, J., Wang, G.: A literature review of indirect costs associated with stroke. J. Stroke Cereb.Vascular Dis.: Off. J. Natl. Stroke Assoc. 23(7), 1753–1763 (2014). https://doi.org/10.1016/j.jstrokecerebrovasdis.2014.02.017 15. Khosravani, H., et al.: Protected code stroke: hyperacute stroke management during the coronavirus disease 2019 (COVID-19) pandemic. Stroke 51(6) (2020). https://doi.org/10.1161/ STROKEAHA.120.029838 16. Lees, K.R., et al.: Time to treatment with intravenous alteplase and outcome in stroke: an updated pooled analysis of ECASS, ATLANTIS, NINDS, and EPITHET trials. Lancet (London, England) 375(9727), 1695–1703 (2014). https://doi.org/10.1016/S0140-6736(10)604916 17. Liu, R., Zhao, J., Fisher, M.: The global impact of COVID’19 on acute stroke care. CNS Neurosci. Ther. 26(10), 1103–1105 (2020). https://doi.org/10.1111/cns.13442, https:// onlinelibrary.wiley.com/doi/10.1111/cns.13442 18. 
Mandelzweig, L., Goldbourt, U., Boyko, V., Tanne, D.: Perceptual, social, and behavioral factors associated with delays in seeking medical care in patients with symptoms of acute stroke. Stroke 37(5), 1248–1253 (2006). https://doi.org/10.1161/01.STR.0000217200.61167. 39. American Heart Association 19. Matsuo, R., et al.: Association between onset-to-door time and clinical outcomes after ischemic stroke. Stroke 48(11), 3049–3056 (2017). https://doi.org/10.1161/STROKEAHA.117.018132 20. McCormack, R.C.J.: National Stroke Register Report 2018. National Clinical Programme for Stroke (NCPS) (2018). https://www.hse.ie/eng/about/who/cspd/ncps/stroke/resources/2018national-stroke-register-report.pdf. Accessed 13 May 2021
21. McGurgan, I.J., et al.: Acute intracerebral haemorrhage: diagnosis and management. Pract. Neurol. 21(2) (2021). https://doi.org/10.1136/practneurol-2020-002763. BMJ Publishing Group Ltd 22. Parry-Jones, A.R., et al.: An intracerebral hemorrhage care bundle is associated with lower case fatality. Ann. Neurol. 86(4), 495–503 (2019). https://doi.org/10.1002/ana.25546 23. Rajsic, S., et al.: Economic burden of stroke: a systematic review on post-stroke care. Eur. J. Health Econ. 20(1), 107–134 (2019). https://doi.org/10.1007/s10198-018-0984-0 24. Sacco, R.L., et al.: An updated definition of stroke for the 21st century: a statement for healthcare professionals from the American Heart Association/American Stroke Association. Stroke 44(7), 2064–2089 (2013). https://doi.org/10.1161/STR.0b013e318296aeca 25. Wein, T.H., et al.: Activation of emergency medical services for acute stroke in a nonurban population. Stroke 31(8), 1925–1928 (2000). https://doi.org/10.1161/01.STR.31.8.1925. American Heart Association 26. Woolf, S.H., et al.: Excess deaths from COVID-19 and other causes, March-July 2020. JAMA 324(15), 1562–1564 (2020). https://doi.org/10.1001/jama.2020.19545
Modelling Energy Security: The Case of Dutch Urban Energy Communities Javanshir Fouladvand, Deline Verkerk, Igor Nikolic, and Amineh Ghorbani
Abstract Energy communities are gaining momentum in the context of the energy transition. Given the distributed and collective action nature of energy communities, energy security of these local energy systems is more than just security of supply and related to issues such as affordability and acceptability of energy to members of the community. We build an agent-based model of energy communities to explore their security challenges. The security dimensions we consider are availability, affordability, accessibility and acceptability, which are referred to as the 4As. The results confirmed that there is always a trade-off between all four dimensions and that although it is difficult to achieve a high energy security performance, it is feasible. Results also showed that among factors influencing energy security, the investment of the community plays the biggest role. Keywords Energy security · Energy community · Renewable energy technologies · Agent-based modelling and simulation (ABMS)
1 Introduction

The biggest potential to reduce greenhouse gas emissions lies in the energy sector [1]. In this line, shifting from centralized energy systems to decentralized renewable energy technologies (RETs) is expected to fundamentally contribute to the goals of the energy transition [2]. Therefore, local community initiatives, namely energy communities, as one of the possible approaches to enlarge the share of local RETs, are gaining momentum [3].
Community energy systems (CES) contribute to the local generation, distribution and consumption of RETs [4]. Although there are different definitions of CES in
the literature, a CES can be defined as "people in a neighbourhood, who invest in RETs jointly and generate the energy they consume" [5]. This definition and other ones in the literature (e.g. [6]) all emphasize the collective action of individuals in decision-making processes and actions within CES [7]. A crucial topic to consider for CES is the energy security of these energy systems [8]. Energy security is a complex concept [9], and various disciplines such as public policy, economics, and engineering contribute to its definition [10]. There are more than 45 definitions of energy security in the literature [10]. For instance, the Asia Pacific Energy Research Center (APERC) definition is: "The ability of an economy to guarantee the availability of energy resource supply in a sustainable and timely manner with the energy price being at a level that will not adversely affect the economic performance of the economy" [10]. All of these definitions mainly consider conventional energy systems, namely centralized, fossil-fuel-based and national energy systems [10]. However, these definitions are yet to be explored at the community level, to match the unique characteristics of CES such as being based on collective and distributed renewable energy generation. Thus, in this study we explore the energy security of CES using Agent-Based Modelling (ABM), given the bottom-up, collective nature of these energy systems. Although there are already many existing models of CES (e.g. [4, 11–13]), none has addressed the security of these systems or, indeed, energy security in general. The goal of the model is to explore the impact of various parameters on CES's energy security. The ABM is developed based on the 4As energy security concept [10].
2 4A's Energy Security Concept

Among many definitions of energy security, one of the best known and frequently used definitions is the 4As concept proposed by the APERC: availability, accessibility, affordability and acceptability [14]. The 4As definition provides room to capture the collective nature and decentralized characteristic of CES and is therefore selected as the core definition of energy security for this modelling exercise.
Availability is about the physical existence of the energy resources to be used for the energy system [9]. An indicator to measure availability is the domestic energy generation per capita of an energy system (either by fossil or renewable energy) [14]. Another indicator is the shortage percentage; shortages occur when there is a mismatch in demand–supply and individuals are therefore disconnected from energy supplies [15].
Affordability is related to the costs of the energy system and whether it is affordable or not [14]. Among different indicators of affordability, energy price is the most common [9]. The size of investments made in order to improve energy security is another affordability indicator in the literature [9].
Accessibility can be defined as having sufficient access to commercial energy to promote an equal society [14]. Diversification of energy resources is a popular
indicator to increase and measure accessibility [9]. Diversity indexes provide a means of quantifying the diversity in energy supply, in order to eliminate supply risks [9]. Multiple integrated diversity indicators are presented in the literature, such as the Shannon index [9].
Acceptability refers to the social opinion and public support towards energy sources [9]. This is often linked to societal elements such as welfare, fairness and environmental issues [16]. Although APERC uses an economy's effort to switch away from carbon-intensive fuels as an indicator for acceptability [14], the carbon content and the CO2 emission of an energy system as a whole are also suggested as indicators for acceptability [9].
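As a brief illustration of the Shannon index mentioned above, applied to the shares of a community's energy supply mix, one could compute it as follows; the example shares are hypothetical and not taken from the model.

```python
import math

def shannon_index(shares):
    """shares: fractions of total supply per energy source, summing to 1."""
    return -sum(p * math.log(p) for p in shares if p > 0)

# e.g. 50% collective solar PV, 30% individual heat pumps, 20% national grid
diversity = shannon_index([0.5, 0.3, 0.2])
```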
3 Research Methods and Data

3.1 Agent-Based Modelling (ABM)

In the limited literature on the energy security of CES, optimization is the main approach (e.g. [17]). These studies do not capture the complexities and trade-offs of decision-making processes with regard to energy security. However, as these systems are based on the collective action of individuals who have different motivations and criteria to make decisions, this is an important aspect to be studied. ABM provides the opportunity to capture these individual behavioural choices and their collective action. ABM also provides the ability to add the time variable, which makes it possible to examine different energy security scenarios. This is important, as individual decisions, the trade-offs related to energy security, and the ability to adopt and learn from each other towards collective energy generation influence all four dimensions of energy security of CES. The ABM developed in this study is described in detail in Sect. 4, using the ODD protocol [18].
3.2 Parameterizing Using Dutch Data

Data from the Netherlands Environmental Assessment Agency (PBL) and Statistics Netherlands (CBS) are used to parameterize the model. To model the decision-making processes of agents, data from a survey among 599 Dutch citizens about their motivations for joining CES [19] is used, which will be further explained in Sects. 4.2 and 4.3. Furthermore, the one-factor-at-a-time (OFAT) approach is used to analyze the sensitivity of model outcomes to various parameter inputs.
4 Model Conceptualization and Implementation

4.1 Modelling Purpose

The purpose of the model is to explore energy security of CES, as collective and distributed RETs.1 This is done by investigating the impact of various parameters (see Sect. 4.7) on energy security of such energy systems.
4.2 Entities and State Variables

Households are the only agents in the model. They use the national electricity grid and natural gas before joining a CES. We assume that these agents are in one neighbourhood and have already decided to join a CES at the start of the simulation. The attributes of the households are energy demand, budget and internal motivations (which change during the simulation based on their network). Following [5, 19], the motivations taken into account are energy independence, trust, environmental concern and economic benefits, each having a value between 0 and 10 (0 weakest, 10 strongest). Being members of a CES, the households have three energy choices, namely (i) a collective renewable energy (RE) system, (ii) an individual RE system, and (iii) the national grid. The latter two are selected by individual households, in addition, if the energy provided collectively does not meet their individual demands.
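A household agent with these state variables could be sketched as below; the field names, units and default values are assumptions made for illustration, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Household:
    energy_demand: float                      # e.g. kWh per month (assumed unit)
    budget: float                             # available investment budget
    motivations: Dict[str, int] = field(default_factory=lambda: {
        "independence": 5, "trust": 5, "environment": 5, "economic": 5})
    energy_choice: str = "national_grid"      # later: "collective_RE" or "individual_RE"
```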
4.3 Interactions, Network and Adaptation

The households are connected using a small-world network, commonly used in the context of CES (e.g. [11, 20]). In each tick (representing a month), a random agent interacts with one of the other agents in its social network and is influenced by it. If the agent’s motivations (i.e. energy independence, trust, environmental concern and economic benefits) are between 2 and 8 (i.e., the values are not extreme and therefore not hard to change [11, 21]), they are updated by moving one unit towards the interacting neighbour’s value, for better or for worse. This form of social interaction is applied at the beginning of each simulation step to update the motivations of each agent. These connections eventually lead to the whole community making a decision about their CES.
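A minimal Python sketch of one possible reading of this interaction rule is given below. The Household class, the choice of a random neighbour and the one-unit step are illustrative assumptions introduced here for clarity; they are not taken from the published model code.

import random

MOTIVATIONS = ["independence", "trust", "environment", "economic"]

class Household:
    def __init__(self):
        # motivation values between 0 (weakest) and 10 (strongest)
        self.motivations = {m: random.uniform(0, 10) for m in MOTIVATIONS}

def interact(agent, neighbour):
    """One monthly tick: shift each non-extreme motivation one unit
    towards the neighbour's value, for better or for worse."""
    for m in MOTIVATIONS:
        own, other = agent.motivations[m], neighbour.motivations[m]
        if 2 <= own <= 8 and own != other:   # extreme values are assumed hard to change
            step = 1 if other > own else -1
            agent.motivations[m] = own + step

# usage: a randomly chosen agent interacts with one other agent
agents = [Household() for _ in range(500)]
focal = random.choice(agents)
neighbour = random.choice([a for a in agents if a is not focal])  # stand-in for a small-world neighbour
interact(focal, neighbour)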
4.4 Model Initialization and Narrative

Before the initiation of the CES, the household agents used natural gas and the national electricity grid to cover their demand. In order to make the decision on the different sources of energy (i.e. collective RETs, individual RETs, national grid) for the CES, the households first go through a period of opinion exchange, which means connected individual households learn more about their neighbours’ motivations and possibly grow closer to each other. This is based on the social interactions presented in Sect. 4.3. After the period of opinion exchange, agents have three decisions to make, namely: (i) choosing the percentage of RE that they want to generate collectively, (ii) choosing an additional individual option, in case the collective RE generation does not fully cover the demand, and (iii) after the technology reaches its lifetime, involving new participants and deciding on continuing participation and a new CES.

First, the households decide how much collective RE they want to generate, which may not always cover all the demand collectively; i.e., the households choose an amount of RE (for this study solar photovoltaic (PV) and ground-source heat pumps, see Sect. 4.5) that covers a fraction between 10% and 100% of community demand. More environmentally friendly households choose higher collective RE generation. The constraint, however, is the initial investment, as higher collective RE generation requires a higher investment. Each agent makes a decision about its preferred collective RE, and the amount chosen most often among the agents is the one adopted by the whole community.

When the chosen collective energy generation does not fully cover the community demand, the households, depending on their individual motivations, have three options: (i) choosing individual RETs, (ii) compensating their energy demand (i.e. lowering the demand and facing discomfort), or (iii) importing energy from the grid (i.e. continuing to consume natural gas and national grid electricity). The money saved by lowering the demand (as households consume less, they pay less) is accumulated over time and invested in individual RETs. This is the option chosen by the most environmentally friendly agents who do not yet have the required budget.

Every year (12 ticks in the simulation), the community checks (i) whether it has reached the end of its project time-horizon (i.e. 55 years), and (ii) whether the technologies in place have reached their lifetime. If the technologies have reached their lifetime, the community starts another information exchange period, including new members (i.e. new households who moved to the neighbourhood), and makes a decision on a new configuration (i.e. 10%–100% collective energy). As the new households have their own motivations, energy demand and investment, the new collective energy generation might be different. When the community chooses the new amount, the households who have a different preference over the new amount leave the community system, which means they are disconnected from the CES (i.e. they connect fully to the national grid or get their energy demand covered elsewhere). Figure 1 presents the model conceptual flowchart.
Fig. 1 Model conceptual flowchart
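As a rough illustration of the collective decision step described in Sect. 4.4, the sketch below selects the community generation fraction as the most frequently preferred option among households. The preference rule (scaling with environmental concern, capped by budget) and the cost figure are simplifying assumptions for illustration only, not taken from the published model.

from collections import Counter

FRACTIONS = [round(0.1 * i, 1) for i in range(1, 11)]   # 10% .. 100% collective RE

def preferred_fraction(household, cost_per_fraction):
    """Assumed rule: prefer the highest fraction the budget allows,
    scaled down for households with low environmental concern."""
    affordable = [f for f in FRACTIONS if cost_per_fraction * f <= household["budget"]]
    if not affordable:
        return FRACTIONS[0]                              # the 10% baseline is always available
    scale = household["environment"] / 10.0              # motivation value between 0 and 10
    index = round(scale * (len(affordable) - 1))
    return affordable[index]

def community_choice(households, cost_per_fraction=25000):   # hypothetical cost per full fraction
    votes = [preferred_fraction(h, cost_per_fraction) for h in households]
    return Counter(votes).most_common(1)[0][0]            # modal preference becomes the community choice

# usage with two hypothetical households
community = [{"budget": 7500, "environment": 9}, {"budget": 2500, "environment": 3}]
print(community_choice(community))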
4.5 Technical Assumptions and Model Inputs

The technological options that households can choose, both for the collective and for the individual energy system, are solar photovoltaic (PV) and ground-source heat pumps. Two reasons account for this selection: solar PV is a mature technology and it is the main technology used by the majority of current CES [22]; heat pumps are used because they offer a good combination with solar PV, preparing for the transition towards electricity-based heating systems [23]. Households have three available energy options: (i) national grid, (ii) collective solar PV and heat pump, (iii) individual solar PV and heat pump. Table 1 presents the parameters related to these technologies. Overall, the efficiency of the system in this study is assumed to be 0.85 [24], carbon emissions are assumed to be 0.46 kg/kWh [25], technologies’ costs are based on [26, 27], and the average available sun radiation for the Netherlands is 4.38 hours/day.
Table 1 Model inputs

Input | Value (unit) | References
Number of households in a neighbourhood | 500 (n) | [28]
Interacting connections per household | 13 (n) | [13]
Electricity price | 0.20 (€/kWh) | [29]
Duration of information exchange period (a) | 7 (months) |
Project time-horizon (a) | 55 (years) | [30]
Minimum investment size (a) | 1 (kW) | [30]
Baseline energy (always to be covered) (a, b) | 10 (%) |
% new households (a) | 20 (%) |

(a) For the value of this assumption, an OFAT sensitivity analysis is performed, as it is not the focus of the modelling exercise
(b) The energy demand which is crucial to always be provided and will never be compensated

4.6 4As as Key Performance Indicators (KPIs)

Availability: Average voluntary discomfort percentage: To assess availability, a measure is used that indicates to what extent the energy is available to meet the demand of each agent [31], which for our modelling exercise is translated as Eq. 1:

Availability = 100% − average voluntary discomfort percentage   (1)

To calculate the average voluntary discomfort (shortage) percentage, given the current demand, the percentage of collective and individual RE generation in the CES (i.e. total RE), the baseline energy, and the average willingness to compensate (i.e. the average percentage of demand that agents are willing to forgo in order to avoid using the national grid, see Sect. 4.7) are subtracted (Eq. 2):

Average voluntary shortage percentage (%) = 100% − total RE (%) − baseline energy (%) − average willingness to compensate (%)   (2)
Affordability: Average costs: To assess affordability, a measure is used that calculates the total system costs per agent [31], which is implemented as Eq. 3:

Average costs (€) = (Investment costs of the scenario (€) + Costs of energy import (€) + Investment of new community members (€)) / Participating households   (3)
Accessibility: Diversity index: Based on the Shannon index [31], diversification is used to measure the accessibility of a CES, as presented in Eq. 4:

Diversity index = −(chosen collective RE × ln(chosen collective RE) + chosen individual RE × ln(chosen individual RE) + chosen national grid × ln(chosen national grid))   (4)
Acceptability: CO2 reduction per household: As acceptability is linked to environmental issues and to reducing the CO2 emissions of the energy sector [9], acceptability is assessed by measuring the CO2 reduction in the model, as presented in Eq. 5:

Carbon reduction (kg CO2) = (Carbon emission of the traditional energy system (kg CO2) − Carbon emission of the community energy system (kg CO2)) / Participating households   (5)
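The four KPIs of Eqs. 1–5 can be computed directly from a run's aggregate outputs. The sketch below is a minimal Python rendering of these formulas; the variable names and the example numbers are illustrative assumptions, not values produced by the model.

import math

def availability(total_re, baseline, willingness):
    """Eqs. 1-2: availability as 100% minus the average voluntary shortage."""
    shortage = max(0.0, 100.0 - total_re - baseline - willingness)
    return 100.0 - shortage

def average_costs(investment, import_costs, new_member_investment, n_households):
    """Eq. 3: total system costs per participating household."""
    return (investment + import_costs + new_member_investment) / n_households

def diversity_index(shares):
    """Eq. 4: Shannon index over the shares of collective RE, individual RE and national grid."""
    return -sum(s * math.log(s) for s in shares if s > 0)

def carbon_reduction(traditional_emissions, community_emissions, n_households):
    """Eq. 5: average CO2 reduction per participating household."""
    return (traditional_emissions - community_emissions) / n_households

# illustrative values
print(availability(total_re=70, baseline=10, willingness=20))   # 100.0 (no voluntary discomfort)
print(diversity_index([0.6, 0.2, 0.2]))                          # ~0.95: all three energy sources in use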
4.7 Model Parameters and Experimental Setup

To explore the energy security of CES, four parameters are selected from the literature that are potentially influential for energy security:

• Demand of the households: Since one of the primary motivations of CES is to generate energy to meet the local demand [7], energy demand is important for a CES. Following [16, 32], we hypothesize that lowering the energy demand helps to enhance energy availability and therefore energy security.
• Budget of households: Investment size plays a large role in CES [5]. At the same time, higher investments can play a major role in increasing availability and affordability and therefore the security of an energy system [9].
• Energy prices: Rising energy prices are argued in the literature to be an effective strategy to lower energy consumption and an opportunity for the deployment of CES [22]. In the energy security literature, it is argued that higher energy costs result in lower affordability and therefore lower energy security [16, 32].
• Willingness to compensate over use of the energy grid: According to participatory value evaluation theory, people are willing to accept changes in the provision of public goods [33]. Willingness to compensate has also been explored in the energy security literature as important for the 4As’ dimensions [33].

We use these four parameters as input to our modelling exercise. Using data from PBL, the average household demand and natural gas price were extracted. The experimentation includes a total of 108 different combinations of settings for the four parameters (4 × 3 × 3 × 3 = 108), as shown in Table 2. Each combination was repeated 100 times; hence, the experimentation resulted in a total of 10,800 runs.
Table 2 Experimental settings

Model parameter | Value
Each household demand (kWh/year) | 8185, 15,161, 22,622, 30,084
Natural gas price (€/kWh) | 0.09, 0.12, 0.15
Willingness to compensate (%) | 10, 20, 30
Budget/investment size (€) | 2500, 5000, 7500
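A minimal sketch of how such a full-factorial design (4 × 3 × 3 × 3 settings, 100 repetitions each) can be generated is shown below; run_model is a hypothetical stand-in for a single simulation run, not the actual model interface.

from itertools import product

demands = [8185, 15161, 22622, 30084]   # kWh/year
gas_prices = [0.09, 0.12, 0.15]          # €/kWh
willingness = [10, 20, 30]               # %
budgets = [2500, 5000, 7500]             # €
REPETITIONS = 100

def run_model(demand, price, wtc, budget, seed):
    """Hypothetical placeholder for one simulation run returning the four KPIs."""
    return {"availability": None, "costs": None, "diversity": None, "co2_reduction": None}

results = []
for demand, price, wtc, budget in product(demands, gas_prices, willingness, budgets):
    for rep in range(REPETITIONS):
        results.append(run_model(demand, price, wtc, budget, seed=rep))

print(len(results))   # 4 * 3 * 3 * 3 * 100 = 10,800 runs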
5 Results

5.1 Overview of Each KPI Individually

In this stage, results for the final end-state of each run (i.e. after 55 years) are presented separately for each KPI. In Fig. 2 the results are categorized into three categories:

• Best results: the best 10% of runs for each specific KPI (green colour);
• Worst results: the worst 10% of all runs for each specific KPI (red colour);
• Others: the remaining 80% of the runs (grey colour).

KPI 1: Average voluntary discomfort percentage: The simulation results for the average voluntary discomfort (shortage) percentage are always less than 20%. Only 10% of the runs have a discomfort percentage higher than 9%. These runs include communities with the most environmentally friendly behavior that are nevertheless not financially strong enough to have a 100% CES. Therefore, for the demand that they do not meet with
Fig. 2 Overview of KPIs
the collective energy, they voluntarily choose discomfort instead of the national grid. There is a large peak at 0% discomfort, which mostly corresponds to runs that chose a 100% CES. These communities are also the most environmentally friendly communities, but with strong financial resources. The majority of the simulation runs, however, are in the middle range of the discomfort percentage, between 4 and 9%. Lower demand, higher energy generation and higher energy import lead to the best performance for this KPI. While a higher budget showed a positive influence, the natural gas price and the willingness to compensate were not impactful.

KPI 2: Average costs: Average costs are calculated for each household, based on the cost of the community over its lifetime (i.e. 55 years) divided by the number of households. As Fig. 2 illustrates, the majority of runs have low costs. Considering the assumptions related to current and future energy prices, 75% of all runs have a better economic performance than using only the grid. This means that individual households who participate in a CES spend less money over 55 years on their energy bills. All the communities with the lowest costs are communities with the lowest demand. However, this does not necessarily mean that they have a higher investment, as they have various investment sizes. Higher import dependence (higher energy import from outside the system boundaries) is usually more likely to lead to lower costs. The natural gas price, willingness to compensate and energy generation did not show a meaningful influence on KPI 2. Also, environmentally friendly agents are distributed over all of the communities; however, they are more concentrated within communities with, on average, lower costs.

KPI 3: Diversity index: This is an indicator measuring the diversity of energy sources in a CES. There is a peak at 0, which shows the dominance of a specific energy source, e.g. collective solar PV. These are communities which choose 100% collective energy and also have a low energy demand. The majority of the runs, however, have a diversity index between 0.6 and 0.9, which means they have both RE (with different generation capacities, 10–100%) and natural gas as their energy sources. The runs with a diversity index higher than 0.9 have various parameter settings (see Sect. 4.5), but the willingness to compensate is high among them.

KPI 4: Carbon reduction index: The carbon reduction index measures the average CO2 reduction of each CES participant over its lifetime (i.e. 55 years). As the communities have to choose at least 10% RE generation, the carbon reduction is always more than 0, see Fig. 2. The best performance for this indicator is for communities with a CO2 reduction higher than 130,000 kg, which mostly have high budgets and environmentally friendly motivations. However, they have various demands, different natural gas prices and different “willingness to compensate” values. The communities with the lowest CO2 reduction have the lowest budget.
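The best/worst categorization used in Fig. 2 can be reproduced by simple percentile cuts per KPI. A minimal sketch, assuming the KPI values of all runs are available as a list:

import numpy as np

def categorize(values, higher_is_better=True):
    """Label each run as 'best' (top 10%), 'worst' (bottom 10%) or 'other' for one KPI."""
    values = np.asarray(values, dtype=float)
    lo, hi = np.percentile(values, [10, 90])
    labels = np.full(values.shape, "other", dtype=object)
    if higher_is_better:
        labels[values >= hi] = "best"
        labels[values <= lo] = "worst"
    else:
        labels[values <= lo] = "best"
        labels[values >= hi] = "worst"
    return labels

# usage: e.g. discomfort percentages, where lower is better
print(categorize([0, 2, 4, 5, 6, 7, 8, 9, 12, 18], higher_is_better=False))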
Fig. 3 Parameters for most and least successful energy security performances
5.2 Most and Least Successful Energy Security Performances Based on All 4 KPIs

In this part, the communities with the best and worst overall performances are analyzed. The procedure to define these energy security performances is as follows:

• Most successful performances: From the 10,800 model runs, for each KPI the 50% best performances are extracted separately. This gives us, for each KPI, 5,400 runs that performed best. Within these four sets of 5,400 runs, the overlapping runs are selected, which are only 197 model runs in total. These 197 runs are the runs that have the best performances for all the KPIs.
• Unsuccessful performances: Through the same process, the 50% worst performances are selected separately for each KPI and then the overlaps are extracted, leading to 458 runs.

Consequently, the values of the four parameters for the most successful and least successful runs were studied more closely. For the budget and willingness to compensate, a clear division was identified between the successful and unsuccessful communities. The 197 successful runs are dominated by the highest budget and an average willingness to compensate. On the other hand, unsuccessful performances have the lowest budgets and the lowest willingness to compensate. The natural gas price varies for both successful and unsuccessful energy security performances. However, successful performances do not have the highest natural gas price. Figure 3 illustrates these findings.
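A minimal sketch of this intersection procedure, assuming the 10,800 runs are collected in a pandas DataFrame with one column per KPI (the column names are illustrative):

import pandas as pd

def best_half(df, kpi, higher_is_better=True):
    """Indices of the 50% best runs for one KPI."""
    ranked = df[kpi].rank(pct=True)
    return set(df.index[ranked >= 0.5]) if higher_is_better else set(df.index[ranked <= 0.5])

def overall_successful(df):
    """Runs that are in the best half for all four KPIs simultaneously."""
    sets = [
        best_half(df, "availability"),
        best_half(df, "costs", higher_is_better=False),   # lower costs are better
        best_half(df, "diversity"),
        best_half(df, "co2_reduction"),
    ]
    return set.intersection(*sets)

# usage: runs = pd.DataFrame(results); print(len(overall_successful(runs)))

The same function with the comparison reversed yields the set of unsuccessful runs.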
6 Discussion

We used the 4A’s energy security concept [14] to conceptualize energy security of community energy systems (CES). Considering one KPI at a time, CES are able
to perform well for each one. Specifically, 10% of CES had 0% discomfort (as an indicator for availability), and on average all CES reduced their CO2 emissions by 35% (as an indicator for acceptability). For the average cost of households (as an indicator for affordability), CES also performed considerably well. On average, the costs per household are around €45,000 over 55 years, which is less in comparison with current energy prices. Considering the initial investment size (see Table 2), this shows overall that CES are economically feasible under the parameter settings suggested in this research. There are still communities with an average cost of €70,000 per household, which highlights the economic challenges that studies such as [34] have also mentioned. Diversity (as an indicator for accessibility) showed various values between 0 and 1. The runs with a diversity value of 0 are the communities with 100% collective RE generation (and not 100% individual or 100% national grid). The runs that used all three possible energy resources (i.e. collective RE, individual RE and the national grid) are the ones with a relatively high performance on the diversity index, which shows that these communities have agents with different motivations (see Sects. 4.2 and 4.3).

However, energy security is a multi-dimensional concept, which means that all the dimensions should be considered and analyzed simultaneously. In order to draw the whole picture and provide an analysis of the four dimensions together, we analyzed the communities with successful and unsuccessful energy security performances. Our analysis showed that there are always trade-offs between the four dimensions, as among the 10,800 runs only 197 (less than 2% of all runs) have a performance that is considered successful in all four KPIs (i.e., > 50%). On the other hand, the portion of unsuccessful performances (i.e. < 50% in all four KPIs) is more than two times higher (458 runs out of 10,800, 4.2% of all runs). Although it is rare to have a high performance for all four dimensions at the same time, these successful performances showed that it is feasible to reduce CO2 emissions while not facing any discomfort or financial consequences.

In order to analyze the four input parameters (i.e. demand, investment size, willingness to compensate and prices), a comparison between successful and unsuccessful energy security performances (i.e. on the four KPIs together) was performed. This comparison indicates which parameters lead to a better performance. The only parameter which explicitly indicated an impact on successful vs. unsuccessful performances is the budget. The successful performances have the highest budget (€7500) and the unsuccessful ones are dominated by the lowest budget (€2500). Willingness to compensate and demand, however, do not indicate a strong impact on successful performance. For instance, the lowest demand (i.e. 15,161 kWh) is the dominating demand parameter value among the unsuccessful performances. This is in contrast to the current body of literature, which argues that less demand leads to a better performance in energy security [16, 32]. Lastly, natural gas prices did not show a considerable influence on the energy security of CES, as the unsuccessful performances cover the full range of natural gas prices and the successful ones have 0.09 and 0.12 €/kWh.
7 Conclusion and Further Work

As key elements of the energy transition at the local level, CES have a rapidly growing body of literature. Yet, little attention has been given to the energy security of CES, and the need to understand what energy security actually implies for them is becoming more evident. Therefore, this research aimed to study the energy security of CES through an agent-based modelling approach, using the 4A’s energy security concept [14]. The results showed that each KPI individually can reach a high performance in a CES, specifically the shortage percentage and the CO2 emissions reduction. However, it is hard to reach a community state in which all four dimensions are satisfied. This highlights the difficulty, but also the feasibility, of achieving a high energy security performance for CES. Among the four parameters, only the budget seemed influential. Willingness to compensate, demand ranges and energy prices did not show a considerable influence. These findings can be translated into policy recommendations for Dutch policy-makers as follows:

• The budget is the most important consideration for establishing secure CES, as it can be a constraint for environmentally friendly households and a concern for economically driven households. Therefore, providing more support (e.g. subsidies and loans) is effective and essential.
• Energy demand is not the most influential consideration for the energy security of collective energy systems. Therefore, other policies and strategies, such as RE subsidies, could potentially have more impact on collective energy security than energy demand reduction policies. Nevertheless, households with a relatively high energy demand need to reduce their demand in order to contribute to long-term security and environmental targets [35].
• The current PBL energy price scenario (0.12 €/kWh) is a successful scenario, as higher energy prices do not lead to successful performances and no significant influence of energy prices was identified.

Although the current study sheds light on the energy security of CES, it is still more of a conceptual model. There are certain limitations, such as the chosen energy security concept, indicators and technologies within the model. Also, using theories such as social value orientation theory and the theory of planned behavior could have led to insights regarding the influence of households’ motivations on energy security.

Acknowledgments The authors would like to thank the Netherlands Organization for Scientific Research for their financial support [NWO Responsible Innovation grant – 313-99-324]. In addition, the support of Paulien Herder and Niek Mouter for this study was highly appreciated.
References

1. Masson-Delmotte, V., Pörtner, H.O., Roberts, D.: IPCC Global Warming of 1.5 °C, no. 9 (2018). https://www.ipcc.ch/site/assets/uploads/sites/2/2019/06/SR15_Full_Report_Low_Res.pdf
2. Kaundinya, D.P., Balachandra, P., Ravindranath, N.H.: Grid-connected versus stand-alone energy systems for decentralized power-A review of literature. Renew. Sustain. Energy Rev. 13(8), 2041–2050 (2009). https://doi.org/10.1016/j.rser.2009.02.002
3. Van Der Schoor, T., Scholtens, B.: Power to the people: local community initiatives and the transition to sustainable energy. Renew. Sustain. Energy Rev. (2015). https://doi.org/10.1016/j.rser.2014.10.089
4. Fouladvand, J., Mouter, N., Ghorbani, A., Herder, P.: Formation and continuation of thermal energy community systems: an explorative agent-based model for the Netherlands. https://doi.org/10.3390/en13112829
5. Dóci, G., Vasileiadou, E.: ‘Let’s do it ourselves’ Individual motivations for investing in renewables at community level. Renew. Sustain. Energy Rev. 49, 41–50 (2015). https://doi.org/10.1016/j.rser.2015.04.051
6. Walker, G., Devine-Wright, P.: Community renewable energy: what should it mean? Energy Policy 36(2), 497–500 (2008). https://doi.org/10.1016/j.enpol.2007.10.019
7. Dóci, G., Vasileiadou, E., Petersen, A.C.: Exploring the transition potential of renewable energy communities. Futures 66, 85–95 (2015). https://doi.org/10.1016/j.futures.2015.01.002
8. Fulhu, M., Mohamed, M., Krumdieck, S.: Voluntary demand participation (VDP) for security of essential energy activities in remote communities with case study in Maldives. Energy Sustain. Dev. 49, 27–38 (2019). https://doi.org/10.1016/j.esd.2019.01.002
9. Kruyt, B., van Vuuren, D.P., de Vries, H.J.M., Groenenberg, H.: Indicators for energy security. Energy Policy (2009). https://doi.org/10.1016/j.enpol.2009.02.006
10. Sovacool, B.K.: Introduction: defining, measuring, and exploring energy security. In: The Routledge Handbook of Energy Security, pp. 19–60. Routledge (2010)
11. Ghorbani, A., Nascimento, L., Filatova, T.: Growing community energy initiatives from the bottom up: simulating the role of behavioural attitudes and leadership in the Netherlands. Energy Res. Soc. Sci. 70, 101782 (2020). https://doi.org/10.1016/j.erss.2020.101782
12. Busch, J., Roelich, K., Bale, C.S.E., Knoeri, C.: Scaling up local energy infrastructure; an agent-based model of the emergence of district heating networks. Energy Policy 100, 170–180 (2017). https://doi.org/10.1016/j.enpol.2016.10.011
13. Mittal, A., Krejci, C.C., Dorneich, M.C., Fickes, D.: An agent-based approach to modeling zero energy communities. Sol. Energy 191, 193–204 (2019). https://doi.org/10.1016/j.solener.2019.08.040
14. Tongsopit, S., Kittner, N., Chang, Y., Aksornkij, A., Wangjiraniran, W.: Energy security in ASEAN: a quantitative approach for sustainable energy policy. Energy Policy (2016). https://doi.org/10.1016/j.enpol.2015.11.019
15. Reichl, J., Schmidthaler, M., Schneider, F.: The value of supply security: the costs of power outages to Austrian households, firms and the public sector. Energy Econ. 36, 256–261 (2013). https://doi.org/10.1016/j.eneco.2012.08.044
16. Ang, B.W., Choong, W.L., Ng, T.S.: Energy security: definitions, dimensions and indexes. Renew. Sustain. Energy Rev. (2015). https://doi.org/10.1016/j.rser.2014.10.064
17. Wang, Z., Perera, A.T.D.: Robust optimization of power grid with distributed generation and improved reliability. Energy Procedia 159, 400–405 (2019). https://doi.org/10.1016/j.egypro.2018.12.069
18. Grimm, V., et al.: The ODD protocol for describing agent-based and other simulation models: a second update to improve clarity, replication, and structural realism. JASSS 23(2) (2020). https://doi.org/10.18564/jasss.4259
19. Koirala, B.P., Araghi, Y., Kroesen, M., Ghorbani, A., Hakvoort, R.A., Herder, P.M.: Trust, awareness, and independence: insights from a socio-psychological factor analysis of citizen knowledge and participation in community energy systems. Energy Res. Soc. Sci. 38, 33–40 (2018). https://doi.org/10.1016/j.erss.2018.01.009
20. Jung, M., Hwang, J.: Structural dynamics of innovation networks funded by the European Union in the context of systemic innovation of the renewable energy sector. Energy Policy 96, 471–490 (2016). https://doi.org/10.1016/j.enpol.2016.06.017
21. Fouladvand, J., Rojas, M.A., Hoppe, T., Ghorbani, A.: Simulating thermal energy community formation: institutional enablers outplaying technological choice. Appl. Energy 306, 117897 (2022). https://doi.org/10.1016/j.apenergy.2021.117897
22. Seyfang, G., Jin, J., Smith, A.: A thousand flowers blooming? An examination of community energy in the UK. Energy Policy 61, 977–989 (2013). https://doi.org/10.1016/j.enpol.2013.06.030
23. Staffell, I., Brett, D., Brandon, N., Hawkes, A.: A review of domestic heat pumps. Energy Environ. Sci. 5(11), 9291–9306 (2012). https://doi.org/10.1039/c2ee22653g
24. Vattenfall N.V. Annual Report: Fossil-free within one generation (2019). https://group.vattenfall.com/nl/siteassets/vattenfall-nl-site-assets/wie-we-zijn/corp-governance/annual-reports/vattenfall-nvannual-report-2019.pdf
25. Gerdes, J., Segers, R.: Fossiel energiegebruik en het rendement van elektriciteit in Nederland, September (2012). https://www.rvo.nl/sites/default/files/Notitie%20Energie-CO2%20effecten%20elektriciteit%20Sept%202012.pdf
26. Department of Energy & Climate Change: Solar PV cost update, May (2012). https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/43083/5381-solar-pv-cost-update.pdf
27. Heat Pump Implementation Scenarios until 2030: an analysis of the technology’s potential in the building sector. https://www.ehpa.org/fileadmin/red/03._Media/03.02_Studies_and_reports/Heat_Pump_Implementation_Scenarios.pdf
28. Sleutjes, B., De Valk, H.A.G., Ooijevaar, J.: The measurement of ethnic segregation in the Netherlands: differences between administrative and individualized neighbourhoods. Eur. J. Popul. 34(2), 195–224 (2018). https://doi.org/10.1007/s10680-018-9479-z
29. CBS: Average energy rates for consumers, dataset 84672NED (2021). https://opendata.cbs.nl/statline/?dl=3350E%20#/CBS/nl/dataset/84672NED/table
30. Sandvall, A.F., Ahlgren, E.O., Ekvall, T.: Cost-efficiency of urban heating strategies: modelling scale effects of low-energy building heat supply. Energy Strateg. Rev. 18, 212–223 (2017). https://doi.org/10.1016/j.esr.2017.10.003
31. Ranjan, A., Hughes, L.: Energy security and the diversity of energy flows in an energy system. Energy 73, 137–144 (2014). https://doi.org/10.1016/j.energy.2014.05.108
32. Ang, B.W., Choong, W.L., Ng, T.S.: A framework for evaluating Singapore’s energy security. Appl. Energy 148, 314–325 (2015). https://doi.org/10.1016/j.apenergy.2015.03.088
33. Radovanović, M., Filipović, S., Pavlović, D.: Energy security measurement – a sustainable approach. Renew. Sustain. Energy Rev. 68, 1020–1032 (2017). https://doi.org/10.1016/j.rser.2016.02.010
34. Londo, M., Matton, R., Usmani, O., Van Klaveren, M., Tigchelaar, C., Brunsting, S.: Alternatives for current net metering policy for solar PV in the Netherlands: a comparison of impacts on business case and purchasing behaviour of private homeowners, and on governmental costs. Renew. Energy 147, 903–915 (2020). https://doi.org/10.1016/j.renene.2019.09.062
35. Olonscheck, M., Walther, C., Lüdeke, M., Kropp, J.P.: Feasibility of energy reduction targets under climate change: the case of the residential heating energy sector of the Netherlands. Energy (2015). https://doi.org/10.1016/j.energy.2015.07.080
Towards Efficient Context-Sensitive Deliberation

Maarten Jensen, Harko Verhagen, Loïs Vanhée, and Frank Dignum
Abstract We propose a context-sensitive deliberation framework in which the decision context does not deliver an action straight away, but rather the decision context and agent characteristics influence the type of deliberation and the type of information evaluated, which in turn affect the final decision. The framework is based on the Contextual Action Framework for Computational Agents (CAFCA). Our framework also tailors the deliberation type used to the decision context the agent finds itself in, starting from the least cognitively taxing deliberation types unless the context requires more complex ones. As a proof-of-concept the paper shows how context and information relevance can be used to conceptually expand the deliberation system of an agent.

Keywords Context · CAFCA · Deliberation method selection · Dynamic deliberation
M. Jensen (B) · L. Vanhée · F. Dignum
Department of Computing Science, Umeå University, 901 87 Umeå, Sweden
e-mail: [email protected]
L. Vanhée e-mail: [email protected]
F. Dignum e-mail: [email protected]
H. Verhagen
Department of Computer and Systems Sciences, Stockholm University, PO Box 7003, 16407 Kista, Sweden
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
M. Czupryna and B. Kamiński (eds.), Advances in Social Simulation, Springer Proceedings in Complexity, https://doi.org/10.1007/978-3-030-92843-8_31

1 Introduction

In social simulation we have always striven for more realistic social agents. This is difficult as human behavior is complex and requires elaborate systems that can be inefficient in terms of performance. More realistic agents would allow for more accurate models of human behavior, however it remains a challenge to attain more
realism. As sociological and psychological theories of human behavior keep evolving, social simulation models should adapt as well. In the past decades, more models have conceptualized human deliberation not as a purely rational or irrational system, but rather as a more dynamic system with multiple ways of thinking. Kahneman, for example, proposes a model with two modes of thinking: either simple and quick thinking (system 1), or elaborate and slow thinking (system 2) [15]. Minsky takes this a step further and describes six levels of deliberation that work together to form a solution [16].

Another challenge when modeling is related to efficiency. The trade-off between many simple agents, where patterns emerge from many interactions, and fewer complex agents, where patterns emerge due to both internal deliberation as well as interactions, is of high importance in social simulations (see [6] for a good overview of this KIDS vs. KISS debate). As we have shown in [10], there is no winner for all cases. Going for many simple agents often makes it easier to replicate statistical findings; however, with more complex agents we can often have a more in-depth analysis of what causes changes in behaviour and incorporate critical aspects that require a certain complexity. For example, the ASSOCC model [10] contains complex need-based agents with a rich social life. This allowed for detailed analysis of the effectiveness of new restrictions. However, this complexity came at the cost of a limit of about 2000 agents, since exceeding this number slowed down the simulation significantly. Contrast this with a mathematical model such as the Oxford model [12], which runs one million agents, where on the other hand the agents only have a simple deliberation system and the effects of changes in the situation are not easily explained. A solution to this problem could be a dynamic deliberation system that uses efficient quick deliberation most of the time and more complex slow deliberation sometimes.

The Consumat framework [13] is interesting as it tried to offer a solution for adapting the deliberation to the situation. This could also provide a basis for dynamically compromising between the quality and the computational costs of decisions, using a utility-driven form of metacognition [14]. It is one of the few frameworks that uses some sort of dynamic deliberation. The Consumat framework unfortunately lacks social concepts such as normative behavior (e.g., wanting to be part of a group) or theory of mind (i.e. thinking about intentions, needs, goals, etc. of other agents). Moreover, its deliberation selection mechanism is rational, i.e. utility maximizing, rather than context dependent.

Since dynamic deliberation can help attain both an increase in realism and large-scale simulations with complex agents, we propose a context-sensitive deliberation framework spanning multiple types of deliberation for agents. We started from the architecture proposed by Kahneman [15] but extended this with the Contextual Action Framework for Computational Agents (CAFCA) [7], which has clearer analytical concepts than the Consumat [13] and a richer representation of social concepts. The context-sensitive deliberation framework does not directly deliver an action based on the decision context; rather, the decision context and agent characteristics influence the type of deliberation and the type of information evaluated, which in turn influence the final decision. The aim of this framework is not to make a
model of human cognition, but rather to make a context-dependent social agent inspired by work such as that of Kahneman [15] and Minsky [16]. In the following section we describe the definition of context from the literature and explain the CAFCA framework. The third section presents our model in detail, starting with the general model mechanism, followed by an explanation of the relevant information from the context per deliberation type and an indication of how and when a switch is made to a different deliberation type. In the fourth section an example is shown that serves as a proof-of-concept: it uses an existing smoking ban simulation [3] and applies our contextual framework to expand the agent deliberation. This section is followed by a discussion of our framework and its applications, and a conclusion.
2 Related Work

2.1 Context

The fact that context is important for deliberation is not new. The relation between time scales and deliberation was posed by Newell and Card in [17]. Although they show there is some connection between types of deliberation and context, they do not show how an agent can dynamically connect context and deliberation. Similarly, there is quite some work on defining aspects of contexts in Human-Computer Interaction. Notably, [21] gives a very usable definition of contexts (see Fig. 1) when one considers the use of a software system in a context.

Fig. 1 Categories of context according to Zimmermann [21]: the context of an entity is described along five dimensions, namely individuality, time, location, activity and relations

The five context categorizations are not formalized, as they depend on the domain, and ‘context’ can thus be formalized very differently. To give a better understanding of the categories we give a couple of examples, which is by far not the full definition of the categorizations. Time can be a specific point in time, but also a period; it can relate to, for example, seconds, minutes, days (even working days or weekdays), years, centuries, etc. The location can be a physical place, with variety in size, for example a larger geographical area, building, complex, town, region or country. The activity indicates what is done in the context, alone or together: grocery shopping, playing football, having dinner, in a formal meeting or non-formal. The relations include the aspects of the context related to other people, groups or institutes. It also includes theory of mind, which can for example relate to goals, intentions, social norms and values of other entities. The individuality contains the characteristics of an entity’s current interests and goals, value priorities, experience (is the situation known, is it clear which variables should be salient, …), and needs/motives. These are interesting definitions of context, however they do not aid in selecting a type of deliberation.
2.2 Contextual Action Framework for Computational Agents

The Contextual Action Framework for Computational Agents (CAFCA) [7] was developed to categorize and incorporate different kinds of deliberation for different situations. It was developed specifically for social simulation purposes, based on contextual human action and agent action. Figure 2 shows the 3 × 3 deliberation matrix. This matrix gives a broad categorization of agent deliberation methods. It consists of a social axis (horizontal), with increasingly social deliberation when moving from left to right, and a reasoning dimension (vertical). In the first column (Individual) the deliberation methods only consider the physical properties of other agents; they are just seen as obstacles or objects rather than social beings with their own behavior and goals. Moving to the second column (Social), theory of mind becomes important; this column is about working together or outsmarting other individuals, e.g. game theory. Finally, the third column (Collective) focuses on being part of a group and includes all of the aspects mentioned before in addition to group affiliation aspects. The reasoning dimension starts with the habitual layer, which is the least cognitively taxing as it is about following habits or imitating behavior, without plan
Reasoning dimension \ Sociality dimension | Individual | Social | Collective
Habitual | Repetition | Imitation | Joining-in
Strategic | Rational choice | Game theory | Team reasoning
Normative | (Institutional) rules | (Social) norms | (Moral) values

Fig. 2 Adopted from [7], it shows the categorization of deliberation methods. In the original version of the matrix in [7] Habitual is named Automatic; the new label is introduced in [8]
making or deeper deliberation. The strategic layer uses deliberation to form plans or theories to choose the best course of action, based on utility. The strategic layer will, however, see rules and norms as strict and not open to violation. E.g., an agent that wants to steal something will not deliberate about that in the strategic layer, but will have to move to the normative layer to deliberate about how bad it is to break the norm (to steal). It is at the normative layer that it is decided whether, and how important it is, to follow the rules and norms. While this contextual framework is interesting, it does not provide the meta deliberation for selecting a deliberation type based on context. The closest formal model of context recognition influencing deliberation is discussed in [5], but it directly chooses a deliberation type based on context. The main fallacy of this approach is the assumption that we first determine the context we are in and subsequently determine the best fitting deliberation method. The problem is that there is no loop in which the deliberation method influences which context should be considered.
3 Modelling

3.1 Context-Dependent Deliberation Cycle

The context-sensitive framework dynamically explores the decision context as follows, starting at cognitively less taxing deliberation types and context exploration, and adding more complexity until the decision problem is solved. The context exploration should happen dynamically, based on the goal of the agent, the information from the context and the deliberation type used, while allowing for adaptation of each of these elements during deliberation. Our solution to achieve this is the following conceptual model (see Fig. 3).

Deliberation for an agent usually (that is, unless there is an important event interrupting) starts with a minimal context (external environment) and a goal or current interest (internal state). The next step is deliberation in the CAFCA matrix using repetition (1.1). When this fails, another deliberation type can be selected based on the reason why it failed. Did it fail because there was not enough information? Or did it fail because there was no pre-existing plan? Does the agent need help from others? With a different deliberation type selected, this brings us back to the external and internal elements, exploring the context based on the relevant information needed for this deliberation type. This deliberation process iterates, expanding the context depending on the relevant cell, selecting cells based on the context and decision problems encountered, or even adjusting the goal of the agent if needed. The general direction of exploration in the matrix is from the top-left (e.g. after repetition use rational choice or imitation) to the bottom-right (ending with ‘Moral’ values (3.3)). The further to the bottom-right, the more complex the deliberation becomes; thus, when solving
Fig. 3 Contextual deliberation cycle: external context elements (time and location, physical objects, other people, groups, etc.) and internal elements (goals, current interests, value priorities, experience, needs/motives, etc.) feed a deliberation cycle over the numbered CAFCA cells: 1.1 Repetition, 1.2 Imitation, 1.3 Joining-in (Habitual); 2.1 Rational choice, 2.2 Game theory, 2.3 Team reasoning (Strategic); 3.1 (Institutional) rules, 3.2 (Social) norms, 3.3 (Moral) values (Normative)
decision problems with the simpler methods more often, the deliberation process is more efficient while retaining the capability of more complex deliberation.
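The escalation logic of this cycle can be sketched as a simple loop in Python. The strictly linear ordering of the cells, the try_cell and explore_context interfaces and the error handling below are illustrative assumptions; in the framework itself the next cell depends on why the previous one failed, not on a fixed sequence.

# CAFCA cells in the general top-left to bottom-right exploration order
CELL_ORDER = [
    "1.1 repetition", "1.2 imitation", "1.3 joining-in",
    "2.1 rational choice", "2.2 game theory", "2.3 team reasoning",
    "3.1 institutional rules", "3.2 social norms", "3.3 moral values",
]

def deliberate(agent, context, try_cell, explore_context):
    """Start with the cheapest deliberation type and escalate until a decision is found.

    try_cell(agent, cell, context)        -> an action, or None if the cell cannot solve the problem
    explore_context(agent, cell, context) -> context extended with the information relevant for that cell
    """
    for cell in CELL_ORDER:
        context = explore_context(agent, cell, context)   # focus the context on this cell's needs
        action = try_cell(agent, cell, context)
        if action is not None:
            return action, cell
    # 'Moral' values (3.3) is assumed to always produce a decision; this is only a safeguard
    raise RuntimeError("no deliberation cell produced a decision")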
3.2 Information Relevance and CAFCA Cell Transitioning

To make the deliberation cycle (Fig. 3) more concrete, we describe the information relevance and the transitions for each of the cells in the CAFCA matrix. Figure 4 shows the relevant information per cell, i.e. the information that is required from the context to make a decision using that type of deliberation, while Fig. 5 shows the transitions among the different cells. For example, in repetition (1.1) the agent only needs the accessible objects, people and actions currently performed, as this is enough to perform a pre-made plan. If the plan fails, different information is needed, so a switch to another CAFCA cell is required. In, for example, the imitation cell (1.2) the agent is interested in other agents’ behavior and goals, beliefs, and intentions to determine if their behavior is relevant. By switching cells, the perspective on what is relevant to the decision context changes. The context is explored depending on what is relevant for the goal and the current cell of the deliberation matrix, which creates a focused decision context tailored to the deliberation problem of the agent in the given situation.
Fig. 4 CAFCA information relevance (DB = deliberating agent, G = Goals, B = Beliefs, I = Intentions, ToM = Theory of Mind, ToG = Theory of Group). For each cell, the information added relative to the preceding cells is listed:
• Habitual/Individual (Repetition): accessible objects, accessible people, actions currently performed. Accessible means accessible to the DB in the current context.
• Habitual/Social (Imitation): ToM: G, B, I; actions performed by relevant people. Relevant people are those who have a similar goal to the DB; there is a minimal theory of mind.
• Habitual/Collective (Joining-in): ToG: G, B, I; expected action as a team member. The group considered is the group that the DB wants to join; the DB needs information to perform actions to belong to the group.
• Strategic/Individual (Rational choice): useful objects, useful people, utility. The set of objects and people is extended to also include objects that are not directly accessible, for plan making.
• Strategic/Social (Game theory): ToM: mental attitudes; utility. Relevant people are those who can aid or hinder the DB; mental attitudes refers to the information needed to make an estimation of the actions that other agents will perform.
• Strategic/Collective (Team reasoning): ToG: mental attitudes, roles; agents in my group. The mental attitudes and roles are information needed for the DB to make decisions in the group, e.g. status, structure of the team, mental models, roles.
• Normative/Individual ((Institutional) rules): related rules and related laws that are relevant for the current context.
• Normative/Social ((Social) norms): related social norms and people’s opinion towards those norms; social norms related to the current context may hinder or lead the behavior of the DB.
• Normative/Collective ((Moral) values): (moral) values of self, ToM: values, ToG: values; consider the values of self, others and the group.
For readability purposes, Fig. 4 lists for each cell only the information added relative to the previous cells (those that are directly above or to the left). In practice, cells more to the right or bottom can always contain the relevant information from preceding (horizontally and vertically) cells. For example, in ‘Moral’ values (3.3) the accessible objects and people from repetition (1.1) could still be relevant, but only when they are part of the explored context! Using this categorization makes it possible to focus on relevant parts of the context and build a context specifically for the decision problem at hand. Relevant information for each cell also includes information that may hinder achieving the goal, even when this is not directly indicated in our matrix. For example, at the strategic level in rational choice (2.1) the agent may consider stealing something. However, a conflict arises as there is a rule against stealing. To be aware of this, such rules should be part of the decision context when they become relevant, even though rules are not explicitly mentioned in rational choice (2.1) but rather in
‘Institutionalized’ rules (3.1). If a conflict with a rule arises, the agent moves from the strategic to the normative layer, where rules are more explicitly part of the context since now they should be evaluated.

Fig. 5 CAFCA cell transitions. The figure lists the triggers that cause a move to another cell, such as: a plan fails based on missing information (extra actions are needed); the agent has a group-related goal or wants to be part of a group; there is no one to imitate; the default action is not sufficient (fails to achieve the goal); other people can have influence on the agent’s goals; there is an opponent (an opposing individual or an opposing group); the agent decides to join a team/group (common) goal; complying to rules or norms leads to undesirable outcomes; rules or social norms are in conflict; team reasoning provides no solution. In the ‘Moral’ values cell all information is available and all situations can be solved

Figure 5 shows triggers that can cause transitioning between the CAFCA cells. When the deliberation type cannot find a solution from the explored context, either the context may be explored further or a different deliberation type may be considered. Depending on the currently selected deliberation type and context, different transitions are possible, ending in the ‘Moral’ values if the decision drags on. If the system transitions to a cell while the preconditions of that cell are not met, the system will directly move to the next or previous cell. This process is also dependent on the cell, but is related to not meeting the preconditions of the cell. For example, after moving from imitation (1.2) to joining-in (1.3) because the agent wants to join a group, the agent may become aware that he does not share the same goals as the group and moves backward to imitation.
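The information relevance of Fig. 4 can be encoded as a small data structure that drives how the decision context is built per cell. The sketch below is one possible encoding under our own naming assumptions (it also simplifies the inheritance of preceding cells to a linear order); it is not a formalization provided by the authors.

# Information relevance per CAFCA cell (abridged); later cells inherit the needs of preceding cells
RELEVANT_INFO = {
    "1.1 repetition": ["accessible objects", "accessible people", "actions currently performed"],
    "1.2 imitation": ["ToM: goals/beliefs/intentions", "actions of relevant people"],
    "1.3 joining-in": ["ToG: goals/beliefs/intentions", "expected action as team member"],
    "2.1 rational choice": ["useful objects", "useful people", "utility"],
    "2.2 game theory": ["ToM: mental attitudes"],
    "2.3 team reasoning": ["ToG: mental attitudes and roles", "agents in my group"],
    "3.1 institutional rules": ["related rules", "related laws"],
    "3.2 social norms": ["related social norms", "opinions towards those norms"],
    "3.3 moral values": ["values of self", "ToM: values", "ToG: values"],
}

CELL_ORDER = list(RELEVANT_INFO)

def focused_context(percepts, cell):
    """Build the decision context for a cell: the percepts relevant to it and to preceding cells."""
    inherited = CELL_ORDER[: CELL_ORDER.index(cell) + 1]
    wanted = {info for c in inherited for info in RELEVANT_INFO[c]}
    return {kind: value for kind, value in percepts.items() if kind in wanted}

# usage: percepts observed in a bar, filtered for the imitation cell
percepts = {"accessible people": ["bartender", "friends"], "related rules": ["smoking ban"]}
print(focused_context(percepts, "1.2 imitation"))   # the smoking ban is not yet part of the context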
In case there are conflicts, or when multiple cells may be applicable for transitioning towards, one could base the decision of transitioning on the characteristics of the agent. For example, there can be agents that move quicker to the social dimension to find the solution, while other agents will move down (deeper) into the matrix to do more complex but individual deliberation straightaway. There could also be agents that do not even consider breaking the rules or norms; these agents would not use the normative layer (with the exception of ‘Moral’ values (3.3)), except in very extreme circumstances.

We refrain from explicitly stating how to implement each of the cells, as this is out of scope for this paper. However, we can provide some typical examples of formalizations or implementations from the literature. Imitation can for example be imitating the direct neighbors or neighbors in a certain radius in a Cellular Automata implementation, or imitating the agents in the same building or same network in other simulations. BDI agent theory can be used for rational choice [18], as this is problem solving that in principle does not consider social aspects. Game theory is a way of solving problems in the social strategic cell and there is enough literature to be found; see for example [1] for an introduction. For team reasoning, typical examples are the work of Sugden [20], who explains and formalizes team reasoning, or [4], which is a book formalizing teamwork in agent systems. Institutions have been formalized by Esteva [9], and legal norms set by a state can be found in [9]; both these examples can be used for ‘Institutionalized’ rules. Some normative frameworks are [19], which gives an overview of norms in simulation, while [2] shows an architecture that uses norms. For values one could consider the formalization by Heidari [11].
4 The Smoking Ban Simulation Using the Framework

The described framework can assist in making a conceptual model for a social simulation, but can also help to extend an existing model. In this section we take such an existing model, the smoking ban model [3], and extend its deliberative aspects using the concepts presented in the framework. The smoking ban model is a good case study as it incorporates deliberation aspects from multiple cells in the CAFCA matrix, such as legal norms, social norms (imitation) and values. The agents have a value priority based on a predetermined distribution. For example, one could have 60% law following, 30% norm following and 10% preferring to smoke even when smoking becomes banned. The agents are either smokers or non-smokers. The introduction of the smoking ban in restaurants and bars creates a change in context that may or may not trigger a change of behavior. The simulation has three bars and 50 agents that can switch bars and go home. The agent actions are: go into a bar, leave the bar, smoke inside, smoke outside (sub-optimal), and refrain from smoking (very sub-optimal). Considering the CAFCA framework, the aspects that are incorporated are imitation (as following the social norm), ‘Institutionalized’ rules (when deciding not to follow the law) and ‘moral’ values (as in smoking even though there is a law, hedonism).
Using the model shown in Figs. 4 and 5 and the conceptual model of the smoking ban [3], an extension of the agent deliberation can be generated. A smoker can enter a bar they visit regularly. For them this is a normal context, allowing for the repetition deliberation method, leading them to smoke inside. However, sometimes the context can have changed slightly, leading to different considerations. E.g., if the bartender is not the usual one, this might signal a change in management and thus a different smoking policy. Similarly, the introduction of a smoking ban leads to signs stating that it is forbidden to smoke inside. Note that these elements of the context are related to the intention of smoking. Thus, this intention leads to picking up certain cues from the context that might influence the intent!

Given the change in context, the smoker can now take a social view and draw on the presence of the other people in the bar and see what they are doing. When many others still smoke as usual, the smoker might just imitate them and also smoke. However, if they notice the group of friends they usually meet in the bar, and all of the friends are not smoking, they might move to a deliberation including the value of group affiliation and complying with what friends are doing. All of this behavior is still determined by the first row of the CAFCA matrix. Notice that we first look at the other people as people that have a similar goal of being in the bar, and only in second instance look at people that one has a special relation to that is worth taking into consideration. Thus the deliberation changed in nature while taking in more aspects of the context.

It can become even more interesting when a person walks up to the smoker and tells them it is forbidden to smoke. This will trigger more context exploration, as now the smoker has to consider the other person as well, leading to, for example, game theoretical or (social) norm deliberation methods. If the smoker does not know the other person, they might consider a strategic way out of the situation, such as: comply and give up smoking in the bar, go outside to have a smoke, or stir up their friends to join in smoking. These possibilities again lead to deliberation methods going from left to right in the second row of the CAFCA matrix. Finally, the smoker also might consider whether the smoking ban should be followed, and thus come to a more long-term change of behavior. This is the kind of deliberation taking place on the normative row of the CAFCA matrix. Note that this normative reasoning can take place in parallel with or even after the deliberation taking care of the present situation. Thus, the deliberation methods are not all exclusive! The more elaborate ones are triggered when more aspects of the context are taken into consideration. And conversely, more social and normative aspects of the context are taken into consideration when more long-term influences on the agent’s intentions and goals are expected. This flexibility in deliberation processes and interaction with the context is very much like human deliberation. Humans do most things automatically whenever they can (being quick and efficient in deliberation), but in almost any situation they can change to a more complex deliberation if the situation requires it.

Figure 6 shows an example of meta deliberation for an agent. Based on the context, different deliberation can be selected. This example shows an agent that smokes, is not strict rule following and is not a team player. The first term speaks for itself, but the
Fig. 6 Smoking ban context-sensitive agent: smoker, non strict rule following, non team player. The figure maps decision contexts (C) and the relevant information (RI) evaluated per CAFCA cell, e.g. arriving at the bar as part of a plan and locating friends, seeing a smoking ban sign or being told that smoking is forbidden, a friend calling to ask to join at the bar or saying he will leave if the agent keeps smoking (evaluated with game theory), the utility of staying at the bar versus going home when no friends are found, checking the existing social norm in the bar, and finally evaluating own values regarding breaking the norm (e.g. conformity, hedonism, benevolence when with friends, security because of health influence)
The first characteristic speaks for itself. Not being strict in rule following means the agent can consider going to the normative layer to deliberate about rules and norms, and whether to follow them. A strict rule follower could be an agent that never visits the normative layer, except for 'moral' values. Not being a team player means the agent will not engage in team reasoning to get the group of friends together; rather, the agent will just wait at the bar or leave when the friends do not show up. A team player would make an effort to get the group together. As shown in the example, instead of the predetermined preferences of the original smoking ban model [3], the new deliberation takes the context and the agent's characteristics into account to make decisions that could lead to different actions, but also to the same action based on different reasons. Does the agent stop smoking because the law says so (institutionalized rules)? Because their friends tell them to (game theory)? Or because of a value-based decision that smoking is not healthy (moral values)? With this meta deliberation we should be able to answer these questions. The 'only' thing left to do is to formalize the actual framework so we can start answering these questions with our simulations.
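The meta deliberation sketched in Fig. 6 can be read as a selection function from observed context cues to a deliberation method. The fragment below is a minimal illustrative sketch of such a selector, not part of the published framework; the context flags, agent traits, and method names are assumptions made purely for the example.

from dataclasses import dataclass

@dataclass
class Context:
    ban_sign_visible: bool = False   # a new smoking ban is announced by a sign
    told_to_stop: bool = False       # another person asks the smoker to stop
    friends_present: bool = False
    friends_smoking: bool = False

@dataclass
class AgentTraits:
    smoker: bool = True
    strict_rule_following: bool = False
    team_player: bool = False

def select_deliberation(ctx: Context, traits: AgentTraits) -> str:
    """Pick a deliberation method, cheapest first, escalating as more
    context cues become relevant (roughly moving through the CAFCA cells)."""
    if not (ctx.ban_sign_visible or ctx.told_to_stop):
        # Familiar context: habitual repetition is enough.
        return "repetition"
    if ctx.friends_present and not ctx.friends_smoking:
        # Friends matter: joint considerations, or imitation of the group.
        return "team reasoning" if traits.team_player else "imitation"
    if ctx.told_to_stop:
        # A direct social confrontation: strategic or norm-based deliberation.
        return "norm compliance" if traits.strict_rule_following else "game theory"
    # A new rule with no social pressure yet: normative evaluation of the ban.
    return "rule following" if traits.strict_rule_following else "normative deliberation"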
5 Discussion and Conclusion

The presented framework is of course not a strict definition of which information should be considered; some changes to the information written in the cells are most probably possible. However, to move forward on this vague, dynamic, and difficult topic of context, we need these rougher descriptions. Using CAFCA as a baseline for context already structured our research on context and gave it focus. It should be seen as a guideline for a context-sensitive deliberation agent framework. While the matrices may seem quite overwhelming, we expect that when actually implementing them they will be easier to comprehend, as the relevant information is limited by the domain that one wants to study. This, combined with shortcuts in the implementation, can make it feasible to implement a context-sensitive system that is able to visit all the CAFCA cells (if needed). Research on such frameworks can also lead to advances in social science beyond scalable advanced agent models. E.g., which deliberation system should a policy focus on to bring about the best effect? Would imitation or changing rules achieve the highest impact? Further research questions related to the smoking ban are, e.g.: Why is the agent following the law? Is it because of the agent's friends who do not want the agent to smoke? Does the agent imitate other people in the bar? Does the agent actually care about the law and follow it directly? Many questions already pop up when considering this new framework. To conclude: contexts are vague, dynamic, and difficult to formalize. The fact that contexts and deliberation are not independent makes this even harder. While deliberating, the context should be built on the fly, but this context can also influence the deliberation type used, which in turn influences the type of context that is considered. With our framework we made a step forward towards unraveling context for social simulation and, in the future, using it as an explicit element in agent-based simulations. This could be very beneficial, as this way of using contexts can help create efficient simulations, which is key to large-scale, complex, socially aware systems. Think of social simulations for crises or policy support, but also social robotics and virtual characters for training and coaching.
References 1. Binmore, K.: A Very Short Introduction to Game Theory. Oxford University Press, Oxford (2007) 2. Castelfranchi, C., Dignum, F., Jonker, C.M., Treur, J.: Deliberative normative agents: principles and architecture. In: International Workshop on Agent Theories, Architectures, and Languages, pp. 364–378. Springer (1999) 3. Dechesne, F., Di Tosto, G., Dignum, V., Dignum, F.: No smoking here: values, norms and culture in multi-agent systems. Artif. Intell. Law 21(1), 79–107 (2013) 4. Dunin-Keplicz, B., Verbrugge, R.: Teamwork in Multi-Agent Systems: A Formal Approach, vol. 21. Wiley, Hoboken (2011) 5. Edmonds, B.: Complexity and context-dependency. Found. Sci. 18(4), 745–755 (2013)
6. Edmonds, B., Moss, S.: From kiss to kids - an ‘anti-simplistic’ modelling approach. In: Davidsson, P., Logan, B., Takadama, K. (eds.) Multi-Agent and Multi-Agent-Based Simulation, pp. 130–144. Springer, Berlin (2005) 7. Elsenbroich, C., Verhagen, H.: The simplicity of complex agents: a contextual action framework for computational agents. Mind Soc. 15(1), 131–143 (2016) 8. Elsenbroich, C., Verhagen, H.: Integrating CAFCA-A lens to interpret social phenomena. In: ESSA, pp. 161–167 (2019) 9. Esteva, M., Rodriguez-Aguilar, J.A., Sierra, C., Garcia, P., Arcos, J.L.: On the formal specification of electronic institutions. In: Agent Mediated Electronic Commerce, pp. 126–147. Springer, Berlin (2001) 10. Ghorbani, A., Lorig, F., de Bruin, B., Davidsson, P., Dignum, F., Dignum, V., van der Hurk, M., Jensen, M., Kammler, C., Kreulen, K., et al.: The ASSOCC simulation model: a response to the community call for the covid-19 pandemic. Rev. Artif. Soc. Soc. Simul. (2020). https:// rofasss.org/2020/04/25/the-assocc-simulation-model 11. Heidari, S., Jensen, M., Dignum, F.: Simulations with values. Adv. Soc. Simul. 201–215 (2020) 12. Hinch, R., Probert, W., Nurtay, A., Kendall, M., Wymant, C., Hall, M., Lythgoe, K., Cruz, A.B., Zhao, L., Stewart, A., et al.: Effective configurations of a digital contact tracing app: a report to NHSX. 23, 2020 (2020). Retrieved July 2020 13. Jansen, M., Jager, W.: An integrated approach to simulating behavioural processes: a case study of the lock-in of consumption patterns. J. Artif. Soc. Soc. Simul. 2(2) (2000) 14. Jensen, M., Dignum, F., Vanhée, L., Pstrv, C., Verhagen, H.: Agile social simulations for resilience. In: Dignum, F. (ed.) Social Simulation for a Crisis, Chap. 14, pp. 379–408. Springer, Berlin (2021) 15. Kahneman, D.: Thinking, Fast and Slow. Macmillan, Basingstoke (2011) 16. Minsky, M.: The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. Simon and Schuster, New York (2007) 17. Newell, A., Card, S.K.: The prospects for psychological science in human-computer interaction. Hum.-Comput. Interact. 1(3), 209–242 (1985) 18. Rao, A.S., Georgeff, M.P., et al.: BDI agents: from theory to practice. In: ICMAS, vol. 95, pp. 312–319 (1995) 19. Savarimuthu, B.T.R., Cranefield, S.: Norm creation, spreading and emergence: a survey of simulation models of norms in multi-agent systems. Multiagent Grid Syst. 7(1), 21–54 (2011) 20. Sugden, R.: The logic of team reasoning. Philos. Explor. 6(3), 165–181 (2003) 21. Zimmermann, A., Lorenz, A., Oppermann, R.: An operational definition of context. In: Kokinov, B.N., Richardson, D.C., Roth-Berghofer, T., Vieu, L. (eds.) Modeling and Using Context, 6th International and Interdisciplinary Conference, CONTEXT Roskilde, Denmark. Lecture Notes in Computer Science, vol. 4635, pp. 558–571. Springer (2007)
Better Representing the Diffusion of Innovation Through the Theory of Planned Behavior and Formal Argumentation Loic Sadou, Stéphane Couture, Rallou Thomopoulos, and Patrick Taillandier
Abstract Agent-based simulation has long been used to study the dynamics of adoption and diffusion of innovations. However, the vast majority of these works are limited to an abstract and simplified representation of this process, which does not make it possible to explain the reasons for an agent's change of opinion. In order to go further in explaining these changes, we present a generic model based on the theory of planned behavior and on formal argumentation. Each agent can exchange arguments with another and build its opinion on an innovation from the set of arguments it knows. An application of the model is proposed to study the adoption of communicating water meters by farmers on the Louts river (South-West of France). Keywords Agent-based simulation · Diffusion of innovation · Argumentation · Theory of planned behavior
1 Introduction

Many studies have already focused on modeling the process of innovation diffusion. A natural way of studying such a process is to use agent-based modeling [10], each agent representing an individual that can influence the others regarding their adoption of the innovation. However, most of these models represent the opinion of each agent on an innovation by a numerical variable that evolves directly during their interactions with
other agents. This type of representation provides little information on the change of opinion of the agent, as the reasons for the change are not known. To overcome this limitation, a relevant framework is formal argumentation [1]. Argumentation deals with situations where information contains contradictions because it comes from several sources or corresponds to several points of view that possibly have different priorities. While several agent-based models already integrate argument exchanges to represent opinion dynamics processes [8, 11, 13, 15, 18], to our knowledge no model proposes to explicitly integrate arguments to simulate the innovation diffusion process. We therefore propose in this paper a generic model in which the knowledge of each agent is explicitly represented in the form of arguments, which carry information about the innovation. These arguments are the objects that the agents exchange during their interactions. The advantage of this approach is that it allows one to trace the state of knowledge of an agent in order to understand the evolution of its behavior towards an innovation. We also propose to represent the decisional model of the agents with the Theory of Planned Behaviour (TPB). This theory, well established in psychology, offers an integrative framework to formalize the behavior of agents [2, 9].
2 Related Works

Zhang and Vorobeychik [20] proposed a critical review of innovation diffusion models in 2019. In doing so, they categorized these models based on how they represent the decision to adopt. Among these categories, we can distinguish cognitive agent models, which are closest to our concerns: they aim to explicitly represent how individuals influence each other in cognitive and psychological terms. A particularly popular model in this category is the relative agreement model of [6]. This model, which builds on Rogers' observations [12], focuses on the notion of opinion about an innovation. The individual's opinion and uncertainties are represented by numerical values that evolve during interpersonal interactions. While no model of innovation diffusion has used the concept of argument, in the field of opinion dynamics several works have tried to better represent the impacts of interpersonal interactions on opinion through the use of this concept. Some of these works, such as [11], propose a simple formalization of arguments in the form of a numerical value. Although these works show interesting results on processes such as bipolarization, they do not provide information on argumentative reasoning and do not explicitly represent the tensions between arguments. To overcome this limitation, several works such as [4, 8, 15, 17] have proposed the use of the system introduced by Dung [7]. These works illustrate the interest of using such a formalism to represent arguments in the framework of an opinion dynamics model. Among these works, [15] is particularly interesting for us because it proposes a complete process of opinion construction from arguments, which makes it possible to
easily integrate the heterogeneity of the agents through the explicit representation of the point of view of each agent on certain topics (e.g. environment, economy, etc.). We therefore propose to take up, within the integrative framework of the theory of planned behavior, the basis of the innovation diffusion model proposed by [6] and to integrate argumentation to represent the cognition of agents. Concerning argumentation, we used a model close to the one proposed by [15] by enriching it to integrate, among other things, the notions of trust in sources.
3 Proposed Model

3.1 Arguments

Arguments are the objects that represent the pieces of information about the innovation that agents can understand and exchange. While Dung considers arguments as abstract objects with no descriptive data, other works propose to extend this concept by adding semantics to arguments [3, 15, 16]. In this work, we have chosen to use the representation proposed in [15], in which the data composing the argument play the role of support in the knowledge evaluation. An argument is a tuple (I, O, T, S, C, Ts):
• I: the identifier of the argument
• O: the option concerned by the argument
• T: the type of argument (pro: +, con: −)
• S: the proposition of the argument
• C: the criteria (themes) linked to the argument
• Ts: the type of the source of the argument
Arguments are linked together by the notion of attack. An attack happens when an argument challenges another argument. For more details on attacks, see [19].
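To make the tuple concrete, the following is one possible data representation of an argument and of an attack between two arguments. It is only an illustrative sketch; the field names, the Attack type, and the example instance are our own assumptions, not the authors' implementation (which relies on the GAMA argumentation plugin [15]).

from dataclasses import dataclass
from typing import List

@dataclass
class Argument:
    identifier: str        # I: identifier of the argument
    option: str            # O: option (innovation) concerned
    arg_type: str          # T: "+" (pro) or "-" (con)
    proposition: str       # S: the proposition of the argument
    criteria: List[str]    # C: criteria/themes linked to the argument
    source_type: str       # Ts: type of the source of the argument

@dataclass
class Attack:
    attacker: Argument     # the argument that challenges...
    target: Argument       # ...this argument

# Hypothetical example instance
a1 = Argument("a1", "communicating water meter", "+",
              "Real-time readings help avoid exceeding the allocated quota",
              ["productivity", "financial"], "institution")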
3.2 The Agents

The model is composed of individual agents, which represent the potential adopters of an innovation. The decision model of these agents is based on the TPB, which builds on the notion of an individual's intention to behave. This intention is derived from three variables: attitude, subjective social norm, and perceived behavioral control (PBC). The attitude represents the knowledge and opinion that an individual has about a behavior (in our case the use of the innovation). The subjective norm is the individual's perception of the adoption intention of her/his social network. Finally, the PBC is the capacity felt by the individual to adopt the behavior (in terms of cost, time, skills, technical aids, ...).
The intention can thus be calculated with the values of these three variables. Weighting each variable according to its importance, [9] propose the following equation to calculate the intention:

I_i = w_i^a a_i + w_i^s s_i + w_i^p p_i   (1)

with I_i the intention of agent i; a_i, s_i, p_i respectively the values of attitude, subjective norm and PBC of agent i; and w_i^a, w_i^s, w_i^p respectively the weights of attitude, subjective norm and PBC of agent i.

Our proposal is to compute the attitude of the agents from their knowledge about the innovation, modeled as an argument graph. Concerning the subjective norm, we propose to draw inspiration from the work of [5], who suggests that during an interaction between two individuals the influence of one on the other depends on the opinions and certainties they have on the subject. Concerning the PBC, which is specific to the type of innovation studied and to the individual concerned, we propose to transcribe it in the form of a variable specific to each individual, which may or may not be constant depending on the application case. We also define notions of uncertainty on the attitude and the subjective norm through two real variables between 0 and 1 (0: total certainty, 1: total uncertainty). From these two variables, we define the uncertainty on the intention, calculated as follows. Let u_i^a and u_i^s be respectively the uncertainties of agent i on its attitude and subjective norm values; the uncertainty on the intention u_i^I is defined by:

u_i^I = \frac{u_i^a w_i^a + u_i^s w_i^s}{w_i^a + w_i^s}   (2)
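As an illustration of Eqs. (1) and (2), the following short sketch computes an agent's intention and intention uncertainty; the variable names and the example values are our own assumptions, not the authors' code.

def intention(a, s, p, w_a, w_s, w_p):
    """Eq. (1): weighted sum of attitude a, subjective norm s and PBC p."""
    return w_a * a + w_s * s + w_p * p

def intention_uncertainty(u_a, u_s, w_a, w_s):
    """Eq. (2): weighted mean of attitude and subjective-norm uncertainties."""
    return (u_a * w_a + u_s * w_s) / (w_a + w_s)

# Hypothetical example, using the weights reported later in Sect. 4.2
I = intention(a=0.3, s=0.1, p=0.5, w_a=0.229, w_s=0.610, w_p=0.161)
u_I = intention_uncertainty(u_a=0.2, u_s=0.4, w_a=0.229, w_s=0.610)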
Thus, each agent has the following attributes:
• argument graph: an argument graph where the vertices are the arguments known by the agent and each weighted arc represents an attack from one argument to another, with the value of the attack strength for the agent;
• informed: boolean indicating whether the agent has enough arguments (n_args) to evaluate its individual benefit;
• importance of the criteria: each criterion (theme) of the arguments (C of an argument) is linked to a real value between 0 (unimportant) and 1 (very important) which represents its importance for the agent;
• trust in the source type: each type of argument source (Ts element of an argument) is linked to a numerical value between 0 (no trust) and 1 (total trust) which represents the agent's trust in that type of source;
• neighbors: all the agents with which it is linked through the social network;
• attitude: real value between −1 and 1 that quantifies the benefit that the innovation brings to the agent (−1: very negative effect; 1: very beneficial);
• attitude uncertainty: real value between 0 and 1 that represents the agent's uncertainty about its personal benefit. A value close to 0 means little uncertainty and vice versa;
• subjective social norm: real value between −1 and 1 which corresponds to an estimate of the opinion that other agents have of the innovation (−1: very bad opinion; 1: very good opinion);
• uncertainty about the subjective social norm: real value between 0 and 1 that represents the uncertainty about one's subjective norm. A value close to 0 means little uncertainty and vice versa;
• weight of the attitude in the calculation of the intention: real value representing the influence of attitude in the calculation of intention;
• weight of the subjective norm in the calculation of the intention: real value representing the influence of the subjective norm in the calculation of intention;
• weight of PBC in the calculation of intention: real value representing the influence of PBC in the calculation of intention;
• intention: real value between −1 and 1 calculated from attitude, subjective norm, and PBC;
• intention uncertainty: real value between 0 and 1 calculated from attitude uncertainty and subjective norm uncertainty;
• decision status: represents the agent's adoption state (see Fig. 1).

Fig. 1 Agent's decision process
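Purely as an illustration of the attribute list above (and not the authors' GAML implementation), such an agent could be represented along the following lines; all names and types are our own assumptions.

from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Individual:
    argument_graph: Any = None            # weighted graph of known arguments and attacks
    informed: bool = False
    criteria_importance: Dict[str, float] = field(default_factory=dict)  # values in [0, 1]
    source_trust: Dict[str, float] = field(default_factory=dict)         # values in [0, 1]
    neighbors: List[int] = field(default_factory=list)
    attitude: float = 0.0                 # in [-1, 1]
    attitude_uncertainty: float = 0.0     # in [0, 1]
    subjective_norm: float = 0.0          # in [-1, 1]
    norm_uncertainty: float = 0.0         # in [0, 1]
    pbc: float = 0.0                      # perceived behavioral control, agent-specific
    w_attitude: float = 0.0
    w_norm: float = 0.0
    w_pbc: float = 0.0
    intention: float = 0.0                # in [-1, 1]
    intention_uncertainty: float = 0.0    # in [0, 1]
    decision_status: str = "start"        # see the decision states in Fig. 1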
3.3 The Dynamics of the Model

At each simulation step, four processes are executed in the following order:
1. New arguments coming from an external source (advertisement, specialized press article, ...) are added to the arguments of the agents seeking information, i.e. those in the state information request.
2. Each agent having received arguments revises its beliefs according to its new internal argumentation graph to compute its new attitude and intention values and then its adoption state.
3. Each agent, according to its information state and intention, can exchange one or more arguments with its neighbors.
4. The agents revise their beliefs a second time to update their decision variables.

Concerning the first step, an agent can become aware of an argument, mobilize it during interactions, but also forget it. Indeed, empirical research suggests that people have limited abilities to remember information. As in the ACTB model [11], we consider that the number of arguments with which an agent can form an opinion is limited and that the agent forgets the arguments it has not mobilized during a given time; its memory is thus represented as a queue.

Concerning Steps 2 and 4, the revision of beliefs is based on the notion of the strength of an argument for an agent. For an agent i, the strength f_i(a) of an argument a is defined by:

f_i(a) = conf_i(a) \sum_{c \in C} a_c \times i_c   (3)

with C the set of criteria, a_c the value of criterion c for argument a, i_c the importance of criterion c for agent i, and conf_i(a) the confidence that agent i has in the source type of argument a.

From the notion of strength of an argument, we compute a value for a set of arguments. The value v_i(A) for an agent i for the argument set A is defined as follows:

v_i(A) = \frac{\sum_{a \in A} f_i(a) \times type(a)}{\sum_{a \in A} f_i(a)}   (4)

with type(a) = 1 if a.T = +, and type(a) = −1 if a.T = −.
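A minimal sketch of Eqs. (3) and (4), reusing the illustrative Argument representation sketched in Sect. 3.1; how the per-criterion values a_c, the importance values and the source confidences are stored is our own assumption.

def strength(agent_importance, agent_confidence, arg, criterion_values):
    """Eq. (3): f_i(a) = conf_i(a) * sum over c in C of a_c * i_c."""
    conf = agent_confidence[arg.source_type]
    return conf * sum(criterion_values[c] * agent_importance[c] for c in arg.criteria)

def set_value(strengths, args):
    """Eq. (4): strength-weighted mean of argument polarities (+1 pro, -1 con)."""
    total = sum(strengths[a.identifier] for a in args)
    if total == 0:
        return 0.0
    signed = sum(strengths[a.identifier] * (1 if a.arg_type == "+" else -1) for a in args)
    return signed / total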
As seen previously, the intention variable is calculated from the attitude, the subjective social norm and the PBC (see Eq. 1). Agents estimate their attitude from their arguments using the following procedure:
1. Simplify the argument graph (A, R) by removing mutual argument attacks with the following rule: delete each arc (a, a′) ∈ R such that f_i(a′) > f_i(a).
2. Compute the set of preferred extensions of the simplified argumentation graph.
3. Compute the attitude from the preferred extensions: evaluate the value of each extension using Eq. 4. The extension retained is the one whose absolute value is maximal.

We consider that the uncertainty on the attitude does not change during the simulation: it is specific to each agent but remains constant. A first element that intervenes in the decision of the agents is their state of interest with regard to the innovation. The state of interest e_i of an agent i concerning
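For small argument graphs, the preferred extensions used in step 2 can be computed by brute force. The sketch below is a naive illustration of this step and of the extension selection in step 3 (the published model relies on the GAMA argumentation plugin [15]); it is not an efficient implementation and the function names are our own.

from itertools import combinations

def preferred_extensions(args, attacks):
    """Brute-force preferred extensions of a small Dung graph.
    args: collection of argument ids; attacks: set of (attacker, target) pairs."""
    def conflict_free(S):
        return not any((a, b) in attacks for a in S for b in S)
    def defends(S, a):
        # every attacker of a must be attacked by some member of S
        return all(any((d, b) in attacks for d in S)
                   for (b, t) in attacks if t == a)
    def admissible(S):
        return conflict_free(S) and all(defends(S, a) for a in S)
    admissibles = [set(c) for r in range(len(args) + 1)
                   for c in combinations(sorted(args), r) if admissible(set(c))]
    # preferred extensions = maximal admissible sets (w.r.t. set inclusion)
    return [S for S in admissibles if not any(S < T for T in admissibles)]

def attitude(extensions, value):
    """Step 3: retain the extension value (Eq. 4) whose absolute value is maximal."""
    return max((value(E) for E in extensions), key=abs)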
the innovation is calculated from the intention of the agent I_i and its uncertainty on its intention u_i^I:

e_i = \begin{cases} yes & \text{if } I_i - u_i^I > 0 \\ no & \text{if } I_i + u_i^I < 0 \\ maybe & \text{otherwise} \end{cases}   (5)

The decision state of an agent is determined according to the rules presented in Fig. 1. These rules take into account the state of interest and the informed attribute:
1. If an agent does not have enough information (¬informed):
• If the agent is not interested (e_i = no), then its decision state is that it is not concerned (not concerned). It no longer pays attention to information it might receive from other potential adopters.
• If it is interested (e_i = yes), then it enters the information-seeking state (information request).
2. Once it has received enough arguments (informed):
• If the agent is not completely interested (e_i = no or e_i = maybe), it decides not to adopt the innovation (no adoption).
• If the agent is interested (e_i = yes), then it goes to the pre-adoption state (pre adoption). This state corresponds to a period during which the agent thinks about its choice. The interactions it has with other agents can make it change its mind.
3. During the pre-adoption state (pre adoption), the agent continues to receive information:
• If its interest remains positive during a given period of time, the agent adopts the innovation and puts it into practice (adoption).
• If not, the agent does not adopt it.
4. The adoption state is the phase during which the agent puts the innovation into practice. The use of the innovation brings the agent a certain satisfaction, which is measured during q time steps:
• If its average satisfaction during this period is positive, the agent is defined as satisfied with the innovation (satisfied).
• If not, it is dissatisfied with the innovation (unsatisfied).

Interactions between agents. At each step of the simulation an agent can be influenced by another agent. This influence is marked by two disjoint processes: the updating of the subjective norm and the exchange of arguments. Concerning the first point, we consider that an agent interacting with another one will update its subjective norm, i.e. its perception of the adoption intention of its social network. The equation used for this is inspired by the work of [5] on social influence. Let agent i be influenced by agent j; its subjective norm s_i(t + 1) at simulation step t + 1 will be calculated by:
s_i(t + 1) = s_i(t) + \mu (1 - u_j^I)(I_j - s_i(t))   (6)

Similarly, its uncertainty about its subjective norm, u_i^s(t + 1), will be computed by:

u_i^s(t + 1) = u_i^s(t) + \mu (u_j^I - u_i^s(t))   (7)
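An illustrative sketch of the social-influence update in Eqs. (6) and (7); the value μ = 0.1 follows the text below, while the function and variable names are our own assumptions.

MU = 0.1  # influence coefficient, "generally mu = 0.1"

def update_subjective_norm(s_i, u_s_i, I_j, u_I_j, mu=MU):
    """Agent i influenced by agent j: Eqs. (6) and (7)."""
    s_new = s_i + mu * (1 - u_I_j) * (I_j - s_i)   # subjective norm update
    u_new = u_s_i + mu * (u_I_j - u_s_i)           # uncertainty update
    return s_new, u_new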
with μ a coefficient allowing one to accentuate the influence of others; generally μ = 0.1.

The second process corresponds to the direct influence of another agent through the exchange of arguments. As in most opinion dynamics models, we consider that in order to exchange arguments, the agents must not be too dissimilar in terms of opinion. The theory of planned behavior dissociates the subjective norm from the intention, and the method used to compute the similarity between two agents i and j only takes into account the opinions of individuals (for us, the intention). Thus, when agent j interacts with (and thus tries to influence) agent i, the similarity between these individuals is calculated from their intentions and uncertainties. Deffuant [5] proposes a similarity calculation such as:

sim(i, j) = \min(I_i + u_i^I, I_j + u_j^I) - \max(I_i - u_i^I, I_j - u_j^I)   (8)–(9)
with I_i and u_i^I the intention of agent i and its uncertainty.

If this similarity is greater than the uncertainty of agent j trying to influence agent i, then j will be able to give an argument to agent i. The argument transmitted by j will depend on its decision and information state:
• An agent that is not concerned does not transmit an argument.
• An agent who is not satisfied with the innovation or who does not adopt it will transmit an argument against the innovation.
• An agent in information search can transmit an argument for or against the innovation. The type of argument is not taken into account because the agent is at the stage where it does not yet have a stable intention.
• An agent with a positive opinion, i.e. who is in the process of adopting, who adopts, or who is satisfied with the innovation, will transmit a positive argument.
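The interaction test and the choice of the transmitted argument can be summarised as in the sketch below (our own naming; decision states as in Fig. 1), which is an illustration rather than the authors' implementation.

def similarity(I_i, u_I_i, I_j, u_I_j):
    """Eqs. (8)-(9): overlap of the intention intervals of agents i and j."""
    return min(I_i + u_I_i, I_j + u_I_j) - max(I_i - u_I_i, I_j - u_I_j)

def j_can_influence_i(I_i, u_I_i, I_j, u_I_j):
    # Exchange only if the overlap exceeds the uncertainty of the influencing agent j.
    return similarity(I_i, u_I_i, I_j, u_I_j) > u_I_j

def transmitted_argument_type(state_j):
    """Which kind of argument agent j sends, depending on its decision state."""
    if state_j == "not concerned":
        return None        # transmits nothing
    if state_j in ("no adoption", "unsatisfied"):
        return "-"         # argument against the innovation
    if state_j == "information request":
        return "any"       # for or against, intention not yet stable
    return "+"             # pre adoption, adoption or satisfied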
4 Application

4.1 Context

In the Louts region (South-West of France), mechanical meters, which belong to the farmers, fail to estimate water consumption correctly because of their low accuracy. This is an advantage for the farmers, as there is less risk of being overcharged if the allocated quota is exceeded. For this reason, the Ministry of the Environment has required a periodic refurbishment of the metering system every 9 years. The institution in charge of managing water distribution in this area is counting on this regulation to install its new communicating meters. These new meters are more precise and make it possible to follow each farmer's consumption in real time and thus to better manage the use of water. However, the institution is having difficulty convincing farmers to install this device because they perceive it negatively. This obstacle is closely linked to the distrust that farmers have of the institution. A large part of the farmers believe that the new meter does not benefit them and that it is only useful for the institution. However, a minority, more inclined towards new technologies, finds arguments in favor of these meters, such as the management of equipment leaks and the automatic calculation of consumption to best regulate withdrawals in order not to exceed the allocated quota and to limit losses. The analysis of various scientific documents and websites has allowed us to identify thirty-five arguments (14 (40%) against and 21 (60%) in favour) divided according to five criteria: confidence (in the institution), ecology, social, productivity, financial. We propose, based on these arguments, to study the changes of intention and adoption decision of the agents in relation to the communicating water meters. In particular, we propose to follow two indicators: the average intention of agents and the rate of adopters.
4.2 Parameterization of the Model

The theory of planned behavior requires defining a certain number of parameters. To give values to these parameters, we used the psychological profiles defined by [9], which provides for each profile its proportion in the population as well as a mean and a standard deviation for the initial values of the TPB variables. Concerning the other parameters, we used the following values:
• Number of agents: 60 (the number of irrigating farmers of the Louts who subscribe to the pumping system).
• Social network: allocation of connected agents according to the Watts–Strogatz small-world construction algorithm (with average node degree K = 4 and probability of randomly "reconnecting" a social connection p = 0.2).
• Adoption threshold: 0.56 [9].
• Number of simulation steps between adoption and satisfaction calculation (q): 15.
• Quantity of arguments for an agent to be considered as informed (inf_args): 4.
• Maximum quantity of arguments per agent (max_args): 7.
• Arguments initially known: a random number between 1 and max_args arguments drawn randomly from the set of arguments.
• Uncertainty on the attitude: drawn according to a normal distribution N(μ, σ²) with μ and σ defined according to the agent's group.
• Subjective norm: drawn according to a normal distribution N(μ, σ²) with μ and σ defined according to the agent's group. This value evolves according to the interactions with other agents (Eq. 6).
• Uncertainty on the subjective social norm: drawn according to a normal distribution N(μ, σ²) with μ and σ defined according to the agent's group. This value evolves according to the interactions with other agents (Eq. 7).
• Weight of the attitude in the intention: 0.229 [9].
• Weight of the subjective norm in the intention: 0.610 [9].
• Weight of the PBC in the intention: 0.161 [9].

The complete model and all the data and parameters used for the experiments are available on GitHub (https://github.com/LSADOU/Innovation-Argumentation-Diffusion). The model has been implemented using the GAMA platform [14] and in particular its plugin dedicated to argumentation [15].
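A social network with these characteristics can be generated, for instance, with NetworkX's Watts–Strogatz generator; this is only a sketch of the initialisation step, since the published model builds the network inside GAMA.

import networkx as nx

N_AGENTS = 60   # irrigating farmers of the Louts
K = 4           # average node degree
P_REWIRE = 0.2  # probability of rewiring a social connection

social_network = nx.watts_strogatz_graph(n=N_AGENTS, k=K, p=P_REWIRE)
neighbors_of = {agent: list(social_network.neighbors(agent))
                for agent in social_network.nodes}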
4.3 Analysis of the Stochasticity

In a first experiment, we analyze the impact of the stochasticity of the model on the results. The main objective is to find a threshold number of replications beyond which an increase in the number of replications would not imply a significant decrease in the difference between the results. To do this, we compare the average agent intention and the adopter rate for different numbers of replications (from 0 to 500). Figure 2 shows the standard deviation obtained for the two indicators. These results show that 100 replications are enough to obtain a standard deviation close to the limit; it is therefore not useful to go further.
4.4 Evolution of Agent Intention and Number of Adopters

Figure 3 presents the results in terms of the evolution of the average agent intention and adopter rate.
Fig. 2 Standard deviation of mean intention and adopter rate after 3000 simulation steps as a function of number of replications
Fig. 3 Evolution of average intent and adopter rate for 3000 time steps (100 replications)
A first observation is a tendency towards a greater acceptance of communicating water meters, with a final adoption rate higher than 0.7. It is interesting to note that similar phenomena were observed when mechanical water meters were introduced. A second observation is that from the beginning of the simulation the agents have a rather positive opinion of the communicating water meters (average intention higher than 0.2), leading, once the different stages defined by Rogers are passed, to a significant adoption of the technology. The average intention then tends to increase, first marking the influence of agents with a positive view of communicating water meters. In a second phase, this increase becomes almost zero, marking a phase where agents, being more confident in their opinion and more polarized in terms of intention, tend to stop trying to convince agents with an intention very different from theirs. A last important element is that the simulations tend to stabilize after 2500 simulation steps.
5 Conclusion

In this paper, we have proposed a model of innovation diffusion based on the theory of planned behavior and on the explicit representation of the exchange of information about the innovation through arguments. The model makes it possible to take into account the heterogeneity of the actors by linking the information carried by the arguments and the agents' preferences (trust in information sources, preference criteria). We have also integrated an explicit description of the innovation adoption states and the dynamics of the resulting interactions.
An application of this generic model has been proposed for the issue of farmers' adoption of communicating water meters. The first experiments carried out illustrate the type of studies that can be conducted. To go further in this study, an important part of the work will concern data collection. Indeed, some of the parameters of the model used for the experiments were estimated or drawn at random. A future objective is to set up field surveys to obtain these parameters. Similarly, through questionnaires, we would like to obtain data on the evolution of farmers' opinions on communicating water meters in the Louts region, which would allow us to validate the results obtained by simulation.

Acknowledgements This work has been funded by INRAE (MathNum department) and by the #Digitag convergence institute (ANR 16-CONV-0004).
References 1. Besnard, P., Hunter, A.: Elements of Argumentation. The MIT Press, Cambridge (2008) 2. Borges, J.A.R., Oude Lansink, A.G., Marques Ribeiro, C., Lutke, V.: Understanding farmers’ intention to adopt improved natural grassland using the theory of planned behavior. Livest. Sci. 169, 163–174 (2014) 3. Bourguet, J.R., Thomopoulos, R., Mugnier, M.L., Abécassis, J.: An artificial intelligence-based approach to deal with argumentation applied to food quality in a public health policy. Expert Syst. Appl. 40(11), 4539–4546 (2013) 4. Butler, G., Pigozzi, G., Rouchier, J.: Mixing dyadic and deliberative opinion dynamics in an agent-based model of group decision-making. Complexity 2019 (2019) 5. Deffuant, G.: Improving agri-environmental policies: a simulation approach to the cognitive properties of farmers and institutions (2001) 6. Deffuant, G., Huet, S., Amblard, F.: An individual-based model of innovation diffusion mixing social value and individual benefit. Am. J. Sociol. 110(4), 1041–1069 (2005) 7. Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. J. 77, 321–357 (1995) 8. Gabbriellini, S., Torroni, P.: A new framework for ABMs based on argumentative reasoning. In: Advances in Social Simulation, pp. 25–36. Springer, Berlin (2014) 9. Kaufmann, P., Stagl, S., Franks, D.W.: Simulating the diffusion of organic farming practices in two New EU Member States. Ecol. Econ. 68(10), 2580–2593 (2009) 10. Kiesling, E., Günther, M., Stummer, C., Wakolbinger, L.M.: Agent-based simulation of innovation diffusion: a review. Cent. Eur. J. Oper. Res. 20(2), 183–230 (2012) 11. Mäs, M., Flache, A.: Differentiation without distancing. Explaining bi-polarization of opinions without negative influence. PloS One 8(11) (2013) 12. Rogers, E.M.: Diffusion of Innovations, 5th edn. Free Press, New York (2003) 13. Stefanelli, A., Seidl, R.: Moderate and polarized opinions. Using empirical data for an agentbased simulation. In: Social Simulation Conference (2014) 14. Taillandier, P., Gaudou, B., Grignard, A., Huynh, Q.N., Marilleau, N., Caillou, P., Philippon, D., Drogoul, A.: Building, composing and experimenting complex spatial models with the GAMA platform. GeoInformatica 23(2), 299–322 (2019) 15. Taillandier, P., Salliou, N., Thomopoulos, R.: Coupling agent-based models and argumentation framework to simulate opinion dynamics: application to vegetarian diet diffusion. In: Social Simulation Conference 2019, Mainz, Germany (2019) 16. Thomopoulos, R., Moulin, B., Bedoussac, L.: Supporting decision for environment-friendly practices in the agri-food sector. Int. J. Agric. Environ. Inf. Syst. 9(3), 1–21 (2018)
17. Villata, S., Cabrio, E., Jraidi, I., Benlamine, S., Chaouachi, M., Frasson, C., Gandon, F.: Emotions and personality traits in argumentation: an empirical evaluation. Argum. Comput. 8(1), 61–87 (2017). https://doi.org/10.3233/AAC-170015 18. Wolf, I., Schröder, T., Neumann, J., de Haan, G.: Changing minds about electric cars: an empirically grounded agent-based modeling approach. Technol. Forecast. Soc. Chang. 94, 269–285 (2015) 19. Yun, B., Thomopoulos, R., Bisquert, P., Croitoru, M.: Defining argumentation attacks in practice: an experiment in food packaging consumer expectations. In: Graph-Based Representation and Reasoning, pp. 73–87 (2018) 20. Zhang, H., Vorobeychik, Y.: Empirically grounded agent-based models of innovation diffusion: a critical review. Artif. Intell. Rev. 1–35 (2019)
Using Qualitative Data to Inform Behavioural Rules
Documenting Data Use in a Model of Pandemic "Emotional Contagion" Using the Rigour and Transparency Reporting Standard (RAT-RS)

Patrycja Antosz, Ivan Puga-Gonzalez, F. LeRon Shults, Justin E. Lane, and Roger Normann

Abstract This paper utilizes the recently developed Rigour and Transparency Reporting Standard as a framework for describing aspects of the use of data in an agent-based modelling (ABM) EmotiCon project studying emotional contagion during the COVID-19 pandemic. After briefly summarizing the role of the ABM in the wider EmotiCon project, we outline how we intend to utilize qualitative data from a natural language processing analysis of Twitter data and quantitative data from a nationally representative survey in model building. The presentation during the SSC 2021 will elaborate on the outcome of implementing the idea.

Keywords Emotional contagion · COVID-19 · Social media analysis · Natural language processing · Agent-based modeling · Artificial sociality
1 Introduction

This paper, for the Using qualitative data to inform behavioral rules special track of the Social Simulation Conference 2021, describes plans for the use of Twitter content in the construction of an agent-based model (ABM) in the EmotiCon project studying emotional contagion during the COVID-19 pandemic. At the current stage, we describe an idea of informing an ABM with Twitter content, and we cannot guarantee that, given possible obstacles in data processing and preparation, this idea will become reality. The conference presentation will report on the progress in implementation. Should we succeed, we would like to share and reflect on our process with conference participants. Should we fail, we would like to point to the obstacles we faced, and engage in a discourse on possible ways forward in using natural language
processing (NLP) of Twitter content to inform ABMs. Either way, we look forward to discussing these issues with the broader ABM community. The paper is organized as follows. In the next section, we briefly describe the Emotional contagion (EmotiCon) project. Subsequently, we use aspects of the Rigour and Transparency Reporting Standard [1] to outline the idea of how Twitter content can (hopefully) be used to inform the EmotiCon ABM. The RAT-RS consists of five question suites: (a) model aim and context, (b) conceptualization (what and why?), (c) operationalization (how and why?), (d) experimentation, and (e) evaluation. We follow the authors’ advice and fill in the RAT-RS during the modelling process (ibid., p. 13). Last, we share reflections on representativeness of Twitter content.
2 The Emotional Contagion (EmotiCon) Project

Computer modeling and simulation has been widely used during the pandemic to study and forecast the spread of COVID-19, contributing to the policy goal of "flattening the curve" of disease contagion so that cases do not pass a threshold beyond which healthcare systems collapse due to lack of capacity to care for those who are infected and sick. In a similar way, the rapid spread of misinformation, stigma, and fear through "emotional contagion" [2] can pass a threshold beyond which other interpersonal and institutional systems necessary for social cohesion might collapse, leading (for example) to ideological polarization, culture wars, or psychological alienation and anxiety. Acknowledging existing ABMs of emotional contagion, including some based on social media analysis [3, 4], and the extensive use of ABMs to study some of the possible causes and impacts of the COVID-19 pandemic [5, 6], the EmotiCon ABM we propose will bridge the two research trends, while utilizing a combination of insights from qualitative and quantitative data. The EmotiCon ABM is developed with the support of the Research Council of Norway-funded project "Emotional Contagion: Predicting and Preventing the Spread of Misinformation, Stigma, and Fear during a Pandemic." The main goal of the EmotiCon project is to develop user-friendly multi-agent artificial intelligence tools that will enable Norwegian municipalities and other governmental agencies to (1) analyze and forecast the societal effects of their public health responses and social countermeasures to pandemics and (2) experiment with alternative intervention strategies for "flattening the curve" of psychologically and politically debilitating social contagion before trying them out in the real world. EmotiCon has collected Twitter content (via access to the Twitter streaming API and new NLP techniques to analyze its content) and attitude data (via a representative panel survey) that will be used to specify, calibrate, and validate an artificial society (or "digital twin") of Norway. Simulation experiments will be designed to explore the psychological mechanisms and cultural factors that have shaped the societal reaction to COVID-19 and to forecast the patterns in which individuals and communities are likely to understand and react to future pandemics.
3 Using Data to Inform the EmotiCon ABM

1. Model aim and context

1.2 What is the purpose of the model?
The main purpose of the ABM EmotiCon model is explanation: establishing a possible causal chain from a set-up to its consequences in terms of the mechanisms in a simulation [7].

1.3 What domain does the model research?
Attitude formation towards following local restrictions related to the COVID-19 pandemic.

1.4 What (research) question(s) is the model addressing?
If, and under what conditions, is the coherent causal mechanism of multidimensional (cognitive, affective and behavioural) attitude formation, as proposed in the agent-based model, able to elicit the patterns of declared compliance with safety regulations in the Norwegian population during the COVID-19 pandemic?

1.5 What is the MAIN driver for your initial model development step?
A selection of theories. These include identity fusion theory, moral foundations theory, reactance theory, cognitive dissonance theory, and emotional contagion theory. Elements of those were previously implemented as ABMs (see Conceptualisation).

1.6 Explain why this MAIN driver was chosen?
Because the theories propose validated parts of the wider causal mechanism that can potentially be responsible for forming attitudes toward compliance with community-based rules of conduct.

1.7 What is the target system that this model reproduces?
The model represents a process of attitude formation among humans. The attitude in question has to do with willingness to follow the local dugnad (that is, follow social distancing and other regulations or recommendations). In the beginning of the pandemic, the Norwegian government opted to use the word 'dugnad' to instil a sense of shared responsibility and solidarity in the Norwegian population. This wording was not chosen at random. Dugnad refers to an ancient Old Norse, Viking-era custom and means participating in unpaid, voluntary, and often joint work efforts. In traditional Norwegian society this was a widespread form of collective self-help where neighbours and rural people joined forces to help each other with tasks that were too demanding for the individual. Everyone helped out, and everyone got help. The modern variant of 'dugnad' most Norwegians will know from helping the local football club or their children's kindergarten, or from doing the annual tidying up of the neighbourhood in the spring. Many Norwegians also spend a lot of their time each
year doing work for various voluntary organisations under the heading of dugnad. Dugnad as a concept is still a strong part of the Norwegian identity and culture, and in a national poll in 2004 it was awarded the status of Norway's national word [8]. The Norwegian government's use of the word dugnad to mobilise the population to follow the guidelines has spurred some criticism, since the burdens of the pandemic have not been equally shared. Certain groups in society have carried most of the burdens of the COVID-19 measures: vulnerable children, lonely youths, individuals exposed to violence in their homes, the old and sick, those who lost their jobs, or those who had to close their place of business. The prime minister of Norway was herself fined in the winter of 2021 for not following the guidelines, when she gathered a larger group of people for a private dinner. We also observe that the government's use of the word dugnad has occurred less frequently when public officials have spoken publicly about new or revised measures.

The attitude towards dugnad is assumed to have three components: cognitive (representation of opinion about COVID-19), affective (emotions related to COVID-19) and behavioural (following restrictions of the local government). Humans share information via two channels: face to face and online. When communicating face to face, both the cognitive and the affective information that lead to following restrictions are shared. Cognitive dissonance mechanisms govern the processing of received information, and emotional contagion mechanisms govern the rules for the diffusion of affective content. In online communication, cognitive content is processed in the same way; however, the transmission of the affective state from the sender to the receiver is disrupted. Moreover, in the face-to-face and online communication channels humans partake in different social networks. Real-life networks are set up on the basis of socio-demographic homophily, but online communication networks are based on homophily and heterophily of opinions. As a result of the decreased variance in online opinions that a human agrees with, online opinion bubbles are more homogenous than their real-life equivalents. On the basis of information and emotion diffusion, humans form their willingness to comply with the local dugnad. There is no representation of the natural environment in the model; however, information (the cognitive, affective and behavioural content of the attitude) is represented as a separate agent type.

1.8 Explain why this target system and these boundaries were chosen
The EmotiCon project was funded as part of an emergency call from the Research Council of Norway in which researchers were invited to propose projects aimed at producing policy-relevant insights for the Norwegian government's response to COVID-19 and future pandemics.

2. Conceptualisation

2.1 What previous model is used (or models are used) as driver in this model? Give reference(s) to the model/models
• The HUMAT socio-cognitive architecture [9];
• The Terror management model (TMM; anxiety theory, social identity theory, identity fusion theory) [10];
• "A Generative Model of Mutually Escalating Anxiety between Religious Groups" (MERV) [11].

2.2 Why is/are this/these previous model(s) used?
Those are models that contain elements of theoretical mechanisms that are relevant for attitude formation.

2.3 What are the elements of this/these previous model(s)?
HUMAT:
• cognitive content of information;
• cognitive dissonance mechanisms of adopting received information;
• mechanisms used by humans to estimate source persuasiveness.
Terror management model (TMM):
• Agents with characteristics such as religiosity (operationalized as a tendency to believe in supernatural agents and participate in rituals that imaginatively engage them), tolerance for threats, and group identity;
• Terror management mechanisms for easing anxiety by participating in rituals.
MERV:
• Agents similar to the ones in the terror management model, but with interaction rules also informed by social identity theory and identity fusion theory as specified and integrated through the Information Identity System [12];
• Involved an adaptation of Epstein's Agent_Zero.

2.4 and 2.5 Describe how you moved from the previous model elements to the elements of your model, and explain why elements of the previous model were included, excluded or changed in the current model
EmotiCon element | Previous model element | Reason for inclusion

Agents:
Socio-cognitive architecture: cognitive dissonance in information processing | HUMAT | The need for representing cognitively motivated information exchange over social networks
Affect | TMT & MERV | The need to represent emotional contagion in social interactions

Interactions:
Socio-cognitive architecture: rules for information exchange (inquiring and signaling) | HUMAT | The need for representing cognitively motivated information exchange over social networks
Rules for assessing source persuasiveness (implementation of social identity theory & identity fusion theory) | MERV | The need to distinguish different information processing depending on the information source (sources belonging to the ingroup and outgroup of ego)
2.8 Describe the procedures and methods used to conceptualise the key target system elements as model elements. How did you make use of the evidence? What other sources did you utilise to conceptualise model elements?
EmotiCon has three major theoretical pillars:
– cognitive consistency theories;
– emotional contagion theory; and
– social identity & identity fusion theories.

Cognitive consistency theories are a group of theories that develop the concept originally described as cognitive dissonance by Leon Festinger. Dissonance between cognitions (or, more broadly, cognitive inconsistency) is a motivational force for change in knowledge [13] or behaviour [14]. Dissonance reduction can be achieved in various ways [15]. Following an assumption from motivational intensity theory, the higher the motivation, the more effortful the strategies that can be implemented in the search for a better alternative [16, 17]. Therefore, the higher the level of dissonance, the more effortful the strategies agents can implement to resolve it [18]. Ex-ante strategies of EmotiCon agents to resolve cognitive dissonance, ordered according to difficulty, involve:
– distraction and forgetting;
– inquiring: collecting information from alters, which results in changing dissonant/consonant cognitions;
– signaling (in cases of social dilemmas): trying to convince alters of ego's point of view.

Emotional contagion theory (ECT) was formulated and developed over the last few decades by Elaine Hatfield and colleagues [2, 19]. Our Norwegian surveys include the "emotional contagion scale" developed and validated by Doherty [20], which has been translated and validated in several other contexts. The original hypotheses proposed and empirically tested in ECT deal with face-to-face contact between individuals, and its main postulated mechanisms have to do with mimicry, synchrony, and the effect of facial feedback on emotional experience [21]. Under certain conditions, people tend to "catch" the emotions of those with whom they are interacting. However, the concept of "emotional contagion" has also been adopted and adapted by several other studies of online interaction, focusing on the spread of emotion across online social networks (e.g., [22–24]). The fact that the original
mechanisms of ECT cannot function in the same way in the spread of emotion on online (non-face-to-face) networks is sometimes glossed over in this literature. Given the differences in the mechanisms at work in these two kinds of interaction (offline and online), we plan to have the simulated agents in our ABM be situated within both types of network (guided by distinct behavioral and interaction rules). The EmotiCon ABM also includes mechanisms informed by social identity theory (SIT) and identity fusion theory (IFT), both of which shed light on the ways in which a person's sense of identity can affect their motivation. SIT hypothesizes that individuals within different social groups attempt to differentiate themselves from each other as a result of pressures to evaluate their own group positively through in-group/out-group comparisons [25]. These value-laden social differentiations can ratchet up tension between groups, which can impact people's motivation to protect their group. IFT postulates that motivation toward extreme behaviors is enhanced when a person's sense of their group becomes functionally equivalent to their sense of self [26]. Less fused individuals may have strong beliefs about how one "ought" to act (e.g., in defense of one's group), but highly fused people are more willing to actually engage in extreme behaviors in defense of the group (e.g., stigmatizing behaviors or sharing conspiracy theories related to the out-group). Our surveys included validated measures of both SIT and IFT, and we plan to include these as variables in our simulated agents.
4 Operationalization

3.1 and 3.2 What data element(s) did you include for implementing each key model element in the model's scope? And are these data elements implemented with the help of qualitative or quantitative data or further models?

Agent characteristic at initiation | Used data | Data scale | Source
Socio-demographic profiles | Gender, age group, region of Norway | Quantitative, categorized | Population statistics (consistent with the data used for computing weights for the EmotiCon surveys)
Susceptibility to emotional contagion | Scale of emotional contagion susceptibility | Quantitative | EmotiCon surveys
COVID-19 related anxiety | Single question | Quantitative | EmotiCon surveys
Identity fusion with the nation | Single question | Quantitative | EmotiCon surveys
Willingness to follow dugnad | Single question | Quantitative | EmotiCon surveys
Supported political party | Single question | Categorical | EmotiCon surveys
Relevant motives for behaviour | Twitter content | Qualitative | Social media analysis
3.3 Explain how data affected the way you implemented each model element and why
Our choice of data elements was informed by our interest in modeling agent attitudes, behaviours and interactions in a way that simulated the spread of misinformation, anxiety, and stigma in the target reality (Norway).

3.4 What are the data elements used for in the modelling process: specification, calibration, validation, other?
Using data drawn from Twitter, we can address aspects of the calibration of motives and networks. Given that retweet, reply, and tagging data are available for every tweet in the Twitter database, we can map the interactions between individuals as an interaction network. This interaction network can be used to create a 1:1 initialization network for the ABM. Alternatively, parameters of this network, such as density, clustering, and average shortest path length, can be used to parameterize larger, national-level networks as approximate starting points for the ABM. In addition, the Natural Language Processing (NLP) data generated from the analysis of the tweets can provide useful data for sub-model validation, specifically in validating patterns of online activity in diverse socio-demographic groups. The NLP and the survey analysis, along with subject matter expertise (SME) in social networks, emotional contagion, and public health communication, are being used for specifying the model elements and interactions. The calibration of the model will rely more on SME interpretation of the NLP. Will the simulated agents in the ABM behave as expected, and will their interactions lead to plausible macro-level outcomes? The validation of the model will rely more on the survey data. Ideally, we will be able to validate simulation experiments by comparing them to the actual changes in the Norwegian population from October 2020 to April 2021.

3.5 Why for this use and not another one?
This data is easily available and the infrastructure for gathering it existed prior to the pandemic. It therefore represents a ready-made dataset that can not only address issues of network initialization and emotion, but is also a dynamic dataset that can be useful in validation and/or parameterization of the model at multiple stages of the research.

3.6 Did required data exist?
Twitter: yes, however it needed preparation and analysis. Survey: primary data.
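As a sketch of how the Twitter interaction network described in 3.4 could be summarised into parameters for a larger synthetic network (an illustration only; how edges are derived from retweets, replies and tags is an assumption, not the project's actual pipeline):

import networkx as nx

def network_parameters(edges):
    """edges: (user_a, user_b) pairs derived from retweets, replies and tags."""
    g = nx.Graph()
    g.add_edges_from(edges)
    # average shortest path length is only defined on a connected graph,
    # so compute it on the giant component
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return {
        "density": nx.density(g),
        "clustering": nx.average_clustering(g),
        "avg_shortest_path": nx.average_shortest_path_length(giant),
    }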
3.7 If it existed, did you use it? Yes.

3.8 If you did not use it, why not? All data that could be found quickly were used.

3.9 For the existing data you used, provide details (a description) about data sources, sampling strategy, sample size, and collection period. For the data you collected, provide details about how it was collected, sampling strategy, sample size, and collection period

Twitter: From August to October 2020, we collected Twitter data using the Twitter streaming API with the following hashtags: "#koronaviruset", "#koronavirus", "#coronavirusnorge", "#karantene", "#koronaNorge", "#covid19norway", "#holdavstand", "#covid19sweden", "#smittestopp", and "#coronakrisen". These hashtags are considered relevant for emotional contagion in Scandinavia in the wake of the COVID-19 pandemic. Due to the assumed linguistic uniqueness of the tweets (all being Scandinavian languages), we did not limit the API to any geographic region or language. The final data set comprised 26,727 unique tweets. All tweets that included at least one of the featured hashtags should have been returned by the API, and all tweets returned by the API were included in the dataset. Upon review of the dataset, we found tweets in non-Scandinavian languages that used similar hashtags and were also returned by the Twitter API. Because Twitter algorithms utilize hashtags and trending topics as key aspects of the newsfeed that are not limited by language (in fact, some users already have access to automatic translation), we included in our analysis all tweets scraped from the API that were found to include our target hashtags, because any of those tweets could have an effect on other users searching for their key hashtag.

Survey: The survey data was collected in collaboration with Kantar using their Gallup Panel. The panel consists of about 46,000 people who regularly respond to surveys. The Gallup Panel is put together for representativeness, and the goal is for the Gallup Panel to be a miniature Norway that reflects the entire country's population. In our survey, 1200 people were interviewed in October–November 2020 and in April 2021. The margin of error (with a full 50–50 distribution) is about ±2.8%. The two data points gave us the opportunity to analyze the data longitudinally. The respondents had a unique ID number allowing us to track them; 763 respondents answered both surveys. The sample data was weighted using age, gender, and geography as weighting variables. The survey was developed using measures validated in previous research on the key dimensions in the survey such as anxiety, stigma, misinformation, conspiracy, personality traits, threat assessment, religiosity, and fusion index. In addition to this, Kantar provided us with about 20 demographic variables from the Gallup Panel. A limitation of the survey is that it only reaches the part of the Norwegian population with access to the internet; in 2019 this was 98% of the adult population.
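For reference, the reported margin of error follows from the standard large-sample formula for a proportion at the 95% confidence level (this assumes simple random sampling, which the weighted panel only approximates):

```latex
\mathrm{MoE} = z \sqrt{\frac{p(1-p)}{n}} = 1.96 \sqrt{\frac{0.5 \times 0.5}{1200}} \approx 0.028 \approx \pm 2.8\%
```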
3.10 Justify your data collection decisions from 3.9

The period of the pandemic, the duration of the project. While Twitter data is still being collected, the initial Twitter dataset was isolated between August and October 2020. This time window is largely arbitrary, but it falls between periods of lockdowns and approvals of vaccines. As such, we believe that it is justifiably representative of the COVID-19 conversation in our key area.

3.11 If you needed to analyse or transform/manipulate the data before including them in the model (regardless if you collected data yourself or you used existing data), what did you do and why did you choose this specific approach?

Twitter: Twitter data was analysed using ALAN Analytics' Pythia cultural ontology (ALAN Analytics s.r.o., 2020). The Pythia platform combines advanced AI analytics with NLP to structure data in a way that is relevant to creating psychographic profiles of individuals in the dataset. This, in turn, allows for the creation of social digital twins. These digital twins can be further developed as multi-agent AI models to understand the rhetoric of online social networks. Pythia employs a unique term and phrase ontology that currently understands over 30,000 terms and phrases and works in over 40 languages. It rates texts based on their prevalence or relationship to over 50 different socio-cultural and psychological dimensions and can create personality and moral profiles based on unstructured text data. Tweets were analysed for moral dimensions, as well as their readability, biological themes, and gender and temporal focus, and users were also profiled based on their Big-5 personality (or "OCEAN") factors. Tweets were also subjected to classifier systems trained to detect misinformation about COVID-19, threats, and hate speech. The output of the analysis is the probability that an individual will view the tweet as misinformation or hate speech, and the probability that the individual is currently experiencing high or low levels of social, predation, natural, or contagion threats. In addition, we also created an "engagement score" for each tweet, which is the sum of the tweet's favourites, replies, and retweets (the three primary actions for engaging with information on the social network).

Survey: Either no or minimal data preparation was needed.
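The hashtag filter and the engagement score described above are straightforward to reproduce. The sketch below assumes the relevant counts were already extracted into plain dictionary fields during preprocessing; the field names are an assumption, since reply counts are not exposed at every Twitter API access level.

```python
TARGET_HASHTAGS = {
    "#koronaviruset", "#koronavirus", "#coronavirusnorge", "#karantene",
    "#koronanorge", "#covid19norway", "#holdavstand", "#covid19sweden",
    "#smittestopp", "#coronakrisen",
}

def matches_target_hashtags(text):
    """Case-insensitive check that a tweet text contains at least one target hashtag."""
    lowered = text.lower()
    return any(tag in lowered for tag in TARGET_HASHTAGS)

def engagement_score(tweet):
    """Sum of favourites, replies and retweets, as defined in the text.
    Assumes these counts were extracted into plain fields during preprocessing."""
    return tweet["favourite_count"] + tweet["reply_count"] + tweet["retweet_count"]

# Example
tweet = {"text": "Husk #holdavstand! #koronaNorge",
         "favourite_count": 3, "reply_count": 1, "retweet_count": 2}
if matches_target_hashtags(tweet["text"]):
    print(engagement_score(tweet))  # -> 6
```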
5 Conclusion and Next Steps

Using Twitter content to inform agent-based model building has become increasingly popular. For a number of agent-based models, Twitter itself is the represented target system [27]. Other models [28], just like ours, acknowledge that online tools (e.g., Twitter) are indeed important communication channels that provide individuals with opportunities to exchange information alongside other, perhaps more traditional, channels (e.g., face-to-face interactions, newspapers). Irrespective of whether the Twitter reality is the whole or only a part of the target system, major concerns regarding the representativeness of Twitter content remain when the data are used to inform agent-based models.
In this section, we would like to reflect on two specific types of representativeness, i.e., how well a Twitter analysis represents (1) the investigated topic and (2) the communication on Twitter on that topic.

The most common technique for sampling tweets is the topic-based approach [29], also used in our study. In such hashtag-centred studies, the representativeness of tweets for the investigated topic depends on the choice of search terms that are tracked by the API user, on the one hand, and the popularity of those hashtags in relation to all other Twitter content, on the other. In reality, both are difficult to estimate. Validity of the search terms depends on the existence and early identification of widely-adopted hashtags. Given the thematic context of our study (i.e., the SARS-CoV-2 pandemic), the implemented research design and the urgency to collect relevant data that would otherwise be lost (especially the fact that Twitter data collection was executed before the agent-based model was conceptualized), we decided to select a broad scope of COVID-19-related hashtags. Only some of those hashtags directly referenced the local measures against the spread of the pandemic (physical distancing and quarantining). In the time period between August and October 2020, all of our chosen hashtags might have been sufficient to represent the topic of following dugnad. As the pandemic evolved and the local measures changed, the choice of search terms should be adjusted to maintain the sensitivity of these indicators.

Popularity of the hashtags has a complex relationship with thematic representativeness. On the one hand, popularity of the hashtags indicates validity. In a perfect world where unlimited, free access to Twitter for scientific purposes was granted, it would be possible to collect the entire population of tweets containing a hashtag. In the world we live in, probably for justifiable reasons, access to Twitter content is limited by the Twitter API to a maximum of 1% of the total current Twitter volume. Therefore, when search terms are very popular, the Twitter API returns a non-exhaustive sample of tweets. In practice, this necessitates limiting the number of selected hashtags and selecting only those most relevant for the investigated topic.

Hashtag-centred tweet collection raises even more serious concerns with respect to representativeness for Twitter communication. Hashtags represent a self-selection of tweets (and users). Therefore, the collected data misses an unknown volume of content which may relate to the same issues but does not contain any relevant text markers [30]. A random sample of tweets obtained with the Streaming API is a viable alternative to hashtag-centred sampling. Even though it is free of self-selection bias, it faces other significant limitations [29]. For the time being, hashtag-centric studies seem to be better suited for tracking changes over time.

The last reflection we wanted to make has to do with our experience using the RAT-RS standard to report the use of data in the ABM. Overall, we found the RAT-RS standard intuitive and easy to use. It is important to highlight that we used the standard at the stage of model conceptualization, when our data was already collected but not yet analysed. In other words, we knew what data we had, but did not yet know what information it carried. Our ambition was to incorporate the knowledge we had about the available data into model conceptualization, alongside the theoretical assumptions guiding the model mechanics.
In this particular context, considering data availability early in the process constrained ideas for concept development. We treated
RAT-RS as a tool that gave us a good overview of where certain elements could fit in. We had the experience of zooming out and identifying the role of larger chunks of a puzzle in a more harmonious whole. This is particularly useful in a relatively complex model such as ours. However, once the data is analyzed, more detailed and method-tailored descriptions will be necessary to aid in effective communication and replicability.
References
1. Achter, S., Borit, M., Chattoe-Brown, E., Siebers, P.-O.: RAT-RS: a reporting standard for improving the documentation of data use in agent-based modelling. Int. J. Soc. Res. Methodol. (in press)
2. Hatfield, E., Cacioppo, J.T., Rapson, R.L.: Emotional contagion. Curr. Dir. Psychol. Sci. 2(3), 96–100 (1993). https://doi.org/10.1111/1467-8721.ep10770953
3. Bosse, T., Duell, R., Memon, Z.A., Treur, J., van der Wal, C.N.: Agent-based modeling of emotion contagion in groups. Cogn. Comput. 7(1), 111–136 (2015)
4. Xiong, X., et al.: An emotional contagion model for heterogeneous social media with multiple behaviors. Physica A 490, 185–202 (2018)
5. Dignum, F., et al.: Analysing the combined health, social and economic impacts of the coronavirus pandemic using agent-based social simulation. Mind. Mach. 30(2), 177–194 (2020)
6. Currie, C.S., et al.: How simulation modelling can help reduce the impact of COVID-19. J. Simul. 14(2), 83–97 (2020)
7. Edmonds, B., et al.: Different modelling purposes. JASSS 22(3), 6 (2019)
8. Lorentzen, H., Dugstad, L.: Den norske dugnaden. Historie, kultur og fellesskap. Høgskole Forlaget (2011)
9. Antosz, P., Jager, W., Polhill, G.: Simulation model implementing different relevant layers of social innovation, human choice behaviour and habitual structures. SMARTEES Deliverable (2019)
10. Shults, F.L., Lane, J.E., Diallo, S., Lynch, C., Wildman, W.J., Gore, R.: Modeling terror management theory: computer simulations of the impact of mortality salience on religiosity. Relig. Brain Behav. 8(1), 77–100 (2018)
11. Shults, F.L., Gore, R., Wildman, W.J., Lynch, C., Lane, J.E., Toft, M.: A generative model of the mutual escalation of anxiety between religious groups. J. Artif. Soc. Soc. Simul. 21(4) (2018). https://doi.org/10.18564/jasss.3840
12. Lane, J.: Understanding Religion Through Artificial Intelligence: Bonding and Belief. Bloomsbury Academic (2021)
13. Festinger, L.: Reflections on cognitive dissonance: 30 years later. In: Harmon-Jones, E., Mills, J. (eds.) Cognitive Dissonance: Progress on a Pivotal Theory in Social Psychology. American Psychological Association, Washington (1999)
14. Harmon-Jones, E., Harmon-Jones, C.: Testing the action-based model of cognitive dissonance: the effect of action orientation on postdecisional attitudes. Pers. Soc. Psychol. Bull. 28(6), 711–723 (2002)
15. McGrath, A.: Dealing with dissonance: a review of cognitive dissonance reduction. Soc. Personal. Psychol. Compass 11(12), e12362 (2017)
16. Brehm, J.W., Wright, R.A., Solomon, S., Silka, L., Greenberg, J.: Perceived difficulty, energization, and the magnitude of goal valence. J. Exp. Soc. Psychol. 19(1), 21–48 (1983)
17. Brehm, J.W., Self, E.A.: The intensity of motivation. Annu. Rev. Psychol. 40(1), 109–131 (1989)
18. Harmon-Jones, C., Harmon-Jones, E.: Toward an increased understanding of dissonance processes: a response to the target article by Kruglanski et al. Psychol. Inq. 29(2), 74–81 (2018)
19. Hatfield, E., Bensman, L., Thornton, P.D., Rapson, R.L.: New perspectives on emotional contagion: a review of classic and recent research on facial mimicry and contagion (2014)
20. Doherty, R.W.: The emotional contagion scale: a measure of individual differences. J. Nonverbal Behav. 21(2), 131–154 (1997)
21. Hatfield, E., Cacioppo, J.T., Rapson, R.L.: Emotional Contagion. Cambridge University Press, Cambridge (1993)
22. Coviello, L., et al.: Detecting emotional contagion in massive social networks. PLoS One 9(3) (2014). https://doi.org/10.1371/journal.pone.0090315
23. Goldenberg, A., Gross, J.J.: Digital emotion contagion. Trends in Cognitive Sciences (2020)
24. Zeng, R., Zhu, D.: A model and simulation of the emotional contagion of netizens in the process of rumor refutation. Sci. Rep. 9(1), 1–15 (2019)
25. Tajfel, H. (ed.): Social Identity and Intergroup Relations, Reissue edn. Cambridge University Press (2010)
26. Swann, W.B., Buhrmester, M.D.: Identity fusion. Curr. Dir. Psychol. Sci. 24(1), 52–57 (2015). https://doi.org/10.1177/0963721414551363
27. Serrano, E., Iglesias, C.Á., Garijo, M.: A novel agent-based rumor spreading model in Twitter. In: WWW '15 Companion: Proceedings of the 24th International Conference on World Wide Web, pp. 811–814 (2015). https://doi.org/10.1145/2740908.2742466
28. Romenskyy, M., Spaiser, V., Ihle, T., Lobaskin, V.: Polarized Ukraine 2014: opinion and territorial split demonstrated with the bounded confidence XY model, parametrized by Twitter data. R. Soc. Open Sci. 5, 171935 (2018). https://doi.org/10.1098/rsos.171935
29. Gerlitz, C., Rieder, B.: Mining one percent of Twitter: collections, baselines, sampling. M/C J. 16(2) (2013)
30. Bruns, A., Stieglitz, S.: Twitter data: what do they represent? IT – Inf. Technol. 56(5), 240–245 (2014). https://doi.org/10.1515/itit-2014-1049
A Methodology to Develop Agent-Based Models for Policy Design in Socio-Technical Systems Based on Qualitative Inquiry
Vittorio Nespeca, Tina Comes, and Frances Brazier
Abstract Agent-based models (ABM) for policy design need to be grounded in empirical data. While many ABMs rely on quantitative data such as surveys, much empirical research in the social sciences is based on qualitative research methods such as interviews or observations that are hard to translate into a set of quantitative rules, leading to a gap in the phenomena that ABM can explain. As such, there is a lack of a clear methodology to systematically develop ABMs for policy design on the basis of qualitative empirical research. In this paper, a two-stage methodology is proposed that takes an exploratory approach to the development of ABMs in sociotechnical systems based on qualitative data. First, a conceptual framework centered on a particular policy design problem is developed based on empirical insights from one or more case studies. Second, the framework is used to guide the development of an ABM. This step is sensitive to the purpose of the model, which can be theoretical or empirical. The proposed methodology is illustrated by an application for disaster information management in Jakarta, resulting in an empirical descriptive ABM. Keywords Qualitative research · Agent-based modelling · Exploratory research · Disaster information management
1 Introduction

Agent-based models (ABM) are powerful tools to support policy design in socio-technical systems by explaining the collective consequences of individual choices and behaviour. ABM can be used for a range of applications, especially in policy design and analysis [10, 12]. However, designing ABM of socio-technical systems that reflect empirical evidence remains a challenge [25, 31]. Qualitative methods allow researchers to account for the contextual richness of case studies. This richness is especially important in socio-technical systems, characterized by a dynamic complexity that normally hinders understanding cause-effect relations
[31]. Qualitative data is typically in textual form, and is obtained from fieldwork, interviews, participant observations, or documents [31]. Translating this nuance-rich qualitative data into quantitative simulation code is not a straightforward task [19]. Bridging such a gap between qualitative data and ABM requires preserving the contextual richness of the data collected, avoiding distortions, providing transparency in the chain of evidence from data to model, and ensuring replicability [8, 31]. Several methodologies have been proposed to bridge this gap by (a) using previously developed frameworks and/or by (b) "constraining" the knowledge elicitation process through clear steps [8]. For instance, [17] shows the potential of using conceptual frameworks developed for institutional (re)design to support the design, implementation and analysis of ABM in socio-technical systems. Further, [16] provides an approach for structuring and interpreting qualitative data from ethnographic work on the basis of a previously developed framework (or metamodel). Conversely, [3] suggest a mixed-methods research methodology that puts emphasis on the steps adopted to extract and validate agent rules via a participatory and ethnographic process. The authors rely on an exploratory phase to design a context-specific game that captures the world views and decisions of the participants. The game is then used to extract agent rules. Despite these advances in the field, a methodology is missing that allows researchers to (i) explicitly design a conceptual framework centered on a particular policy design problem and (ii) use such a framework in the development, implementation and analysis of ABMs, thus contributing to the policy design process. In this paper, a methodology is presented addressing this gap. The methodology focuses on the development of ABMs for policy design in socio-technical systems based on qualitative methods and, more specifically, exploratory case-study research. It accounts for the contextual richness of a case study and is sensitive to the modelling purpose (theoretical or empirical).
2 Proposed Methodology The methodology introduced in this article involves two interlinked phases: framework and model development, see Fig. 1. In phase one, a framework is developed based on existing knowledge and one or more case studies. In phase two, a model is developed based on the framework, along with insights from literature and empirical data from the case studies.
2.1 Phase 1: Framework Development

In this phase, an exploratory approach is used to design a conceptual framework. The framework has to be designed from the perspective of a carefully-chosen unit of analysis. This unit refers to the micro-level entity that is going to be at the center of
Fig. 1 Methodology for developing ABMs grounded in qualitative research. The dashed line symbolizes optional activities
the ABM. Given the generative nature of ABM [11], it is crucial that the framework takes the perspective of the intended model's most elementary unit. Examples of units of analysis are a person, a household, or an organization. The framework provides the means to analyze (a) the system's configuration and change, and (b) the system's behaviour. To allow such analyses, the framework includes, respectively, (a) the system's characteristics and their attributes, together with the relationships among such characteristics, and (b) the desired system's behaviour expressed in terms of specific criteria for assessment. The analysis of the system's configuration is carried out by studying the characteristics and attributes, resulting in the units of analysis, objects they interact with, and other relevant entities that compose the considered system (including the environment) and their attributes. The analysis of the system's change is carried out by studying the relationships among characteristics, and it provides the activities and interactions across different units of analysis and the objects. The analysis of the system's behaviour consists in using criteria for assessment to analyze the extent to which the current system achieves the desired system's behaviour. Designing policies within the given system means altering the system's configuration to achieve the desired behaviour.

Framework Development Steps: Given that socio-technical systems are inherently complex, Brazier et al.'s approach for the design of complex systems is adopted for the framework development [5]. The framework is developed in four steps (adapted from [23]): exploratory literature review, requirements design, case study, framework design.

1. Exploratory literature review: The researchers explore the literature and existing models (including ABMs) related to the type of system in question in order to identify (a) the type of problem to be addressed via policy design, (b) the unit of analysis for the given type of problem, and (c) a list of relevant system's features from the perspective of the unit of analysis.

2. Requirements design: Brazier et al.'s approach entails the design of the system's mission and of the associated functional, behavioural and structural requirements [5]. The mission of the system is its intended purpose. The functional requirements are the functions that the system has to fulfill in order to achieve the mission. Behavioural requirements define the desired system behaviour associated with the
fulfilment of the functional requirements, and the assessment criteria that can be used for measuring the extent to which the desired systems behaviour is achieved. Structural requirements are the components of the system, including those put in place in order to fulfill the behavioural requirements. Based on the problem identified from literature, the researchers set a system’s mission and design the preliminary functional and behavioural requirements (or desired behaviour), resulting in a preliminary list of criteria for assessment. The researchers also design the structural requirements based on the list of relevant system’s features from the perspective of the unit of analysis, resulting in a preliminary list of system’s characteristics, attributes and relationships. 3. Case study: In this step, the preliminary list of characteristics, attributes, relationships and criteria for assessment are verified and expanded based on a case study. First, the field study is designed. This includes the selection of a case study (e.g. [15]), data collection techniques (interviews and focus groups, participant observations and archival data), and sampling strategies (e.g. [22]) all of which are summarized in a data collection plan. The collected data is then analyzed through coding. The way such analysis is carried out depends on the type of data collection techniques chosen. However, in all cases the analysis begins with the preliminary characteristics, attributes, relationships and criteria for assessment. In the case of interviews, focus groups, and participant observations the collected data is analyzed with a hybrid deductive and inductive coding approach [13]. Initially, a coding schema is defined based on the preliminary characteristics, attributes, relationships and criteria for assessment from the previous step. More specifically, codes of the first level are defined as (a) the preliminary system characteristics and (b) the system’s behaviour. The codes of the second level for (a) are defined as the attributes and relationships, whereas the codes of the second level for (b) are defined as the criteria for assessment. During the coding process, not only instances of the pre-defined codes are found, but also an open (inductive) coding approach is adopted to find new system characteristics, attributes, relationships, desired system’s behaviour and criteria for assessment. In the case of archival data or documents, the summative content analysis approach is adopted [18]. This approach is divided in two levels, namely manifest and latent. The manifest level entails finding in the archival data occurrences of the codes associated with the preliminary characteristics, attributes, relationships and criteria for assessment. At this stage, new characteristics, attributes, relationships and criteria for assessment may also be found through open coding. Next, the latent level focuses on analyzing the context in which the code occurrences were found to study and revise their meaning. In the process, further instances of the codes may be found, and also new codes may be introduced. Typically, an iterative process is required between the manifest and latent levels to determine how well the meaning extrapolated from given contexts fits that associated with the codes and solve potential conflicts. 4. Framework design: The design process of the framework is based on the system characteristics, attributes, relationships, and criteria for assessment from the previous step. 
Each system characteristic is considered as an independent framework component, with its own attributes and relationships. When the relationships found
between the system’s characteristics are vertical, such as those of the type “is a part of”, “can have one or more” or “contains”, then the corresponding characteristics are organized hierarchically, i.e. as a box in a box. Whereas, when the relationships among given characteristics are horizontal, such as those of the type “interacts with”, “causes”, “perform” and “affect” then these characteristics are organized as a box besides a box and linked with an arrow labeled with the corresponding relationship. The behavioural requirements are used to capture the systems behaviour through the criteria for assessment.
2.2 Phase 2: Model Development

Previous work suggests that it is good practice to set a clear modelling purpose from the early stages of model design, as the way a model is developed, justified and also scrutinized by the scientific community depends on its purpose [4, 10]. Therefore, the model development process suggested in this article takes different forms depending on the purpose of the model. More specifically, a distinction is drawn between models with an empirical or theoretical purpose (see Footnotes 1 and 2), affecting the way the framework is used in the development process.

Footnote 1: Empirical models have a direct relationship with a specific case study. Descriptions, explanations and predictions are examples of empirical modelling purposes. Theoretical models do not have a direct relationship with any case study. Illustrations and theoretical expositions are examples of theoretical modelling purposes [10].

Footnote 2: With this distinction, the authors do not imply that theoretical models cannot be used in practical settings. However, theoretical models can be applied in practice only if their micro assumptions and macro implications have been empirically tested [14].

Model Development Steps: Several methodologies have been proposed in the literature for the development of ABMs. In this article, the approach proposed in [24] is extended to include the use of the framework from the previous phase to guide the model development process. The resulting approach involves the following iterative model development steps: Problem Formulation, System Identification and Decomposition, Model Concept Formalization, Model Narrative Development, Software Implementation, and Model Evaluation. In the following sections, each step is described, stressing how the framework is used to guide model development for empirical and theoretical models.

1. Problem Formulation: The problem formulation entails decisions about (a) the modelling purpose and (b) the system behaviour of interest and the associated criteria for assessment (see Footnote 3).

Footnote 3: New criteria or more detailed criteria may be introduced at this stage compared to those presented in the framework.

In the case of empirical models, the choice of a modelling purpose and system behaviour of interest is guided by the results of the framework application to the case study and the resulting analysis of system configuration and change, and analysis of system's behaviour (see Sect. 2.1). Empirical models can be employed to provide
a description of the current configuration and dynamics of a given system on the basis of the analysis of system configuration and change. This can be a first step for the development of future models aimed at supporting policy design for the given case ([30] is an example of a description). In other cases, the analysis of system behaviour uncovers that the system performs poorly or particularly well in terms of specific criteria for assessment. As such, the researcher may decide to focus on providing explanations in terms of the mechanisms that led to such system behaviour (see for instance [1]). Finally, empirical models may be chosen with the purpose of exploring the implications of future policy interventions e.g. aimed at addressing the poor performance uncovered by the analysis of system behaviour [9]. In other cases, the researchers may wish to develop a theoretical model that abstracts from the context of the given case study to capture a range of systems [4]. Such models can for instance have the purpose of illustrating or exploring relationships between given system characteristics or policies and the resulting system behaviour, and producing hypotheses to be tested empirically [2]. In these cases, the researcher may choose a modelling purpose within the broader scope of the mission of the framework. The relevant system’s behaviour depends on the modelling purpose chosen. The framework, and more specifically its assessment criteria, can support the definition of relevant categories of systems behaviour. The researcher in this case has to decide which of such criteria are relevant for the specific modelling purpose. 2. System Identification and Decomposition: System identification involves defining the boundaries of the considered system. System decomposition consists in listing the instances of the units of analysis, their actions and interactions, objects they interact with and the environment they are in. In the case of empirical models, the boundaries of the system can be those of the case study to which the framework was applied. However, the researcher may decide to narrow the considered system down to a specific area. The system decomposition is derived from the analysis of configuration and change obtained through the framework application (see framework application in Sect. 2.1). With regards to theoretical models, the system identification and decomposition is meant to capture an abstract system, rather than a specific case study. The framework can support system decomposition by providing an inventory of system characteristics, attributes and relationships of which the researcher may decide to introduce instances in the considered abstract system. Previously existing models and ontologies can be used for the same purpose. 3. Concept Formalization: In this step, the system identification and decomposition is formalized in a format that can be translated into software. All the entities that will become agents are organized hierarchically from more general classes to those representing the agents actually considered in the model. A concept formalization can be implemented directly as software data structure or as an ontology (which is then translated into a software data structure). 4. Developing a Model Narrative: At this stage, all activities the agents carry out including their interactions with other agents and with the environment are organized into a narrative. One way of proceeding is to develop a narrative starting from general
classes that capture similar activities for many agents and then extend the general narrative to include the details for each type of agent. The detailed narrative is then formalized as pseudo code.

5. Software Implementation: In this step, the model conceptualization and narrative are implemented in an adequate modelling environment such as NetLogo, Repast Simphony or GAMA.

6. Model Evaluation: Model evaluation is an activity that occurs throughout the development of a model. Evaluation can take different forms, including verification and validation. Verification focuses on assessing whether the model corresponds to the intentions of the modeller. Validation is concerned with evaluating if the model corresponds to the reality it aims to capture [24]. Depending on the modelling purpose, validation and verification assume a different relative importance [10]. Theoretical models are not directly connected to a particular case study. As such, there is a stronger focus on verification rather than validation. Conversely, empirical models aim to capture a given case study, and therefore typically require a stronger emphasis on validation. Descriptive empirical models do not aim to reproduce the system behaviour but only to combine knowledge gathered through the case study with previously available knowledge and models. Therefore, such models require solely a validation in terms of their model conceptualization and narrative. Other empirical models that aim at reproducing the system behaviour need to be validated not only in terms of the model conceptualization and narrative, but also with regard to their ability to reproduce system behaviour. In such cases, the results of the analysis of system behaviour from the framework application (see Sect. 2.1) can be used here as the output to be matched by the model.
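As an illustration of step 4 (formalizing the model narrative before implementation), a minimal generic sketch is shown below. The activity names, the environment stub and the example actor type are illustrative assumptions, and an actual implementation would normally live in the chosen modelling environment (e.g. NetLogo) rather than plain Python.

```python
class Environment:
    """Minimal stand-in for the simulated environment (e.g. a sequence of shocks)."""
    def __init__(self, shocks):
        self.shocks = list(shocks)
        self.current = None

    def update(self):
        self.current = self.shocks.pop(0) if self.shocks else None


class Actor:
    """General narrative shared by all actor types: sense, decide, act."""
    def __init__(self, name, environment):
        self.name = name
        self.environment = environment
        self.inbox = []
        self.log = []

    def step(self):
        observation = {"messages": list(self.inbox), "shock": self.environment.current}
        decision = self.decide(observation)
        self.act(decision)
        self.inbox.clear()

    def decide(self, observation):
        raise NotImplementedError  # specified per actor type (the "swim lane")

    def act(self, decision):
        raise NotImplementedError


class CommunityMember(Actor):
    """Example sub-narrative: record any shock that is observed."""
    def decide(self, observation):
        return observation["shock"]

    def act(self, decision):
        if decision is not None:
            self.log.append(decision)


def run(actors, environment, n_ticks):
    for _ in range(n_ticks):
        environment.update()
        for actor in actors:
            actor.step()

env = Environment(shocks=["flood warning", "road blocked"])
member = CommunityMember("resident", env)
run([member], env, n_ticks=3)
print(member.log)  # ['flood warning', 'road blocked']
```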
2.3 Iterative Model and Framework Development Due to the exploratory nature of the proposed methodology, the model development phase is likely to produce new knowledge regarding relevant systems characteristics, attributes, relationships or system behaviour that are not included in the framework designed in phase one. As such, this knowledge can be incorporated back in the framework for future use.
3 Methodology Application: A Case Study on Disaster Information Management in Jakarta In this section, a case study of disaster Information Management (IM) in Jakarta, Indonesia is used to illustrate the use of the methodology. The following sections provide information on the case study, show how a framework was designed based on
the case study (phase 1), and how this framework was used to develop an empirical model (phase 2).
3.1 Case Study When disasters such as floods and storms hit, both formal and informal organizations and communities need to adapt to the ever-changing and often unexpected conditions [7]. Their ability to self-organize, coordinate and respond to the situation strongly relies on the timeliness and quality of the information available [23, 29]. At the same time, with the dynamically evolving situation, the roles and information needs of the actors continually change [20, 27]. Designing for coordination and self-organization in disasters thus mandates the design of strong IM policies to ensure that information of good quality reaches the actors who need it when they need it. Such policies need to take into account the socio-technical nature of disaster response systems, as the way information is collectively managed often depends on the interplay between human behaviour and the use of technology (e.g. mobiles and social media) [26]. The authors in this case were specifically interested in the design of bottom-up IM policies. Jakarta represents a critical case of bottom-up disaster IM as (a) it is affected by very frequent flooding and (b) because of such floods, many bottom-up IM initiatives have been initiated in the city in the recent years, often aided by social media and messaging apps [28]. Another reason for choosing Jakarta was that at the time of data collection (in 2018) many international organizations were in the city due to the humanitarian response to the Sulawesi Earthquake. This provided the opportunity to interview their representatives in person. The data collection plan was designed on the basis of an exploratory interview carried out before visiting the field. However, further participants were found through snowballing during the data collection. In total, 9 semi-structured interviews and 3 focus groups were carried out in the field. Altogether, 25 participants were involved in the data collection, ranging from the information managers of national and international governmental and nongovernmental organizations, to the members of highly affected communities in the city.4 More information on the case study, including data collection and analysis can be found in [23].
3.2 Phase 1: Framework Development Firstly, a review of the relevant literature and existing ABMs on disaster IM was carried out. This led to the identification of the policy design problem as the design of IM policies that can support both coordination and self-organization in disaster
Footnote 4: Two of the most affected communities in the city were considered: Marunda and Kampung Melayu.
Fig. 2 Framework, adapted from [23]
response by satisfying the continually shifting information needs of individual actors. As such, the unit of analysis chosen for the framework is that of an individual person (or actor). A list of relevant system's features was also derived from the current literature and existing ABMs (see Footnote 5). Secondly, based on these results, the system's mission was identified as: "to provide relevant, reliable and verifiable information to the actors who need it, when they need it in an accessible manner". Together with the mission, the functional, behavioural and structural requirements were also designed. Thirdly, these requirements were validated and expanded with a case study, leading to the refinement of some of the behavioural requirements (relevance and timeliness). Next, based on the requirements, a framework was designed with the twofold purpose of (a) providing the means to analyze the current practice of disaster IM in a case study, including the way such practice changes through self-organized bottom-up processes (analysis of system's configuration and change), and (b) analyzing the extent to which the current practice supports coordination and self-organization (analysis of system's behaviour). The criteria for the assessment of the extent to which the system's mission and desired system behaviour are achieved are information relevance, timeliness, accessibility, reliability, verifiability and load. Figure 2 shows the resulting framework. The full details of the framework development process are in [23].
3.3 Phase 2: Model Development In this phase, the framework was used to develop an empirical model. The following sections explain the model development process in detail. 1. Problem Formulation: A model with a descriptive purpose was chosen to capture some of the main characteristics and dynamics of the current practice of disaster IM in Marunda. This model is the first step in building a simulation environment that 5
Footnote 5: For instance, the conceptualization of the environment in a crisis as a series of cascading shocks producing information needs was introduced as in [21].
Fig. 3 System Identification and Decomposition: configuration of the current practice of Disaster IM in Marunda, Jakarta. Adapted from [23]
will be used to explain the current behaviour of the system given its current practice (possibly informing policy design) and to evaluate the impact of different disaster IM policies on the system behaviour. In terms of relevant system's behaviour, information relevance and timeliness were chosen as the assessment criteria to be captured in the model output, based on the results of the analysis of system's behaviour [23].

2. System Identification and Decomposition: The purpose of the model is to capture the dynamics within the community, and its interactions with other relevant actors. As such, the system boundary includes the Marunda community, as well as the governmental and non-governmental organizations and groups that (may) exchange information with the community. The system decomposition was carried out by applying the framework designed in the previous phase, and more specifically by carrying out the analysis of system's configuration and change (cf. Sect. 2.1). This meant identifying the key actors and roles they assume, groups they belong to, structures and networks through which they share information, activities they carry out, and the environmental factors that play a role in IM. During the data analysis, one system characteristic was found that had not been included in the framework, namely that of objects. Objects are any non-human entities that can support IM and coordination activities of the actors (cf. Fig. 2). Figure 3 shows the resulting system identification and decomposition for the Marunda community.

3. Concept Formalization: Based on the system decomposition, a list of the relevant entities was developed, including their properties, states, and activities. When such entities had common states, properties or activities and could therefore be seen as instances of a more abstract entity, a new entity was introduced. This led to the definition of abstract entities such as "Actors" or "Objects". Figure 4 shows the resulting conceptualization, including both the general entities and their instances for the specific case of Marunda.

4. Developing a Model Narrative: Starting from the general conceptualisation from the previous step, a narrative was developed by organizing the activities carried out by the general "Actor" into a sequence of actions (see Fig. 4), resulting in the
Fig. 4 Concept Formalization: UML description of entities, their properties, states and activities
Fig. 5 Developing a Model Narrative: narrative of disaster IM for the general actor (first three rows) and its instances (following rows)
first three rows of the flow chart shown in Fig. 5. Next, specific sub-narratives were developed for each of the instances of “Actor” given their specific activities and interactions. Such sub-narratives were then introduced in the general actor’s one, resulting in the “swim lanes” diagram shown in Fig. 5. The same process was carried out for “Object”.6 5. Software Implementation: Implemented in NetLogo 6.1.1. 6. Model Evaluation: The empirical model was verified thoroughly through single agent, interaction and multi-agent testing as suggested in [24]. A validation was not performed at this stage and will be carried out as a future step. Given its descriptive purpose, the model developed is not intended to reproduce precisely the system behaviour observed in reality. The goal is rather to formalize and combine the knowledge gathered through an exploratory study for the Marunda community together with previously available theory and models on crisis IM. As such, the validation in this case will not focus on assessing whether the model reproduces the behaviour observed at the macro level in the Marunda community via the analysis of system’s behaviour (see [23]). It aims at ensuring that the model conceptualization 6
Footnote 6: Omitted in this article for brevity.
and narrative match the way information is managed at the micro level in the case study. This will be achieved by discussing the model conceptualization and narrative with the members of the Marunda community and other organizations captured in the model.
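The verification mentioned under step 6 (single-agent, interaction and multi-agent testing) can be illustrated with a toy example. The actual model is implemented in NetLogo, so the Python stub below is only a sketch of the three testing levels, built around a made-up message-forwarding rule rather than the real model logic.

```python
import unittest

class ForwardingActor:
    """Stub actor used only to illustrate the testing levels; it forwards any
    newly received message to its neighbours exactly once."""
    def __init__(self, name):
        self.name = name
        self.neighbours = []
        self.received = []

    def receive(self, message):
        if message not in self.received:
            self.received.append(message)
            for n in self.neighbours:
                n.receive(message)

class VerificationExamples(unittest.TestCase):
    def test_single_agent(self):
        a = ForwardingActor("a")
        a.receive("flood warning")
        self.assertEqual(a.received, ["flood warning"])

    def test_interaction(self):
        a, b = ForwardingActor("a"), ForwardingActor("b")
        a.neighbours = [b]
        a.receive("flood warning")
        self.assertIn("flood warning", b.received)

    def test_multi_agent(self):
        actors = [ForwardingActor(str(i)) for i in range(5)]
        for x, y in zip(actors, actors[1:]):
            x.neighbours.append(y)
        actors[0].receive("flood warning")
        self.assertTrue(all("flood warning" in a.received for a in actors))

if __name__ == "__main__":
    unittest.main()
```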
3.4 Iterative Framework Development In the model development phase a new system characteristic was found, namely that of “Objects”, connecting groups and enabling certain activities. Objects and their relationships with the other system’s characteristics were integrated in the framework. The resulting framework is shown in Fig. 6 and it can be used in substitution of the one developed in [23] (Fig. 2).
4 Discussion

This article fills a gap in the literature by proposing a methodology for the development of ABMs for policy design in socio-technical systems. It allows researchers to capture the contextual richness of a case study based on qualitative data and exploratory research and to translate it into an ABM. The methodology builds on [24] to include the design and use of frameworks in the development of ABMs with different modelling purposes [10], and it is structured in two phases. In the first phase, a conceptual framework centred on a specific policy problem is designed based on a case study. Such a framework includes the system's mission, its relevant characteristics and relationships, and the criteria for assessment (or indicators of system behaviour) that can capture the extent to which the mission is achieved by a given policy. In the second phase, the framework is used to guide the development of an ABM. The way the framework is used depends on whether the model is directly related to a case
Fig. 6 Updated framework with the new systems characteristics and relationships (highlighted in the figure)
study (empirical) or not (theoretical). In the case of empirical models, the framework is applied to a case study to analyze the system configuration and change, and the system's behaviour. Based on these analyses, a context-specific model is developed. In the case of theoretical models, the framework provides an inventory of (a) system characteristics, attributes and relationships of which the researchers may decide to introduce instances in the considered abstract system and (b) a list of criteria for assessment as indicators of the system's behaviour that the researchers may wish to study.

This methodology was illustrated through a case study of disaster IM in Jakarta. A conceptual framework centered on the design of disaster IM policies was designed based on the available literature and case study interviews. The framework enabled the analysis of (a) system's configuration and change, and (b) system's behaviour. These analyses were instrumental in the development of a descriptive ABM capturing the current practice of disaster IM in the Marunda community. During model development, new system characteristics and relationships were found. These were integrated in the framework for future use. The case study showed how the methodology allows researchers to collect context-rich qualitative data and translate it into an ABM through a systematic and transparent process. The developed model has a descriptive purpose and, as such, it is only meant to capture and formalize the knowledge gathered on the current practice of disaster IM in the case study. While such a model does not aim to reproduce precisely the behaviour of the considered system, it paves the way for the development of further models whose purpose could be (a) investigating explanations for the current behaviour of the system (possibly providing suggestions for policy design) and (b) exploring the impact of IM policies [10].

A key advantage of this methodology is that it centers the process of framework and ABM development on a particular policy problem. Specifically, the framework provides a common mission and criteria for the assessment of policy "performance", so that different socio-technical simulation studies focusing on the same policy design problem can be compared and built upon incrementally. Further, the process of designing policies supported by ABM may involve the development and use of a series of models with different purposes [10]. For instance, in this study an empirical ABM with a descriptive purpose is developed, based on which other empirical models, e.g. with explanatory or exploratory purposes, could be developed to support policy design. Theoretical models may also be needed, e.g. with the purpose of illustrating the implications of given policy designs in an abstract system prior to testing them empirically. As such, another advantage of the proposed methodology is its versatility in the development of ABMs for policy design with different (theoretical or empirical) purposes.

Despite these advantages, the methodology presents limitations that provide grounds for further research. Specifically, while the framework enables the analysis of a system and its decomposition, it is not meant to provide the level of detail required to capture the agents' internal processes. As such, the design of internal processes rests upon the model designers. While the authors believe that ABM design requires the skill and experience of the model developer, at least one direction to bring further
structure and rigour to the development process is envisioned. Generic models such as the Generic Agent Model and its applications [6] can guide the design of the agents’ internal processes as they provide an abstract and formalized understanding of the tasks that (can) occur within the agent. Such generic models could be integrated into the methodology to (a) guide the translation of the system decomposition into a model conceptualization and narrative and (b) aid the implementation into code.
5 Conclusions

Agent-based models for policy support in socio-technical systems need to be empirically grounded. Qualitative research methodologies allow researchers to gather empirical evidence capturing the contextual richness of a case study. However, translating such evidence into an ABM for policy design requires a systematic and transparent approach. This article introduced a methodology addressing this gap in two phases. Firstly, a conceptual framework is designed, tailored to a specific policy design problem. Secondly, an ABM for policy design is developed through the framework. The use of the methodology was shown with a case study of disaster information management in Jakarta, Indonesia. As a result, a descriptive model of the current practice of disaster information management in the case study was developed. This is a first step in the development of further models that can support the design of information management policies in the case study. The proposed methodology has the advantage of centering ABM development on a common framework, thus allowing for comparability and incremental design across different studies focusing on the same policy problem. Further, the methodology is versatile, as it enables the development of models with different theoretical and empirical purposes to support the policy design process. Future research will focus on integrating the use of Generic Models in the methodology to improve its rigour by guiding the translation of a system decomposition into a model conceptualization and narrative.
References
1. Adam, C., Gaudou, B.: Modelling human behaviours in disasters from interviews: application to Melbourne bushfires. JASSS 20(3), 12 (2017)
2. Altay, N., Pal, R.: Information diffusion among agents: implications for humanitarian operations. POMs 23(6), 1015–1027 (2013)
3. Bharwani, S., Coll Besa, M., Taylor, R., Fischer, M., Devisscher, T., Kenfack, C.: Identifying salient drivers of livelihood decision-making in the forest communities of Cameroon: adding value to social simulation models. JASSS 18(1), 3 (2015)
4. Boero, R., Squazzoni, F.: Does empirical embeddedness matter? Methodological issues on agent-based models for analytical social science. JASSS 8(4) (2005)
5. Brazier, F., Langen, P.v., Lukosch, S., Vingerhoeds, R.: Complex systems: design, engineering, governance. In: Projects and People: Mastering Success. NAP - Process Industry Network, pp. 35–60 (2018)
6. Brazier, F.M., Jonker, C.M., Treur, J.: Compositional design and reuse of a generic agent model. Appl. Artif. Intell. 14(5), 491–538 (2000)
7. Comes, T., Van de Walle, B., Van Wassenhove, L.: The coordination-information bubble in humanitarian response: theoretical foundations and empirical investigations. POMs 29(11), 2484–2507 (2020)
8. Edmonds, B.: Using qualitative evidence to inform the specification of agent-based models. JASSS 18(1), 18 (2015)
9. Edmonds, B., ní Aodha, L.: Using agent-based modelling to inform policy – what could possibly go wrong? In: Davidsson, P., Verhagen, H. (eds.) Multi-Agent-Based Simulation XIX, Lecture Notes in Computer Science, pp. 1–16. Springer International Publishing, Cham (2019)
10. Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H., Squazzoni, F.: Different modelling purposes. JASSS 22(3), 6 (2019)
11. Epstein, J.M.: Agent-based computational models and generative social science. Complexity 4(5), 41–60 (1999)
12. Epstein, J.M.: Why model? JASSS 11(4), 12 (2008)
13. Fereday, J., Muir-Cochrane, E.: Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. Int. J. Qual. Methods 5(1), 80–92 (2006)
14. Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., Lorenz, J.: Models of social influence: towards the next frontiers. JASSS 20(4) (2017)
15. Flyvbjerg, B.: Five misunderstandings about case-study research. Qual. Inq. 12(2), 219–245 (2006)
16. Ghorbani, A., Dijkema, G., Schrauwen, N.: Structuring qualitative data for agent-based modelling. JASSS 18(1), 2 (2015)
17. Ghorbani, A., Ligtvoet, A., Nikolic, I., Dijkema, G.: Using institutional frameworks to conceptualize agent-based models of socio-technical systems. In: Proceedings of the 2010 Workshop on Complex System Modeling and Simulation, vol. 3
18. Hsieh, H.F., Shannon, S.E.: Three approaches to qualitative content analysis. Qual. Health Res. 15(9), 1277–1288 (2005)
19. Janssen, M.A., Ostrom, E.: Empirically based, agent-based models. Ecol. Soc. 11(2) (2006)
20. Meesters, K., Nespeca, V., Comes, T.: Designing disaster information management systems 2.0: connecting communities and responders. In: ISCRAM (2019)
21. Meijering, J.: Information diffusion in complex emergencies: a model-based evaluation of information sharing strategies (2019)
22. Miles, M.B., Huberman, A.M.: Qualitative Data Analysis: An Expanded Sourcebook. Sage (1994)
23. Nespeca, V., Comes, T., Meesters, K., Brazier, F.: Towards coordinated self-organization: an actor-centered framework for the design of disaster management information systems. IJDRR, 101887 (2020)
24. Nikolic, I., Ghorbani, A.: A method for developing agent-based models of socio-technical systems. In: 2011 ICNSC (2011)
25. Squazzoni, F., Polhill, J.G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, E., Borit, M., Verhagen, H., Giardini, F., Gilbert, N.: Computational models that matter during a global pandemic outbreak: a call to action. JASSS 23(2), 10 (2020)
26. Starbird, K., Palen, L.: "Voluntweeters": self-organizing by digital volunteers in times of crisis. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1071–1080 (2011)
27. Turoff, M., Chumer, M., de Walle, B.V., Yao, X.: The design of a dynamic emergency response management information system (DERMIS). JITTA 5(4), 3 (2004)
28. van Voorst, R.: Formal and informal flood governance in Jakarta, Indonesia. Habitat Int. 52, 5–10 (2016)
29. Van de Walle, B., Comes, T.: On the nature of information management in complex and natural disasters. Procedia Eng. 107, 403–411 (2015)
30. Watts, J., Morss, R.E., Barton, C.M., Demuth, J.L.: Conceptualizing and implementing an agent-based model of information flow and decision making during hurricane threats. Environ. Model. Softw. 122, 104524 (2019)
31. Yang, L., Gilbert, N.: Getting away from numbers: using qualitative observation for agent-based modeling. Adv. Complex Syst. 11(02), 175–185 (2008)
Author Index
A Ahrweiler, Petra, 203 Alexandrov, Nik, 93 Angourakis, Andreas, 191 Antosz, Patrycja, 439
B Ballestrazzi, Francesco, 275 Bancel, Nina, 79 Batzke, Marlene C. L., 41 Ben-Elia, Eran, 175 Binder, Claudia R., 275 Bithell, Mike, 343, 367 Blanco-Fernández, Darío, 131 Bonnet, Marie-Paule, 79 Borit, Melania, 191 Bourgeois-Gironde, Sacha, 301 Brazier, Frances, 453 Braz, Luiz Fernando, 329
C Chapuis, Kevin, 79 Chattoe-Brown, Edmund, 367 Ciribini, Angelo L. C., 217 Cliff, Dave, 93 Comai, Sara, 217 Comes, Tina, 453 Couture, Stéphane, 423 Czupryna, Marcin, 301
D da Hora, Neriane, 79 Dametto, Diego, 65
Dignum, Frank, 409 Dolezal, Martin, 261
E Edmonds, Bruce, 367 Engel, Dominik, 315 Ernst, Andreas, 41 Evangelista-Vale, Jôine Cariele, 79
F Fölsch, Marco, 261 Figuero, Charlie, 93 Filatova, Tatiana, 145 Fouladvand, Javanshir, 393
G Geva, Sharon, 175 Ghorbani, Amineh, 393 Graham, Shawn, 191 Gruca, Jan, 203
H Heinisch, Reinhard, 261 Herget, Frederick, 203 Higi, Leonard, 65 Hofstede, Gert Jan, 3 Hunter, Elizabeth, 379
J Jensen, Maarten, 409
K Kandaswamy, Subu, 247 Kelleher, John D., 379 Klaperski, Daniel, 65 Kleppmann, Benedikt, 203 Kramer, Mark R., 3 Kurahashi, Setsuya, 15 L Lamperti, Francesco, 145 Lane, Justin E., 439 Leitner, Stephan, 119, 131 Le Page, Christophe, 79
Reinwald, Patrick, 119 Roventini, Andrea, 145
S Sadou, Loic, 423 Schröder, Tobias, 65 Schwarzer, Judith, 315 Shin, Hyesop, 343 Shults, F. LeRon, 233, 289, 439 Sichman, Jaime Simão, 329 Simeone, Davide, 217 Stevenson, John C., 161 Szczepanska, Timo, 191
M McGarry, Bryony L., 379 Meegada, Sri Sailesh, 247 Melo, Gustavo, 79 Meyer, Ruth, 261 Michel, Antje, 65 Michelini, Gabriela, 65 Muñoz, David Cristobal, 3 Muñoz, Ivet Andès, 3 Mueller, Georg P., 355 Mwanjesa, Albert, 29
T Taberna, Alessandro, 145 Taillandier, Patrick, 423 Tauch, Anne, 65 Templeton, Anne, 53 Thomopoulos, Rallou, 423
N Nagai, Hideyuki, 15 Nelissen, Rob M. A., 3 Nespeca, Vittorio, 453 Neumann, Martin, 203 Nikolic, Igor, 393 Normann, Roger, 439
V Vanhée, Loïs, 409 Ventura, Silvia Mastrolembo, 217 Verhagen, Harko, 409 Verkerk, Deline, 393
P Pagani, Anna, 275 Popiolek, Roy, 65 Prinz, Andreas, 289 Puga-Gonzalez, Ivan, 233, 439 R Rausch, Alexandra, 131
U Ulusoy, Onuralp, 29
W Wall, Friederike, 105, 119 Wijermans, Nanda, 53
X Xanthopoulou, Themis Dimitra, 289
Y Yolum, Pınar, 29