Studies in Computational Intelligence 1144
Studies in Computational Intelligence, Volume 1144
Series Editor: Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
The series “Studies in Computational Intelligence” (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.
Hocine Cherifi · Luis M. Rocha · Chantal Cherifi · Murat Donduran (Editors)

Complex Networks & Their Applications XII

Proceedings of The Twelfth International Conference on Complex Networks and their Applications: COMPLEX NETWORKS 2023, Volume 4
Editors
Hocine Cherifi, University of Burgundy, Dijon Cedex, France
Luis M. Rocha, Thomas J. Watson College of Engineering and Applied Science, Binghamton University, Binghamton, NY, USA
Chantal Cherifi, IUT Lumière - Université Lyon 2, University of Lyon, Bron, France
Murat Donduran, Department of Economics, Yildiz Technical University, Istanbul, Türkiye
ISSN 1860-949X ISSN 1860-9503 (electronic) Studies in Computational Intelligence ISBN 978-3-031-53502-4 ISBN 978-3-031-53503-1 (eBook) https://doi.org/10.1007/978-3-031-53503-1 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.
Preface
Dear Colleagues, Participants, and Readers,

We present the 12th Complex Networks Conference proceedings with great pleasure and enthusiasm. Like its predecessors, this edition attests to the ever-growing significance and interdisciplinary nature of complex network research. As we navigate the intricate web of connections that define our world, understanding complex systems, their emergent properties, and the underlying structures that govern them has become increasingly crucial. The Complex Networks Conference has established itself as a pivotal platform for researchers, scholars, and experts from various fields to converge, exchange ideas, and push the boundaries of knowledge in this captivating domain. Over the past twelve years, we have witnessed remarkable progress, breakthroughs, and paradigm shifts highlighting the dynamic and complex tapestry of networks surrounding us, from biological systems and social interactions to technological infrastructures and economic networks.

This year's conference brought together an exceptional cohort of experts, including our keynote speakers:

• Michael Bronstein, University of Oxford, UK, enlightened us on "Physics-inspired Graph Neural Networks"
• Kathleen Carley, Carnegie Mellon University, USA, explored "Coupling in High Dimensional Networks"
• Manlio De Domenico, University of Padua, Italy, introduced "An Emerging Framework for the Functional Analysis of Complex Interconnected Systems"
• Danai Koutra, University of Michigan, USA, shared insights on "Advances in Graph Neural Networks: Heterophily and Beyond"
• Romualdo Pastor-Satorras, UPC, Spain, discussed "Opinion Depolarization in Interdependent Topics and the Effects of Heterogeneous Social Interactions"
• Tao Zhou, USTC, China, engaged us in "Recent Debates in Link Prediction"

These renowned experts addressed a spectrum of critical topics and the latest methodological advances, underscoring the continued expansion of this field into ever more domains. We were also fortunate to benefit from the expertise of our tutorial speakers on November 27, 2023:

• Tiago de Paula Peixoto, CEU Vienna, Austria, guided "Network Inference and Reconstruction"
• Maria Liakata, Queen Mary University of London, UK, led us through "Longitudinal language processing from user-generated content"

We want to express our deepest gratitude to all the authors, presenters, reviewers, and attendees who have dedicated their time, expertise, and enthusiasm to make this event successful. The peer-review process, a cornerstone of scientific quality, ensures
that the papers in these proceedings have undergone rigorous evaluation, resulting in high-quality contributions. We encourage you to explore the rich tapestry of knowledge and ideas collected in these four proceedings volumes. The papers presented here represent not only the diverse areas of research but also the collaborative and interdisciplinary spirit that defines the complex networks community. In closing, we extend our heartfelt thanks to the organizing committees and volunteers who have worked tirelessly to make this conference a reality. We hope these proceedings inspire future research, innovation, and collaboration, ultimately helping us better understand the world's networks and their profound impacts on science, technology, and society. We hope the pleasure you take in reading these papers matches our enthusiasm for organizing the conference and assembling this collection of articles.

Hocine Cherifi
Luis M. Rocha
Chantal Cherifi
Murat Donduran
Organization and Committees
General Chairs
Hocine Cherifi, University of Burgundy, France
Luis M. Rocha, Binghamton University, USA

Advisory Board
Jon Crowcroft, University of Cambridge, UK
Raissa D'Souza, Univ. of California, Davis, USA
Eugene Stanley, Boston University, USA
Ben Y. Zhao, University of Chicago, USA

Program Chairs
Chantal Cherifi, University of Lyon, France
Murat Donduran, Yildiz Technical University, Turkey

Lightning Chairs
Konstantin Avrachenkov, Inria Université Côte d'Azur, France
Mathieu Desroches, Inria Université Côte d'Azur, France
Huijuan Wang, TU Delft, Netherlands

Poster Chairs
Christophe Crespelle, Université Côte d'Azur, France
Manuel Marques Pita, Universidade Lusófona, Portugal
Laura Ricci, University of Pisa, Italy
Special Issues Chair
Sabrina Gaito, University of Milan, Italy

Publicity Chairs
Fabian Braesemann, University of Oxford, UK
Zachary Neal, Michigan State University, USA
Xiangjie Kong, Dalian University of Technology, China

Tutorial Chairs
Luca Maria Aiello, Nokia-Bell Labs, UK
Leto Peel, Maastricht University, Netherlands

Social Media Chair
Brennan Klein, Northeastern University, USA

Sponsor Chairs
Roberto Interdonato, CIRAD - UMR TETIS, France
Christophe Cruz, University of Burgundy, France

Sustainability Chair
Madeleine Aurelle, City School International De Ferney-Voltaire, France

Local Committee Chair
Charlie Joyez, Université Côte d'Azur, France
Publication Chair
Matteo Zignani, University of Milan, Italy

Submission Chair
Cheick Ba, Queen Mary University of London, UK

Web Chairs
Stephany Rajeh, Sorbonne University, France
Alessia Galdeman, University of Milan, Italy
Program Committee
Jacobo Aguirre Luca Maria Aiello Esra Akbas Sinan G. Aksoy Mehmet Aktas Tatsuya Akutsu Reka Albert Alberto Aleta Claudio Altafini Viviana Amati Frederic Amblard Enrico Amico Yuri Antonacci Alberto Antonioni Nino Antulov-Fantulin Mehrnaz Anvari David Aparicio Nuno Araujo Panos Argyrakis Oriol Artime Malbor Asllani Tomaso Aste Martin Atzmueller Konstantin Avrachenkov
Centro de Astrobiología (CAB), Spain ITU Copenhagen, Denmark Georgia State University, USA Pacific Northwest National Laboratory, USA Georgia State University, USA Kyoto University, Japan Pennsylvania State University, USA University of Zaragoza, Spain Linkoping University, Sweden University of Milano-Bicocca, Unknown Université Toulouse 1 Capitole, IRIT, France EPFL, Switzerland University of Palermo, Italy Carlos III University of Madrid, Spain ETH Zurich, Switzerland Fraunhofer SCAI, Germany Zendesk, Portugal Univ. de Lisboa, Portugal Aristotle University of Thessaloniki, Greece University of Barcelona, Spain Florida State University, USA University College London, UK Osnabrück University & DFKI, Germany Inria Sophia-Antipolis, France
Giacomo Baggio Franco Bagnoli James Bagrow Yiguang Bai Sven Banisch Annalisa Barla Nikita Basov Anais Baudot Gareth J. Baxter Loredana Bellantuono Andras Benczur Rosa M. Benito Ginestra Bianconi Ofer Biham Romain Billot Livio Bioglio Hanjo D. Boekhout Anthony Bonato Anton Borg Cecile Bothorel Federico Botta Romain Bourqui Alexandre Bovet Dan Braha Ulrik Brandes Rion Brattig Correia Chico Camargo Gian Maria Campedelli M. Abdullah Canbaz Vincenza Carchiolo Dino Carpentras Giona Casiraghi Douglas Castilho Costanza Catalano Lucia Cavallaro Remy Cazabet Jianrui Chen Po-An Chen Xihui Chen Sang Chin Daniela Cialfi Giulio Cimini
University of Padova, Italy Università di Firenze, Italy University of Vermont, USA Xidian University, China Karlsruhe Institute of Technology, Germany Università degli Studi di Genova, Italy The University of Manchester, UK CNRS, AMU, France University of Aveiro, Portugal University of Bari Aldo Moro, Italy SZTAKI, Hungary Universidad Politécnica de Madrid, Spain Queen Mary University of London, UK The Hebrew University, Israel IMT Atlantique, France University of Turin, Italy Leiden University, Netherlands Toronto Metropolitan University, Canada Blekinge Institute of Technology, Sweden IMT Atlantique, France University of Exeter, UK University of Bordeaux, France University of Zurich, Switzerland New England Complex Systems Institute, USA ETH Zürich, Switzerland Instituto Gulbenkian de Ciência, Portugal University of Exeter, UK Fondazione Bruno Kessler, Italy University at Albany SUNY, USA DIEEI, Italy ETH Zürich, Switzerland ETH Zürich, Switzerland Federal Inst. of South of Minas Gerais, Brazil University of Florence, Italy Free University of Bozen/Bolzano, Italy University of Lyon, France Shaanxi Normal University, China National Yang Ming Chiao Tung Univ., Taiwan University of Luxembourg, Luxembourg Boston University, USA Institute for Complex Systems, Italy University of Rome Tor Vergata, Italy
Matteo Cinelli Salvatore Citraro Jonathan Clarke Richard Clegg Reuven Cohen Jean-Paul Comet Marco Coraggio Michele Coscia Christophe Crespelle Regino H. Criado Herrero Marcelo V. Cunha David Soriano-Paños Joern Davidsen Toby Davies Caterina De Bacco Pietro De Lellis Pasquale De Meo Domenico De Stefano Fabrizio De Vico Fallani Charo I. del Genio Robin Delabays Yong Deng Mathieu Desroches Carl P. Dettmann Zengru Di Riccardo Di Clemente Branco Di Fátima Alessandro Di Stefano Ming Dong Constantine Dovrolis Maximilien Dreveton Ahlem Drif Johan L. Dubbeldam Jordi Duch Cesar Ducruet Mohammed El Hassouni Frank Emmert-Streib Gunes Ercal Alejandro Espinosa-Rada Alexandre Evsukoff Mauro Faccin
Sapienza University of Rome, Italy University of Pisa, Italy Imperial College London, UK QMUL, UK Bar-Ilan University, Israel Université Côte d’Azur, France Scuola Superiore Meridionale, Italy ITU Copenhagen, Denmark Université Côte d’Azur, France Universidad Rey Juan Carlos, Spain Instituto Federal da Bahia, Brazil Instituto Gulbenkian de Ciência, Portugal University of Calgary, Canada University of Leeds, UK Max Planck Inst. for Intelligent Systems, Germany University of Naples Federico II, Italy University of Messina, Italy University of Trieste, Italy Inria-ICM, France Coventry University, UK HES-SO, Switzerland Univ. of Electronic Science and Tech., China Inria Centre at Université Côte d’Azur, France University of Bristol, UK Beijing Normal University, China Northeastern University London, UK University of Beira Interior (UBI), Portugal Teesside University, UK Central China Normal University, China Georgia Tech, USA EPFL, Switzerland University of Setif, Algeria Delft University of Technology, Netherlands Universitat Rovira i Virgili, Spain CNRS, France Mohammed V University in Rabat, Morocco Tampere University, Finland Southern Illinois University Edwardsville, USA ETH Zürich, Switzerland Universidade Federal do Rio de Janeiro, Brazil University of Bologna, Italy
Max Falkenberg Guilherme Ferraz de Arruda Andrea Flori Manuel Foerster Emma Fraxanet Morales Angelo Furno Sergio Gómez Sabrina Gaito José Manuel Galán Alessandro Galeazzi Lazaros K. Gallos Joao Gama Jianxi Gao David Garcia Floriana Gargiulo Michael T. Gastner Alexander Gates Alexandra M. Gerbasi Fakhteh Ghanbarnejad Cheol-Min Ghim Tommaso Gili Silvia Giordano Rosalba Giugno Kimberly Glass David Gleich Antonia Godoy Lorite Kwang-Il Goh Carlos Gracia Oscar M. Granados Michel Grossetti Guillaume Guerard Jean-Loup Guillaume Furkan Gursoy Philipp Hövel Meesoon Ha Bianca H. Habermann Chris Hankin Yukio Hayashi Marina Hennig
City University, UK CENTAI Institute, Italy Politecnico di Milano, Italy Bielefeld University, Germany Pompeu Fabra University, Spain LICIT-ECO7, France Universitat Rovira i Virgili, Spain Università degli Studi di Milano, Italy Universidad de Burgos, Spain Ca’ Foscari university of Venice, Italy Rutgers University, USA INESC TEC—LIAAD, Portugal Rensselaer Polytechnic Institute, USA University of Konstanz, Germany CNRS, France Singapore Institute of Technology, Singapore University of Virginia, USA Exeter Business School, UK Potsdam Inst. for Climate Impact Res., Germany Ulsan National Inst. of Science and Tech., South Korea IMT School for Advanced Studies Lucca, Italy Univ. of Applied Sciences of Southern Switzerland, Switzerland University of Verona, Italy Brigham and Women’s Hospital, USA Purdue University, USA UCL, UK Korea University, South Korea University of Zaragoza, Spain Universidad Jorge Tadeo Lozano, Colombia CNRS, France ESILV, France Université de la Rochelle, France Bogazici University, Turkey Saarland University, Germany Chosun University, South Korea AMU, CNRS, IBDM UMR 7288, France Imperial College London, UK JAIST, Japan Johannes Gutenberg University of Mainz, Germany
Takayuki Hiraoka Marion Hoffman Bernie Hogan Seok-Hee Hong Yujie Hu Flavio Iannelli Yuichi Ikeda Roberto Interdonato Antonio Iovanella Arkadiusz J˛edrzejewski Tao Jia Jiaojiao Jiang Di Jin Ivan Jokifá Charlie Joyez Bogumil Kami´nski Marton Karsai Eytan Katzav Mehmet Kaya Domokos Kelen Mohammad Khansari Jinseok Kim Pan-Jun Kim Maksim Kitsak Mikko Kivelä Brennan Klein Konstantin Klemm Xiangjie Kong Onerva Korhonen Miklós Krész Prosenjit Kundu Haewoon Kwak Richard La Josè Lages Renaud Lambiotte Aniello Lampo Jennifer Larson Paul J. Laurienti Anna T. Lawniczak Deok-Sun Lee Harlin Lee Juergen Lerner
Aalto University, Finland Institute for Advanced Study in Toulouse, France University of Oxford, UK University of Sydney, Australia University of Florida, USA UZH, Switzerland Kyoto University, Japan CIRAD, France Univ. degli Studi Internazionali di Roma, Italy CY Cergy Paris Université, France Southwest University, China UNSW Sydney, Australia University of Michigan, USA Technology University of Delft, Netherlands GREDEG, Université Côte d’Azur, France SGH Warsaw School of Economics, Poland Central European University, Austria Hebrew University of Jerusalem, Israel Firat University, Turkey SZTAKI, Hungary Sharif University of Technology, Iran University of Michigan, USA Hong Kong Baptist University, Hong Kong TU Delft, Netherlands Aalto University, Finland Northeastern University, UK IFISC (CSIC-UIB), Spain Zhejiang University of Technology, China University of Eastern Finland, Finland InnoRenew CoE, Slovenia DA-IICT, Gandhinagar, Gujarat, India Indiana University Bloomington, USA University of Maryland, USA Université de Franche-Comté, France University of Oxford, UK UC3M, Spain Vanderbilt University, USA Wake Forest, USA University of Guelph, Canada KIAS, South Korea Univ. of North Carolina at Chapel Hill, USA University of Konstanz, Germany
Lasse Leskelä Petri Leskinen Inmaculada Leyva Cong Li Longjie Li Ruiqi Li Xiangtao Li Hao Liao Fabrizio Lillo Giacomo Livan Giosue’ Lo Bosco Hao Long Juan Carlos Losada Laura Lotero Yang Lou Meilian Lu Maxime Lucas Lorenzo Lucchini Hanbaek Lyu Vince Lyzinski Morten Mørup Leonardo Maccari Matteo Magnani Maria Malek Giuseppe Mangioni Andrea Mannocci Rosario N. Mantegna Manuel Sebastian Mariani Radek Marik Daniele Marinazzo Andrea Marino Malvina Marku Antonio G. Marques Christoph Martin Samuel Martin-Gutierrez Cristina Masoller Rossana Mastrandrea John D. Matta Carolina Mattsson Fintan McGee Matus Medo
Aalto University, Finland Aalto University/SeCo, Finland Universidad Rey Juan Carlos, Spain Fudan University, China Lanzhou University, China Beijing Univ. of Chemical Technology, China Jilin University, China Shenzhen University, China Università di Bologna, Italy University of Pavia, Italy Università di Palermo, Italy Jiangxi Normal University, China Universidad Politécnica de Madrid, Spain Universidad Nacional de Colombia, Colombia National Yang Ming Chiao Tung Univ., Taiwan Beijing Univ. of Posts and Telecom., China CENTAI, Italy Bocconi University, Italy UW-Madison, USA University of Maryland, College Park, USA Technical University of Denmark, Denmark Ca’Foscari University of Venice, Italy Uppsala University, Sweden CY Cergy Paris University, France University of Catania, Italy CNR-ISTI, Italy University of Palermo, Italy University of Zurich, Switzerland CTU in Prague, Czech Republic Ghent University, Belgium University of Florence, Italy INSERM, CRCT, France King Juan Carlos University, Spain Hamburg University of Applied Sciences, Germany Complexity Science Hub Vienna, Austria Universitat Politecnica de Catalunya, Spain IMT School for Advanced Studies, Italy Southern Illinois Univ. Edwardsville, USA CENTAI Institute, Italy Luxembourg IST, Luxembourg University of Bern, Switzerland
Ronaldo Menezes Humphrey Mensah Anke Meyer-Baese Salvatore Micciche Letizia Milli Marija Mitrovic Andrzej Mizera Chiara Mocenni Roland Molontay Sifat Afroj Moon Alfredo Morales Andres Moreira Greg Morrison Igor Mozetic Sarah Muldoon Tsuyoshi Murata Jose Nacher Nishit Narang Filipi Nascimento Silva Muaz A. Niazi Peter Niemeyer Jordi Nin Rogier Noldus Masaki Ogura Andrea Omicini Gergely Palla Fragkiskos Papadopoulos Symeon Papadopoulos Alice Patania Leto Peel Hernane B. B. Pereira Josep Perelló Anthony Perez Juergen Pfeffer Carlo Piccardi Pietro Hiram Guzzi Yoann Pigné Bruno Pinaud Flavio L. Pinheiro Manuel Pita Clara Pizzuti
University of Exeter, UK Epsilon Data Management, LLC, USA Florida State University, USA UNIPA DiFC, Italy University of Pisa, Italy Institute of Physics Belgrade, Serbia University of Warsaw, Poland University of Siena, Italy Budapest UTE, Hungary University of Virginia, USA MIT, USA UTFSM, Chile University of Houston, USA Jozef Stefan Institute, Slovenia State University of New York, Buffalo, USA Tokyo Institute of Technology, Japan Toho University, Japan NIT Delhi, India Indiana University, USA National Univ. of Science & Technology, Pakistan Leuphana University Lueneburg, Germany ESADE, Universitat Ramon Llull, Spain Ericsson, Netherlands Osaka University, Japan Università di Bologna, Italy Eötvös University, Hungary Cyprus University of Technology, Cyprus Centre for Research & Technology, Greece University of Vermont, USA Maastricht University, Netherlands Senai Cimatec, Brazil Universitat de Barcelona, Spain Université d’Orléans, France Technical University of Munich, Germany Politecnico di Milano, Italy Univ. Magna Gracia of Catanzaro, Italy Université Le Havre Normandie, France University of Bordeaux, France Universidade Nova de Lisboa, Portugal Universidade Lusófona, Portugal CNR-ICAR, Italy
Jan Platos Pawel Pralat Rafael Prieto-Curiel Daniele Proverbio Giulia Pullano Rami Puzis Christian Quadri Hamid R. Rabiee Filippo Radicchi Giancarlo Ragozini Juste Raimbault Sarah Rajtmajer Gesine D. Reinert Élisabeth Remy Xiao-Long Ren Laura Ricci Albano Rikani Luis M. Rocha Luis E. C. Rocha Fernando E. Rosas Giulio Rossetti Camille Roth Celine Rozenblat Giancarlo Ruffo Arnaud Sallaberry Hillel Sanhedrai Iraj Saniee Antonio Scala Michael T. Schaub Irene Sendiña-Nadal Mattia Sensi Ke-ke Shang Julian Sienkiewicz Per Sebastian Skardal Fiona Skerman Oskar Skibski Keith M. Smith Igor Smolyarenko Zbigniew Smoreda Annalisa Socievole Igor M. Sokolov
VSB - Technical University of Ostrava, Czech Republic Toronto Metropolitan University, Canada Complexity Science Hub, Austria University of Trento, Italy Georgetown University, USA Ben-Gurion University of the Negev, Israel Università degli Studi di Milano, Italy Sharif University of Technology, Iran Indiana University, USA University of Naples Federico II, Italy IGN-ENSG, France Penn State, USA University of Oxford, UK Institut de Mathématiques de Marseille, France Univ. of Electronic Science and Tech., China University of Pisa, Italy INSERM, France Binghamton University, USA Ghent University, Belgium Imperial College London, UK CNR-ISTI, Italy CNRS/CMB/EHESS, France UNIL, Switzerland Univ. degli Studi del Piemonte Orientale, Italy University of Montpellier, France Northeastern University, USA Bell Labs, Nokia, USA CNR Institute for Complex Systems, Italy RWTH Aachen University, Germany Universidad Rey Juan Carlos, Spain Politecnico di Torino, Italy Nanjing University, China Warsaw University of Technology, Poland Trinity College, Ireland Uppsala University, Sweden University of Warsaw, Poland University of Strathclyde, UK Brunel University, UK Orange Innovation, France ICAR-CNR, Italy Humboldt University Berlin, Germany
Albert Solé-Ribalta Sara Sottile Sucheta Soundarajan Jaya Sreevalsan-Nair Christoph Stadtfeld Clara Stegehuis Lovro Šubelj Xiaoqian Sun Michael Szell Boleslaw Szymanski Andrea Tagarelli Kazuhiro Takemoto Frank W. Takes Fabien Tarissan Laura Temime François Théberge Guy Theraulaz I-Hsien Ting Michele Tizzani Michele Tizzoni Olivier Togni Leo Torres Sho Tsugawa Francesco Tudisco Melvyn S. Tyloo Stephen M. Uzzo Lucas D. Valdez Pim Van der Hoorn Piet Van Mieghem Fabio Vanni Christian L. Vestergaard Tiphaine Viard Julian Vicens Blai Vidiella Pablo Villegas Maria Prosperina Vitale Pierpaolo Vivo Johannes Wachs Huijuan Wang Lei Wang Guanghui Wen Mateusz Wilinski
Universitat Oberta de Catalunya, Spain University of Trento, Italy Syracuse University, USA IIIT Bangalore, India ETH Zürich, Switzerland University of Twente, Netherlands University of Ljubljana, Slovenia Beihang University, China IT University of Copenhagen, Denmark Rensselaer Polytechnic Institute, USA University of Calabria, Italy Kyushu Institute of Technology, Japan Leiden University, Netherlands CNRS & ENS Paris-Saclay, France Cnam, France TIMC, France Université Paul Sabatier and CNRS, France National University of Kaohsiung, Taiwan ISI Foundation, Italy University of Trento, Italy University of Burgundy, France Northeastern University, USA University of Tsukuba, Japan The University of Edinburgh, UK Los Alamos National Lab, USA National Museum of Mathematics, USA IFIMAR-UNMdP, Argentina Eindhoven University of Technology, Netherlands Delft University of Technology, Netherlands University of Insubria, Italy Institut Pasteur, France Télécom Paris, France Eurecat, Spain CSIC, Spain Enrico Fermi Research Center (CREF), Italy University of Salerno, Italy King’s College London, UK Corvinus University of Budapest, Hungary Delft University of Technology, Netherlands Beihang University, China Southeast University, Nanjing, China Los Alamos National Laboratory, USA
Dirk Witthaut Bin Wu Mincheng Wu Tao Wu Haoxiang Xia Gaoxi Xiao Nenggang Xie Takahiro Yabe Kaicheng Yang Yian Yin Jean-Gabriel Young Irfan Yousuf Yongguang Yu Paolo Zeppini Shi Zhou Wei-Xing Zhou Eugenio Zimeo Lorenzo Zino Michal R. Zochowski Claudia Zucca
Forschungszentrum Jülich, Germany Beijing Univ. of Posts and Telecom., China Zhejiang University of Technology, China Chongqing Univ. of Posts and Telecom., China Dalian University of Technology, China Nanyang Technological University, Singapore Anhui University of Technology, China MIT, USA Northeastern University, USA Cornell University, USA University of Vermont, USA Univ. of Engineering and Technology, Pakistan Beijing Jiaotong University, China University Cote d’Azur, France University College London (UCL), UK East China Univ. of Science and Techno., China University of Sannio, Italy Politecnico di Torino, Italy University of Michigan, USA Tilburg University, Netherlands
Contents
Higher-Order Interactions

Analyzing Temporal Influence of Burst Vertices in Growing Social Simplicial Complexes . . . . 3
Chikashi Takai, Masahito Kumano, and Masahiro Kimura

An Analytical Approximation of Simplicial Complex Distributions in Communication Networks . . . . 16
Ke Shen and Mayank Kejriwal

A Dynamic Fitting Method for Hybrid Time-Delayed and Uncertain Internally-Coupled Complex Networks: From Kuramoto Model to Neural Mass Model . . . . 27
Zhengyang Jin

Human Behavior

An Adaptive Network Model for Learning and Bonding During a Varying in Rhythm Synchronous Joint Action . . . . 41
Yelyzaveta Mukeriia, Jan Treur, and Sophie Hendrikse

An Adaptive Network Model for the Emergence of Group Synchrony and Behavioral Adaptivity for Group Bonding . . . . 53
Francesco Mattera, Sophie C. F. Hendrikse, and Jan Treur

Too Overloaded to Use: An Adaptive Network Model of Information Overload During Smartphone App Usage . . . . 67
Emerson Bracy, Henrik Lassila, and Jan Treur

Consumer Behaviour Timewise Dependencies Investigation by Means of Transition Graph . . . . 80
Anton Kovantsev

An Adaptive Network Model for a Double Bias Perspective on Learning from Mistakes within Organizations . . . . 91
Mojgan Hosseini, Jan Treur, and Wioleta Kucharska

Identification of Writing Preferences in Wikipedia . . . . 104
Jean-Baptiste Chaudron, Jean-Philippe Magué, and Denis Vigier

Influence of Virtual Tipping and Collection Rate in Social Live Streaming Services . . . . 116
Shintaro Ueki, Fujio Toriumi, and Toshiharu Sugawara

Information Spreading in Social Media

Algorithmic Amplification of Politics and Engagement Maximization on Social Media . . . . 131
Paul Bouchaud

Interpretable Cross-Platform Coordination Detection on Social Networks . . . . 143
Auriant Emeric and Chomel Victor

Time-Dynamics of (Mis)Information Spread on Social Networks: A COVID-19 Case Study . . . . 156
Zafer Duzen, Mirela Riveni, and Mehmet S. Aktas

A Comparative Analysis of Information Cascade Prediction Using Dynamic Heterogeneous and Homogeneous Graphs . . . . 168
Yiwen Wu, Kevin McAreavey, Weiru Liu, and Ryan McConville

A Tale of Two Cities: Information Diffusion During Environmental Crises in Flint, Michigan and East Palestine, Ohio . . . . 180
Nicholas Rabb, Catherine Knox, Nitya Nadgir, and Shafiqul Islam

Multilingual Hate Speech Detection Using Semi-supervised Generative Adversarial Network . . . . 192
Khouloud Mnassri, Reza Farahbakhsh, and Noel Crespi

Exploring the Power of Weak Ties on Serendipity in Recommender Systems . . . . 205
Wissam Al Jurdi, Jacques Bou Abdo, Jacques Demerjian, and Abdallah Makhoul

Infrastructure Networks

An Interaction-Dependent Model for Probabilistic Cascading Failure . . . . 219
Abdorasoul Ghasemi, Hermann de Meer, and Holger Kantz

Detecting Critical Streets in Road Networks Based on Topological Representation . . . . 231
Masaki Saito, Masahito Kumano, and Masahiro Kimura

Transport Resilience and Adaptation to Climate Impacts – A Case Study on Agricultural Transport in Brazil . . . . 243
Guillaume L'Her, Amy Schweikert, Xavier Espinet, Lucas Eduardo Araújo de Melo, and Mark Deinert

Incremental Versus Optimal Design of Water Distribution Networks – The Case of Tree Topologies . . . . 251
Vivek Anand, Aleksandar Pramov, Stelios Vrachimis, Marios Polycarpou, and Constantine Dovrolis

Social Networks

Retweeting Twitter Hate Speech After Musk Acquisition . . . . 265
Trevor Auten and John Matta

Unveiling the Privacy Risk: A Trade-Off Between User Behavior and Information Propagation in Social Media . . . . 277
Giovanni Livraga, Artjoms Olzojevs, and Marco Viviani

An Extended Uniform Placement of Alters on Spherical Surface (U-PASS) Method for Visualizing General Networks . . . . 291
Emily Chao-Hui Huang and Frederick Kin Hing Phoa

The Friendship Paradox and Social Network Participation . . . . 301
Ahmed Medhat and Shankar Iyer

Dynamics of Toxic Behavior in the Covid-19 Vaccination Debate . . . . 316
Azza Bouleimen, Nicolò Pagan, Stefano Cresci, Aleksandra Urman, and Silvia Giordano

Improved Change Detection in Longitudinal Social Network Measures Subject to Pattern-of-Life Variations . . . . 328
L. Richard Carley and Kathleen M. Carley

Uncovering Latent Influential Patterns and Interests on Twitter Using Contextual Focal Structure Analysis Design . . . . 340
Mustafa Alassad, Nitin Agarwal, and Lotenna Nwana

Not My Fault: Studying the Necessity of the User Classification & Employment of Fine-Level User-Based Moderation Interventions in Social Networks . . . . 354
Sara Nasirian, Gianluca Nogara, and Silvia Giordano

Decentralized Networks Growth Analysis: Instance Dynamics on Mastodon . . . . 366
Eduard Sabo, Mirela Riveni, and Dimka Karastoyanova

Better Hide Communities: Benchmarking Community Deception Algorithms . . . . 378
Valeria Fionda

Crossbred Method: A New Method for Identifying Influential Spreaders from Directed Networks . . . . 388
Nilanjana Saha, Amrita Namtirtha, and Animesh Dutta

Examining Toxicity's Impact on Reddit Conversations . . . . 401
Niloofar Yousefi, Nahiyan Bin Noor, Billy Spann, and Nitin Agarwal

Analyzing Blogs About Uyghur Discourse Using Topic Induced Hyperlink Network . . . . 412
Ifeanyichukwu Umoga, Stella Mbila-Uma, Mustafa Alassad, and Nitin Agarwal

Synchronization

Global Synchronization Measure Applied to Brain Signals Data . . . . 427
Xhilda Dhamo, Eglantina Kalluçi, Gérard Dray, Coralie Reveille, Arnisa Sokoli, Stephane Perrey, Gregoire Bosselut, and Stefan Janaqi

Synchronization Analysis and Verification for Complex Networked Systems Under Directed Topology . . . . 438
Shuyuan Zhang, Lei Wang, and Wei Wang

Tolerance-Based Disruption-Tolerant Consensus in Directed Networks . . . . 449
Agathe Bouis, Christopher Lowe, Ruaridh Clark, and Malcolm Macdonald

Higher-Order Temporal Network Prediction . . . . 461
Mathieu Jung-Muller, Alberto Ceria, and Huijuan Wang

Author Index . . . . 473
Higher-Order Interactions
Analyzing Temporal Influence of Burst Vertices in Growing Social Simplicial Complexes

Chikashi Takai, Masahito Kumano, and Masahiro Kimura
Faculty of Advanced Science and Technology, Ryukoku University, Otsu, Japan
[email protected]
Abstract. Simplicial complexes provide a useful framework of higher-order networks that model co-occurrences and interactions among more than two elements, and are also equipped with a mathematical foundation in algebraic topology. In this paper, we investigate the temporal growth processes of simplicial complexes derived from human activities and communication on social media from a perspective of burst vertices. First, we empirically show that most new simplices contain burst vertices, while for each new simplex containing a burst vertex, its vertices other than the corresponding burst vertex do not necessarily co-occur with the burst vertex itself within the not so distant past. We thus examine the problem of finding which burst vertex is contained in a new simplex from the occurrence history of burst vertices. In particular, we focus on analyzing the influence of the occurrence events of burst vertices in terms of time-decays. To this end, we propose a probabilistic model incorporating a log-normal-like time-decay factor and give its learning method. Using real social media datasets, we demonstrate the significance of the proposed model in terms of prediction performance, and uncover the time-decay effects of burst vertices in the occurrence of new simplices by applying the proposed model.

Keywords: higher-order network evolution · probabilistic model · social media analysis · time-decay effect

1 Introduction
Analyzing dynamical properties of higher-order networks is an important research issue in network science since many real-world systems can be modeled through co-occurrences and interactions among more than two elements, and can evolve over time. Simplicial complexes are a typical framework of higher-order networks representing co-occurrences among any number of elements, and have a mathematical foundation in algebraic topology [5,14]. Moreover, this approach has been successfully applied to several problem domains such as brain organization [13], protein interaction [7] and social influence spreading [9]. Recently, social media services have provided a new platform for facilitating various human
activities and communication. In this paper, we investigate the temporal growth processes of simplicial complexes derived from users’ activities and communication on social media services. For 19 datasets from various domains, Benson et al. [3] explored the temporal growth processes of simplicial complexes in terms of simplicial closure, and revealed the fundamental difference between higher-order networks and traditional dyadic networks in the growth mechanism. However, except for special cases such as simplicial closure phenomena, few attempts have been made at systematically analyzing the growing process of a simplicial complex for realworld data since such an analysis can be computationally challenging for large data. It is known that many dynamics of event occurrences related to human activities exhibit a bursty property, which is characterized by intermittent switching between prominent high-activity intervals and periods of no or reduced activity [1,4,10,11,16]. Instead of simplicial closure, we thus focus on burst vertices in a growing simplicial complex derived from a social media service, and investigate its growth process from a perspective of burst vertices. To extract a set of burst vertices, we employ a Fisher’s exact test-based method, obtained through an appropriate extension of the conventional χ2 test-based method [16]. Using three social media datasets, we empirically show that most of the new simplices contain burst vertices, indicating that the new simplices are closely related to burst vertices. However, we also see that for each new simplex containing a burst vertex, its vertices other than the corresponding burst vertex do not necessarily co-occur with the burst vertex itself within the not so distant past, meaning that it is difficult to explain which vertices are included in the new simplex from the co-occurrence history of the burst vertex. In this paper, given a set of burst vertices for a growing simplicial complex, we investigate the problem of finding which burst vertex is contained in a new simplex from the occurrence history of the burst vertices. We in particular focus on examining the influence of the occurrence events of the burst vertices in terms of time-decays. To this end, we first introduce a basic model for the problem of burst vertex selection, which is naturally derived from the so-called preferential attachment mechanism [2], and propose a probabilistic model that incorporates a log-normal-like time-decay factor into the basic model. Also, we provide a stable learning method for the proposed model. Using the social media datasets described above, we evaluate the effectiveness of the proposed model compared with several baseline methods in terms of prediction performance. Moreover, we analyze the time-decay effects of the burst vertices in the occurrence of new simplices by applying the proposed model.
2 Related Work
First, this paper is related to the studies on modeling time-decay of influence in social media posting data. Since online-items posted on a social media service can gain their popularity through sharing-events such as retweets for a tweet
on Twitter, much interest has recently been attracted to modeling the underlying event dynamics, where time-decay effects are appropriately incorporated. The temporal models developed were successfully applied to analyzing citation dynamics [15,17], retweet dynamics [20], information diffusion dynamics [8,21] and opinion dynamics [6]. As for modeling the influence of past events, they incorporate the time-decay effects in the forms of a non-homogeneous Poisson process [15,17] or a point process (e.g., Hawkes process) [6,8,20,21]. However, purely from a perspective of their stochastic nature, it should be intrinsically difficult to accurately predict the timing of event occurrence by them. The main purpose of this paper is not to accurately predict when a new simplex occurs, but to analyze the temporal influence of the past occurrence events of a burst vertex on a new simplex containing itself. To this end, unlike the continuous-time stochastic models described above, we devise a discrete probabilistic model based on a preferential attachment mechanism.

Also, this paper is related to link prediction in networks. Much effort has been devoted to studying link prediction for traditional dyadic networks [12]. However, only a few attempts have been made at higher-order link prediction for large real-world data since it can be computationally challenging. For example, by restricting a set of candidate higher-order links, Zhang et al. [19] presented Coordinated Matrix Minimization (CMM) based on an adjacency space, and empirically showed that CMM outperforms several baselines. As mentioned before, Benson et al. [3] examined higher-order link prediction in terms of simplicial closure to reduce a computational load of higher-order network analysis. On the other hand, Cencetti et al. [4] focused on a temporal hypergraph derived from human proximity interactions for five different social settings, and unlike link prediction, they examined the configuration transitions before and after the occurrences of 3-hyperedges. In this paper, instead of simplicial closure and configuration transition, we focus on burst vertices, and explore the problem of finding which burst vertex is contained in a new simplex from the occurrence history of burst vertices in a growing simplicial complex.
3 Preliminaries
We consider a set of events A that represents users’ activities or communication generated on a social media service within a time period J0 , and investigate the growing simplicial complex K derived from A, where we focus on the case of discrete-time observation, and employ the day as our unit of time in accordance with common human activities in our experiments (see Sect. 5.2 for the specific examples). Let V denote the set of vertices (i.e., 0-simplices) in K. We assume that a vertex v ∈ V represents an item or a user to be analyzed for the users’ activities or communication on the social media service, and each element (S, t) ∈ A indicates that an interaction event to be analyzed occurs exactly on vertex set S at time t, where S ⊂ V and t ∈ J0 . Note that A may contain multiple instances of the same element (S, t). For each non-negative integer n with n < |V |, a set
of n + 1 distinct vertices σ = {v0, v1, . . . , vn} ⊂ V is referred to as an n-simplex generated in K at a time t ∈ J0 when there exists some (S, t) ∈ A such that σ ⊂ S, meaning that n + 1 vertices v0, v1, . . . , vn co-occur on the social media service at time t.

Table 1. 2 × 2 contingency table for burst vertex detection.

                         Time-period (t − Δt1, t]   Time-period (t − Δt1 − Δt2, t − Δt1]
Events including u       m1(u; t)                   m2(u; t)
Events not including u   m̄1(u; t)                   m̄2(u; t)
To examine the growing nature of K, we assume that every simplex generated in K at a time t ∈ J0 is a simplex in K at time t+ for any t+ ∈ J0 with t+ ≥ t. A simplex τ in K is called a proper face of a simplex σ if τ ⊊ σ. Based on the definition of simplicial complex [5,14], we also assume that every proper face of a simplex in K at a time t ∈ J0 is a simplex in K at time t+ ∈ J0 if t+ ≥ t. We say that a simplex generated in K at time t ∈ J0 is a new simplex in K at time t if it is not a proper face of a simplex in K at time t. In this paper, for a specified analysis time-period J with J ⊂ J0, we aim to clarify the mechanism by which new simplices occur in K over J.
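To make this bookkeeping concrete, the following minimal Python sketch (ours, not the authors' code; the class and method names are illustrative) registers timestamped events and flags an event's vertex set as a new simplex exactly when it is not already a face of a previously observed simplex.

```python
# A minimal sketch of the growth bookkeeping described above. An event set S
# is recorded as a new (|S|-1)-simplex when it is not contained in (i.e., not
# a face of) any simplex observed so far.
from dataclasses import dataclass, field

@dataclass
class GrowingComplex:
    event_sets: list = field(default_factory=list)   # maximal simplices seen so far

    def _is_face(self, sigma):
        # Naive linear scan; an inverted index from vertices to events would
        # be needed at the scale of the datasets used in Sect. 5.
        return any(sigma <= S for S in self.event_sets)

    def add_event(self, S, t):
        """Register event (S, t); returns True if S is a new simplex at time t."""
        S = frozenset(S)
        is_new = not self._is_face(S)
        self.event_sets.append(S)
        return is_new

K = GrowingComplex()
print(K.add_event({"apple", "flour", "butter"}, 0))   # True: a new 2-simplex
print(K.add_event({"apple", "flour"}, 1))             # False: a proper face
```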
4 Proposed Method
As described before, many event dynamics associated with human activities and communication involve bursts. In this paper, we focus on burst vertices and investigate the occurrence mechanism of new simplices in the growing simplicial complex K during the analysis period J from the perspective of burst vertices.

4.1 Burst Vertices
There have been many studies on analysis of bursty phenomena found in nature (see [1,4,10,11,16]). Regarding those related to our problem, Swan and Allan [16] addressed the problem of efficiently extracting hot topics from a document stream that arrive in discrete batches, and provided a χ2 test-based method to detect the intervals in which a key-phrase is in bursty state. Since their method is widely used as a basic and simple approach (see [11]) and a Fisher’s exact test is generally more effective than a χ2 test, we extend their method and consider extracting burst vertices in K during J on the basis of Fisher’s exact test. For any t ∈ J , we consider a relatively short time-period (t − Δt1 , t] immediately before time t and a longer time-period (t − Δt1 − Δt2 , t − Δt1 ] immediately
before time t − Δt1 such that both time-periods are included in J0, where we assume that Δt1 > 0 is not large and Δt2 ≫ Δt1.¹ Then, we calculate m1(u; t), m2(u; t), m̄1(u; t) and m̄2(u; t) for any u ∈ V. Here, for the set of events belonging to A during (t − Δt1, t], m1(u; t) and m̄1(u; t) are defined as the number of events including u and the number of events not including u, respectively. Also, for the set of events belonging to A during (t − Δt1 − Δt2, t − Δt1], m2(u; t) and m̄2(u; t) are defined as the number of events including u and the number of events not including u, respectively. Note that r1(u; t) = m1(u; t)/{m1(u; t) + m̄1(u; t)} and r2(u; t) = m2(u; t)/{m2(u; t) + m̄2(u; t)} indicate the occurrence rates of u during (t − Δt1, t] and (t − Δt1 − Δt2, t − Δt1], respectively. Then, we say that vertex u is in a bursty state at time t if r1(u; t) > r2(u; t) and the pair (u, t) is detected by Fisher's exact test for the 2 × 2 contingency table shown in Table 1. Moreover, we refer to u ∈ V as a burst vertex in K during J if the vertex u has experienced being in a bursty state at least once within the time period J. Let U denote the set of all burst vertices in K during J.
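As an illustration, a sketch of this burst test is given below (ours, not the authors' implementation). It assumes daily time-stamps and the experimental settings of footnote 1 (Δt1 = 7 days, Δt2 = 14 days); the paper does not state the significance level used, so the 0.05 threshold here is an assumption.

```python
# A sketch of the Fisher's exact test-based burst-vertex detection described
# above, using the 2x2 contingency table of Table 1.
from scipy.stats import fisher_exact

def is_bursty(events, u, t, dt1=7, dt2=14, sig_level=0.05):
    """events: iterable of (vertex_set, day) pairs; returns True if vertex u
    is in a bursty state at day t (one-sided Fisher's exact test)."""
    recent = [S for S, day in events if t - dt1 < day <= t]
    older = [S for S, day in events if t - dt1 - dt2 < day <= t - dt1]
    if not recent or not older:
        return False
    m1 = sum(u in S for S in recent)
    m2 = sum(u in S for S in older)
    m1_bar, m2_bar = len(recent) - m1, len(older) - m2
    if m1 / len(recent) <= m2 / len(older):   # requires r1(u; t) > r2(u; t)
        return False
    # Table 1 laid out as [[m1, m2], [m1_bar, m2_bar]]; "greater" tests for
    # an elevated occurrence rate of u in the recent window.
    _, p_value = fisher_exact([[m1, m2], [m1_bar, m2_bar]], alternative="greater")
    return p_value < sig_level
```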
4.2 Proposed Model
As described in Sect. 1, we will empirically show in Sect. 5.2 that most of the new simplices in K over J contain burst vertices in U. Thus, we focus on each new n-simplex σ that is generated in K at time t ∈ J and that includes some burst vertex in U. We consider identifying the burst vertex included in σ based on the past occurrence histories of burst vertices in U before time t, H_U(t) = {H_u(t)}_{u∈U}, where each H_u(t) is the set of time-stamps s ∈ [t − Δt0, t) at which events belonging to A and containing u occur. Here, Δt0 indicates the length of examining the past histories of burst vertices and satisfies [t − Δt0, t) ⊂ J0.² For this problem of burst vertex selection, we devise a model for the probability P(u ∈ σ | H_U(t)) that a burst vertex u ∈ U is selected for new simplex σ at time t given H_U(t), and investigate the influence of the past occurrence events of burst vertices H_U(t) in terms of time-decays.

We first introduce a basic model for P(u ∈ σ | H_U(t)) naturally derived from the preferential attachment mechanism [2]. Here, for a traditional growing dyadic network, this mechanism states that the probability of joining a new link to an existing node v is proportional to the degree of v, i.e., the number of links that v has obtained so far. Since |H_u(t)| is regarded as a type of higher-order degree of vertex u at time t − 1, the basic model can be defined as

P(u ∈ σ | H_U(t)) ∝ |H_u(t)|,  ∀u ∈ U.  (1)
To analyze the influence of the past occurrence events of burst vertices H_U(t) in terms of time-decays, we propose incorporating a time-decay factor ψ(t − s) into the basic model (see Eq. (1)) in the following way:

P(u ∈ σ | H_U(t)) ∝ Σ_{s∈H_u(t)} ψ(t − s),  ∀u ∈ U.  (2)

¹ In our experiments, we set Δt1 to one week (seven days) and Δt2 to 2Δt1.
² In our experiments, we set Δt0 to four weeks (28 days).
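A compact sketch of both selection models follows (illustrative code, not the authors'); it scores each burst vertex by Eq. (1) or Eq. (2) with the log-normal-like factor of Eq. (3) below, and normalizes the scores over U to obtain P(u ∈ σ | H_U(t)).

```python
# A sketch of the burst-vertex selection models of Eqs. (1)-(3).
import math

def decay_lognormal(x, alpha, c0=1.0):
    # Eq. (3): psi(x) = (c0 / x) * exp(-alpha * (log x)^2), for x > 0
    return (c0 / x) * math.exp(-alpha * math.log(x) ** 2)

def selection_probs(H, t, alpha=None):
    """H: dict mapping each burst vertex u to its history H_u(t), a list of
    past event days; alpha=None gives the basic model of Eq. (1)."""
    if alpha is None:
        scores = {u: float(len(stamps)) for u, stamps in H.items()}      # Eq. (1)
    else:
        scores = {u: sum(decay_lognormal(t - s, alpha) for s in stamps)  # Eq. (2)
                  for u, stamps in H.items()}
    z = sum(scores.values())
    return {u: score / z for u, score in scores.items()}

H = {"u1": [1, 5, 9], "u2": [2, 3, 9, 9]}    # toy histories; t measured in days
print(selection_probs(H, t=10))              # preferential-attachment baseline
print(selection_probs(H, t=10, alpha=0.5))   # log-normal time-decay model
```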
In our experiments, we set Δt1 to one week (seven days) and Δt2 to 2Δt1 . In our experiments, we set Δt0 to four weeks (28 days).
8
C. Takai et al.
Since it is pointed out that the time-decay phenomena observed in many of event dynamics related to human activities can be appropriately modeled by log-normal distributions [15,17], we propose employing a log-normal-like timedecay factor, c0 2 (∀x > 0), (3) exp −α(log x) , ψ(x) = x where c0 is a positive constant, and α > 0 is a time-decay parameter that is estimated from data. 4.3
Learning Method
For each positive integer n < |V |, let Dn be the set of new n-simplices that are generated in K during J and that contain some burst vertex in U. By employing the MAP estimation framework, we present a stable method of estimating the value of time-decay parameter α in the proposed model from Dn . Throughout the paper, we will use the following notation: A new simplex σ ∈ Dn is generated at time tσ ∈ J and contains subset Uσ of U. Note that σ can contain multiple burst vertices in U. By assuming a gamma prior since parameter α has a positive value, we maximize the objective function log P (u∗ ∈ σ | HU (tσ )) + (c1 − 1) log α − c2 α (4) L(α) = σ∈D n u∗ ∈Uσ
with respect to α, where c1 > 1 and c2 > 0 are hyper-parameters. We employ an EM-like interactive method to maximize L(α) as follows: Let α ¯ be an current ¯ estimate of α, and let ψ(x) denote the value of ψ(x) for α ¯ (see Eq. (3)). We define a Q-function Q(α; α) ¯ as ⎧ ⎨ ¯ σ − s∗ ) ψ(t ∗ Q(α; α) ¯ = ¯ σ − s) log ψ(tσ − s ) ⎩∗ ψ(t n ∗ ∗ s∈H (t ) σ u σ∈D u ∈Uσ s ∈Hu∗ (tσ ) ⎛ ⎞⎫ ⎬ − log ⎝ ψ(tσ − s)⎠ + (c1 − 1) log α − c2 α. (5) ⎭ u∈U s∈Hu (tσ )
From Jensen’s inequality and Eqs. (2), (4) and (5), we have L(α) − L(¯ α) ≥ Q(α; α) ¯ + C, where C is the constant that does not depend on α and satisfies Q(¯ α; α)+C ¯ = 0. It is proved from Eqs. (3) and (5) that Q(α; α) ¯ is a concave function of α. Thus, we can effectively and stably maximize Q(α; α) ¯ by solving ∂Q/∂α = 0 for α. Therefore, we can obtain a stable update formula for α. We omit the details due to the lack of space. We will further elaborate on our learning method in the longer version of the paper.
Analyzing Temporal Influence of Burst Vertices
5
9
Experiments
We investigated growing simplicial complexes derived from users’ activities and communication on real social media services. 5.1
Datasets
We employed three datasets obtained from three different kinds of social media services; Japanese cooking-recipe sharing service “Cookpad”,3 location-based social networking service (LBSN) “Foursquare” and question-and-answer service “Stack Overflow”. As for Cookpad, we consider posting a recipe as a user’s activity event, and investigate the temporal evolution of co-occurrences of main ingredients in recipes in the framework of growing simplicial complex K, where the set of vertices V consists of main ingredients,4 and each interaction event (S, t) ∈ A indicates that a recipe having S ⊂ V as a group of main ingredients was posted on day t. Thus, a new simplex generated on day t means a recipe having a combination of main ingredients that did not appear before the day t. We examined all recipes posted for the Dessert category during the period J0 from December 12, 2011 to January 10, 2014, and used the period J from January 4,
Fig. 1. Number of new n-simplices, |D^n_0| (1 ≤ n ≤ 5).
0.945 1.000 1.000 1.000 1.000
Foursquare (Kyoto) data
0.854 0.957 0.982 0.991 0.991
Stack Overflow (Threads) data 0.997 1.000 1.000 1.000 1.000
3 4
https://cookpad.com/. First, we removed general-purpose ingredients for Japanese food such as soy sauce, salt, sugar, water, edible oil, and so on. Furthermore, we extracted the ingredients appearing in five or more recipes.
10
C. Takai et al.
2013 to January 3, 2014 as an analysis period. We refer to this as the Cookpad dataset, Then, we had |V | = 241 and |A| = 1, 715, 596. As for Foursquare, if a user checks in to a group of points-of-interest (POIs) within a specific city for sightseeing on a given day, we consider it as an activity event of that user. We investigate the temporal evolution of such co-occurrences of POIs through growing simplicial complex K, where V consists of main POIs, and each interaction event (S, t) ∈ A indicates that a user checked in to a group of main POIs, S ⊂ V , on day t. Thus, a new n-simplex represents a new combination of n + 1 main POIs to which a user checked in on one day. We explored the check-in data in Kyoto, Japan during the period J0 from March 14, 2012 to February 9, 2014, which was extracted from the Global-scale Check-in Dataset5 provided by Yang et al. [18]. We employed the period J from February 2, 2013 to February 3, 2014 as an analysis period. Here, we considered a set of tiles in Kyoto by exploiting a tile-based map system of zoom level 17, and adopted the tiles having more than one check-in as main POIs. Note that such a tile is a square with a side of about 125 m. We refer to this as the Foursquare dataset. Then, we had |V | = 5, 457 and |A| = 519, 409. n Table 3. Results for |D+ | / |D0n | (1 ≤ n ≤ 5).
n=1 n=2 n=3 n=4 n=5 Cookpad (Dessert) data
0.203 0.173 0.140 0.198 0.310
Foursquare (Kyoto) data
0.079 0.115 0.149 0.178 0.194
Stack Overflow (Threads) data 0.105 0.133 0.175 0.294 0.300
Fig. 2. Evaluation results for the Cookpad dataset.
As for Stack Overflow, we view the creation of a question-and-answer thread as a form of communication event among users. In particular, we focus on threads where both the question and all answers were posted on the same day. and investigate the temporal evolution of co-occurrences of users in threads from the viewpoint of growing simplicial complex K, where V consists of main users, and each interaction event (S, t) ∈ A indicates that a group of main users S ⊂ 5
https://sites.google.com/site/yangdingqi/home/foursquare-dataset.
Analyzing Temporal Influence of Burst Vertices
11
V appeared in a thread on day t. Thus, a new n-simplex represents a new combination of n + 1 main users who appeared in one thread. We explored the Thread-Stack-Overflow Dataset6 [3] during the period J0 from December 12, 2011 to January 10, 2014, and employed the period J from January 4, 2013 to January 3, 2014 as an analysis period. For constructing the set V of main users to be analyzed, we adopted the users who posted questions or answers more than twice per week on average. We refer to this as the Stack Overflow dataset. Then, we had |V | = 824 and |A| = 2, 369, 572. 5.2
Empirical Data Analysis
For the three social media datasets described above, we investigated the basic properties of Dn , the set of new simplices containing burst vertices, for 1 ≤ n ≤ 5. Here, the number of burst vertices |U| extracted was 198, 1, 227 and 722 for the Cookpad, Foursquare and Stack Overflow datasets, respectively. Let D0n denote the set of all new n-simplices generated in K during J . Note that Dn ⊂ D0n . Figure 1 displays the number of new n-simplices |D0n | for the three datasets. We first examined how likely it was that a new n-simplex would contain a burst vertex. Table 2 shows the fraction of such new n-simplices, |Dn | / |D0n |. Next, we examined the occurrence frequency of a new simplex containing a n denote the set of new simplices burst vertex that is in the bursty state. Let D+ n ∗ σ ∈ D for which some u ∈ Uσ is in the bursty state at time tσ . Table 3 shows n | / |D0n |. From Tables 2 and 3, we see the fraction of such new n-simplices, |D+ n n n that |D | / |D0 | is very large although |D+ | / |D0n | is small. These results imply that most of new n-simplices contain a burst vertex even though they do not necessarily include a burst vertex that is in the bursty state.
Fig. 3. Evaluation results for the Foursquare dataset.
Fig. 4. Evaluation results for the Stack Overflow dataset.
On the other hand, for each new n-simplex σ ∈ D^n, we investigated whether there exists some u* ∈ U_σ such that an interaction event containing both vertices u* and v occurs within the past Δt_0 = 28 days for every v ∈ σ \ {u*}. Let D_c^n be the set of new n-simplices in D^n that satisfy the above condition. Then, we observed that |D_c^n| / |D^n| was very small and less than 0.05. Thus, we see that for a new simplex σ containing a burst vertex u*, its vertices other than u* do not necessarily co-occur with u* in the not-so-distant past. This result suggests that it is difficult to explain which vertices form the new simplex σ only from the past co-occurrence events with the burst vertex u*. In this paper, we focus on finding a burst vertex contained in σ from the past history H_U(t_σ).
5.3 Evaluation of Proposed Model
We evaluate the proposed model by comparing it with three typical alternatives and the basic model (see Sect. 4.2) in terms of prediction performance. Here, the three alternatives are obtained from the proposed model by changing only the time-decay factor ψ(x). The first alternative employs an exponential-like time-decay factor, ψ(x) = c_0 exp(−αx) (∀x > 0) (see, e.g., [11]). The second alternative employs a normal-like time-decay factor, ψ(x) = c_0 exp(−αx²) (∀x > 0), which can be regarded as an extension of the first one. The third alternative employs a power-law-like time-decay factor, ψ(x) = c_0 x^(−α) = c_0 exp(−α log x) (∀x > 0) (see, e.g., [1]). Here, α > 0 is a time-decay parameter and c_0 is a positive constant. We estimated the parameter α in the same way as for the proposed model (see Sect. 4.3). For all four models, including the proposed model, the hyper-parameters were set to c_1 = 2 and c_2 = 4.
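For concreteness, the candidate time-decay factors can be sketched as follows. This is a minimal illustration, assuming ψ(x) is evaluated for x ≥ 1; the exact parameterization of the proposed log-normal-like factor is defined by Eq. (3) earlier in the paper, so the form used below is an assumption, and c_0 and α are placeholders.

```python
import numpy as np

# Sketch of the time-decay factors compared in Sect. 5.3. c0 and alpha are
# placeholders; alpha is estimated per dataset (Sect. 4.3). The log-normal-like
# factor below is an assumed form, not the paper's exact Eq. (3).

def psi_exponential(x, alpha, c0=1.0):
    # First alternative: exponential-like decay (cf. [11]).
    return c0 * np.exp(-alpha * x)

def psi_normal(x, alpha, c0=1.0):
    # Second alternative: normal-like decay, an extension of the first.
    return c0 * np.exp(-alpha * x ** 2)

def psi_power_law(x, alpha, c0=1.0):
    # Third alternative: power-law-like decay (cf. [1]).
    return c0 * x ** (-alpha)   # equivalently c0 * exp(-alpha * log x)

def psi_log_normal(x, alpha, c0=1.0):
    # Proposed model (assumed form): log-normal-like decay,
    # monotonically decreasing for x >= 1.
    return c0 * np.exp(-alpha * np.log(x) ** 2)
```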
Fig. 5. Results for time-decay parameter α estimated.
Fig. 6. Results of time-decay factor ψ(x) for the Foursquare dataset.
As an evaluation measure, we employed the prediction log-likelihood ratio PLR of each model (see Eq. (2)) against a crude baseline,

PLR = Σ_{(σ, u*) ∈ D_test^n} ( log P(u* ∈ σ | H_U(t_σ)) − log(1/|U|) ),   (6)
where D_test^n is the test set, i.e., the set of all pairs (σ, u*) with u* ∈ σ such that σ is a new n-simplex generated within the one week following J and u* is a burst vertex in U. Here, 1/|U| is the value of P(u* ∈ σ | H_U(t_σ)) for the uniformly random selection model (i.e., the crude baseline). Note that PLR measures the relative prediction performance of each model against random guessing in log-scale (see Eq. (6)). We evaluated the five models for the Cookpad, Foursquare and Stack Overflow datasets in terms of PLR. Figures 2, 3 and 4 show the results, where the cases of n = 2, 3, 4 are displayed. First, we observe that the basic model outperforms random guessing, implying that examining the past event history H_U(t_σ) can be effective for finding a burst vertex contained in a new n-simplex σ. Next, we see that the proposed model significantly outperforms all other models for all datasets. This demonstrates that it is important to appropriately model the influence of the past occurrence events of burst vertices in terms of time-decays, and that the log-normal-like time-decay factor is effective for this purpose.
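A minimal sketch of how the PLR of Eq. (6) could be computed is given below; predict_prob is a hypothetical stand-in for a learned model's P(u* ∈ σ | H_U(t_σ)), and the representation of the test set is an assumption.

```python
import math

def prediction_log_likelihood_ratio(test_pairs, predict_prob, num_burst_vertices):
    """Sketch of Eq. (6): PLR of a model against the uniform baseline.

    test_pairs         -- iterable of (sigma, u_star) pairs in D_test^n
    predict_prob       -- hypothetical callable returning the model's
                          P(u* in sigma | H_U(t_sigma))
    num_burst_vertices -- |U|; the uniform baseline assigns 1/|U|
    """
    baseline = math.log(1.0 / num_burst_vertices)
    return sum(math.log(predict_prob(sigma, u_star)) - baseline
               for sigma, u_star in test_pairs)
```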
5.4 Analysis of Temporal Influence
For the creation of each new simplex σ ∈ D^n, we analyzed the influence of the past occurrence events of burst vertices H_U(t_σ) in terms of time-decays by applying the proposed model. Figure 5 shows the estimated values of the time-decay parameter α for n = 2, 3, 4, where the results for the Cookpad, Foursquare and Stack Overflow datasets are displayed. We observe that the magnitude relationship of the α values among these three datasets varies with n. This implies that the time-decay property depends heavily on the dataset and the dimension n. Note that the time-decay factor ψ(x) of the proposed model (see Eq. (3)) was a monotonically decreasing function in x ≥ 1 for all cases. We also observe that the value of α varies distinctly with n for the Foursquare dataset. Figure 6 shows the results of ψ(x) (x ≥ 1) for the Foursquare dataset, and indicates that there was a noticeable difference in the time-decay effect between the cases of n = 2 and n = 4 on the one hand, and the case of n = 3 on the other. Thus, we see that for n = 3, the influence of the occurrence events of burst vertices tends to decrease more slowly as time proceeds. We emphasize that our analysis method has provided novel insights into the impact of burst vertices on the creation of new simplices from the perspective of time-decay effects. These results demonstrate the effectiveness of the proposed method.
6 Conclusion
We explored the growth processes of simplicial complexes derived from users’ activities and communication on social media services from the perspective of burst vertices. We employed a conventional method based on Fisher’s exact test to extract a set of burst vertices in a growing simplicial complex. Using three social media datasets obtained from Cookpad, Foursquare and Stack Overflow, we first showed that most of the new simplices contain burst vertices, i.e., the
new simplices are closely related to burst vertices. However, we also demonstrated that the co-occurrence history of a burst vertex cannot completely determine which vertices form a new simplex containing the burst vertex. We thus addressed the problem of finding which burst vertex is contained in a new simplex from the occurrence history of burst vertices. We focused on investigating the influence of the past occurrence events of burst vertices on the creation of a new simplex in terms of time-decays. To this end, we have proposed a probabilistic model that combines a log-normal-like time-decay factor with the preferential attachment mechanism, and presented its learning method. Using the social media datasets described above, we have shown that the proposed model is effective by comparing it with four baseline models in terms of prediction performance. Moreover, by applying the proposed model, we have revealed several interesting properties of the time-decay effects of burst vertices in the occurrence process of new simplices.

Acknowledgements. This work was supported in part by JSPS KAKENHI Grant Number JP21K12152. The Cookpad dataset we used in this paper was provided by Cookpad Inc. and the National Institute of Informatics.
References
1. Barabási, A.L.: The origin of bursts and heavy tails in human dynamics. Nature 435, 207–211 (2005)
2. Barabási, A.L., Albert, R.: Emergence of scaling in random networks. Science 286, 509–512 (1999)
3. Benson, A.R., Abebe, R., Schaub, M.T., Jadbabaie, A., Kleinberg, J.: Simplicial closure and higher-order link prediction. Proc. Natl. Acad. Sci. U.S.A. 115(48), E11221–E11230 (2018)
4. Cencetti, G., Battiston, F., Lepri, B., Karsai, M.: Temporal properties of higher-order interactions in social networks. Sci. Rep. 11, 7028 (2021)
5. Courtney, O., Bianconi, G.: Generalized network structures: the configuration model and the canonical ensemble of simplicial complexes. Phys. Rev. E 93, 062311 (2016)
6. De, A., Valera, I., Ganguly, N., Bhattacharya, S., Gomez-Rodriguez, M.: Learning and forecasting opinion dynamics in social networks. In: Proceedings of NIPS 2016, pp. 397–405 (2016)
7. Estrada, E., Ross, G.J.: Centralities in simplicial complexes. Applications to protein interaction networks. J. Theoret. Biol. 438, 46–60 (2018)
8. Farajtabar, M., Du, N., Gomez-Rodriguez, M., Valera, I., Zha, H., Song, L.: Shaping social activity by incentivizing users. In: Proceedings of NIPS 2014, pp. 2474–2482 (2014)
9. Iacopini, I., Petri, G., Barrat, A., Latora, V.: Simplicial models of social contagion. Nat. Commun. 10, 2485 (2019)
10. Karsai, M., Kaski, K., Barabási, A.L., Kertész, J.: Universal features of correlated bursty behaviour. Sci. Rep. 2, 397 (2012)
11. Kleinberg, J.: Bursty and hierarchical structure in streams. Data Min. Knowl. Disc. 7, 373–397 (2003)
12. Lü, L., Zhou, T.: Link prediction in complex networks: a survey. Phys. A 390(6), 1150–1170 (2011)
13. Petri, G., et al.: Homological scaffolds of brain functional networks. J. Royal Soc. Interface 11, 20140873 (2014)
14. Preti, G., Morales, G.D.F., Bonchi, F.: STruD: truss decomposition of simplicial complexes. In: Proceedings of WWW 2021, pp. 3408–3418 (2021)
15. Shen, H., Wang, D., Song, C., Barabási, A.L.: Modeling and predicting popularity dynamics via reinforced Poisson processes. In: Proceedings of AAAI 2014, pp. 291–297 (2014)
16. Swan, R., Allan, J.: Automatic generation of overview timelines. In: Proceedings of SIGIR 2000, pp. 49–56 (2000)
17. Wang, D., Song, C., Barabási, A.L.: Quantifying long-term scientific impact. Science 342(6154), 127–132 (2013)
18. Yang, D., Zhang, D., Qu, B.: Participatory cultural mapping based on collective behavior in location based social networks. ACM Trans. Intell. Syst. Technol. 7, 30:1–30:23 (2016)
19. Zhang, M., Cui, Z., Jiang, S., Chen, Y.: Beyond link prediction: predicting hyperlinks in adjacency space. In: Proceedings of AAAI 2018, pp. 4430–4437 (2018)
20. Zhao, Q., Erdogdu, M., He, H., Rajaraman, A., Leskovec, J.: SEISMIC: a self-exciting point process model for predicting tweet popularity. In: Proceedings of KDD 2015, pp. 1513–1522 (2015)
21. Zhou, K., Zha, H., Song, L.: Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In: Proceedings of AISTATS 2013, pp. 641–649 (2013)
An Analytical Approximation of Simplicial Complex Distributions in Communication Networks

Ke Shen and Mayank Kejriwal(B)

University of Southern California, Los Angeles, CA 90292, USA
{keshen,kejriwal}@isi.edu
https://usc-isi-i2.github.io/kejriwal/
Abstract. In recent years, there has been a growing recognition that higher-order structures are important features in real-world networks. A particular class of structures that has gained prominence is known as a simplicial complex. Despite their application to complex processes such as social contagion and novel measures of centrality, not much is currently understood about the distributional properties of these complexes in communication networks. Furthermore, it is also an open question as to whether an established growth model, such as scale-free network growth with triad formation, is sophisticated enough to capture the distributional properties of simplicial complexes. In this paper, we use empirical data on five real-world communication networks to propose a functional form for the distributions of two important simplicial complex structures. We also show that, while the scale-free network growth model with triad formation captures the form of these distributions in networks evolved using the model, the best-fit parameters are significantly different between the real network and its simulated equivalent. An auxiliary contribution is an empirical profile of the two simplicial complexes in these five real-world networks.

Keywords: Simplicial complex · communication networks · higher-order structures · analytical approximation
1 Background
Complex systems have undergone intense, interdisciplinary study in recent decades, with network science [3,28] having emerged as a viable framework for understanding complexity in domains ranging from economics and finance [15,16,22,23,26], to 'wicked' social problems such as human trafficking and terrorism [21,24,25,40]. Communication networks, as well as many other natural and social networks that are modeled as complex systems, have the scale-free topology in common. The preferential attachment model [8,44] has been suggested as a candidate network evolution or 'growth' model to yield such topologies in complex networks by formalizing the intuition that highly connected
Fig. 1. Illustration of an S* simplicial complex and T* simplicial complex, defined in the text.
nodes increase their connectivity faster than their less connected peers. The degree distribution of such networks exhibits power-law scaling [20]. While early studies in network science tended to be limited to lower-order structures like dyadic links or edges [14,32,34,37] (and later, triangles), a recent and growing body of research has revealed that deep insights can be gained from the systematic study of non-simple networks, multi-layer networks [33] and 'higher-order' structures [46] in simple networks [1,5,13,19,43]. One such higher-order structure that continues to undergo study is a simplicial complex (often just referred to as a 'complex') [4,17,42]. The study of simplicial complexes first took root in mathematics (especially algebraic topology) [6,10,27,30,31], but in the last several years they have found practical applications in network science (as discussed in Related Work). Figure 1 provides a practical example of two such simplicial complexes that have been studied in the literature, especially in theoretical biology and protein interaction networks. Due to space limitations, we do not provide a full formal definition; a good reference is [9], who detailed some of their properties and even proposed centrality measures due to their importance. An S-complex (footnote 1) is defined by a 'central' edge A-B, with one or more triangles sharing that edge. A T-complex is similar, but the central unit is a triangle (A-B-C). Furthermore, non-central (or peripheral) triangles in a T-complex should not also participate in quads with the central triangle, i.e., given central triangle A-B-C and peripheral triangle A-B-V3, there should be no link between V3 and C in a valid T-complex. As we detail subsequently, the adjacency factor of either an S*- or T*-complex is the number of triangles flanking the central structure (an edge or triangle, respectively). Simplicial complexes have been widely used to analyze aspects of diverse multilayer systems, including social relations [45], social contagion [35], protein interaction [39], linguistic categorization [12], and transportation [29]. New measurements, such as simplicial degree [39], simplicial-degree-based centralities [9,38], and random walks [36] have all been proposed to not only measure the
1. Technically, we refer to these in this paper as S*- and T*-complexes, with the * indicating that we are considering the maximal definition of the complex, e.g., an S*-complex is not a strict sub-graph of another S-complex.
relevance of a simplicial community and the quality of higher-order connections, but also the dynamical properties of simplicial networks. However, to the best of our knowledge, the distributional properties of such complexes, especially in the context of communication networks, have not been studied so far. A methodology for conducting such studies has also been lacking. Therefore, given the growing recognition that these two structures play an important role in real networks, and with this brief background in place, we propose to investigate the following research questions (RQs):

RQ1: In real-world communication networks, what are the respective distributions of S*- and T*-complexes? Can good functional fits be found for these distributions?

RQ2: Can (and to what extent) the scale-free network growth model (with triad formation) accurately capture these distributions? Or are additional parameters and steps (beyond triad formation) needed to model these higher-order structural properties in real-world networks?
2 Methodology
Since our primary goal in this paper is to understand whether (and to what extent) the scale-free network growth (with triad formation) model can accurately and empirically capture the two simplicial complexes described in the introduction, we first briefly recap the details of the growth model below. Full details are provided in [18].
2.1 Scale-Free Network Growth with Triad Formation
Networks with the power-law degree distribution have been classically modeled by the scale-free network model of Barabási and Albert (BA) [2]. In the original BA model, the initial condition is a network with n0 nodes. In each growth timestep, an incoming node v is connected using m edges to existing nodes in the network. The connections are determined using preferential attachment (PA), wherein an edge between v and another node w in the network is established with probability proportional to the degree of w. The growth model informally described above is known to generate a network with the power-law degree distribution; however, other work has found that such networks lack the triadic properties (including the observed clustering coefficient) of real networks. In order to incorporate such higher-order properties, the growth step in the BA model was extended by [18] to include a triad formation (TF) step. Specifically, given that an edge between nodes v and w was attached using preferential attachment, an edge is also established from v to a random neighbor of w with some probability. If all neighbors of w are connected to v, this step does not apply. In summary, when a 'new' node v comes in, a PA step will first be performed, and then a TF step will be performed with probability Pt (in other words, the probability of PA without TF is 1 − Pt). These two steps are performed repeatedly
per incoming node until m edges are added to the network. Pt is the control parameter in the model. It has been shown to have a linear relationship with the network's average (over all nodes) clustering coefficient (footnote 2). A minimal code sketch of this growth process, together with the adjacency-factor computation of Sect. 2.2, is given at the end of this section.

Table 1. Details on five real communication networks (including average clustering coefficient) used in this paper.
              Number of Nodes  Number of Edges  Average Clustering Coefficient
Email-Enron   36,265           111,179          0.16
Email-DNC     1,866            4,384            0.21
Email-EU      32,430           54,397           0.11
Uni. of Kiel  57,189           92,442           0.04
Phone Calls   36,595           56,853           0.14
0.14
2.2
Adjacency Factor
To understand the distributional properties of the S*- and T*- complexes in the generated network versus real communication networks, we use the notion of the adjacency factor. From the earlier definition, we know that an S*-complex is defined by a ‘central’ edge (A-B in Fig. 1 (a)) that is adjacent to a certain number of triangles. Given an edge in the network, therefore, we denote the adjacency factor (with respect to S*-complexes) as the (maximal) number of triangles adjacent to that edge. For example, the adjacency factor of edge A-B in Fig. 1 (a) would be 3, not 1 or 2. While we record adjacency factors of 0 also3 to obtain a continuous distribution, only cases where adjacency factor is greater than 0 constitute valid S*-complexes. Similarly, the adjacency factor (with respect to T*-complexes) applies to triangles in the network. For every triangle A-B-C (see Fig. 1 (b)), the adjacency factor is the (maximal) number of triangles adjacent to it4 in the T*-complex configuration. If no (non-quad) triangles are adjacent to any of the edges of the central A-B-C triangle, then the adjacency factor is 0, meaning that the triangle does not technically participate in a T*-complex. Hence, depending on whether we are studying and comparing S*- or T*complex distributions, an adjacency factor can be computed for each edge and each triangle (respectively) in the network. We compute a frequency distribution over these adjacency factors to better contrast these higher-order structures in the grown versus the actual networks from a distributional standpoint. 2
A measure of the degree of clustering, the clustering coefficient γv of node v is v )| , where |E(Γv )| is the number of edges that exist between node v’s given by k|E(Γ v (kv −1) 2
3 4
neighbors. These are edges that are not part of any triangles. But subject to the ‘quad’ constraint noted in the Introduction.
20
3
K. Shen and M. Kejriwal
Experiments
Fig. 2. Frequency distribution of adjacency factors (described in the text) of S*complexes in the real Email-EU, Email-DNC, and University of Kiel email networks (distributions for the other two datasets are qualitatively similar) and the generated networks with the PA-TF generated networks sharing the same number of nodes, edges, and average clustering coefficient (Table 1) as their real counterparts. In all plots, the actual adjacency-factors distribution is always shown as a solid line and the corresponding estimated function with best-fit parameters (Eqs. 1) as a dashed line. Both figures are on a log-log (with base 10) scale.
We use five publicly available communication networks in our experiments, including Enron email communication network (Email-Enron5 ), 2016 Democratic National Committee email leak network (Email-DNC6 ), a European research institution email data network (Email-EU7 ), the email network based on traffic data collected for 112 d at University of Kiel, Germany [7], and a mobile communication network [41]. Details are shown in Table 1. These networks are available publicly and some (such as Enron) have been extensively studied, but to our knowledge studies involving simplicial complexes and their properties have been non-existent with respect to these communication networks. While our primary goal here is not to study these properties for these specific networks, a secondary contribution of the results that follow is that they do shed some light on the extent and distribution of such complexes in these networks. In the Introduction, we had introduced two separate (but related) research questions. Below, we discuss both individually, although both rely on a shared set of results. RQ1: For each network, using the numbers of nodes and edges, and the observed average clustering coefficient, we generate 10 networks using the PA-based 5 6 7
http://snap.stanford.edu/data/email-Enron.html. http://networkrepository.com/email-dnc.php. http://networkrepository.com/email-EU.php.
Simplicial Complex Distributions
21
Fig. 3. Frequency distribution of adjacency factors (described in the text) of T*complexes in the real Email-EU, Email-DNC, and University of Kiel email networks (distributions for the other two datasets are qualitatively similar). The actual adjacency-factors distribution is shown as a solid line, and the corresponding estimated function with best-fit parameters (Eqs. 2) as a dashed line.
growth model (with TF). We obtain the frequency distributions (normalized to resemble a probability distribution) of adjacency factors of T*- and S*-complexes in both the real and generated networks, and visualize these distributions in8 Fig. 2 and 3. Besides the direct comparison between the distribution curves, the figures suggest two functions that could fit the distributions (for the S*- and T*-complexes respectively): fS∗ (x; a, b, c) = c(bx−a )log x fT ∗ (x; μ, λ, σ) =
λ λ (2μ+λσ2 −2x) μ + λσ 2 − x √ e2 ) erf c( 2 2σ
(1) (2)
∞ 2 where erf c(x) = √2π x e−t dt. Both functional fits were discovered empirically using the Enron dataset as a ‘development’ set; however, as we show in response to RQ2, the functions fit quite consistently for all five datasets (but with different parameters, of course), although the first function diverges after a point (when the long tail begins). A theoretical basis for the functions is an interesting open question. We note that the second function is an Exponentially modified Gaussian (EMG) distribution, which is an important and general class of models for capturing skewed distributions. It has been broadly studied in mathematics, and has found empirical applications as well [11]. RQ2: We tabulate the best-fit parameters for each real world network, and the generated networks, in Table 2 and 3. For the real world networks, there 8
The differences between generated networks corresponding to the same real network were found to be very minor, so we just show one such network (per real network) in the Fig. 2. However, subsequently described statistical analyses make use of all the generated networks.
22
K. Shen and M. Kejriwal
Table 2. Best-fit parameter estimates for Eq. 1 in both the real / generated network. The MND and reference baseline is described in the text. a / a
b / b
c / c
M N D / M N D / ref. M N D
Email-Enron 0.25/0.82 0.75/0.59 0.19/0.53 0.68/0.79/0.71 Email-DNC
0.07/1.25 0.50/0.15 0.18/0.85 0.64/0.33/0.71
Email-EU
0.06/1.79 0.33/0.09 0.33/0.92 0.67/0.34/0.74
Uni. of Kiel
0.16/1.35 0.15/0.02 0.67/0.97 0.49/0.47/0.55
Phone Calls
0.65/1.78 0.44/0.11 0.55/0.90 0.56/0.28/0.84
Table 3. Best-fit parameter estimates for Eq. 2 in both the real / generated network. The MND and reference baseline are described in the text.

              λ / λ′      μ / μ′       σ / σ′      MND / MND′ / ref. MND
Email-Enron   0.02/0.21   6.82/0.87    5.08/0.79   0.37/0.83/0.91
Email-DNC     0.07/0.76   10.86/0.00   5.06/0.00   0.33/0.53/0.90
Email-EU      0.05/1.59   13.81/0.00   7.58/0.00   0.70/0.84/0.90
Uni. of Kiel  0.03/3.13   0.37/0.00    0.57/0.00   0.68/1.07/0.74
Phone Calls   0.43/1.42   0.00/0.00    0.00/0.00   0.44/0.79/0.76
is only one set of best-fit estimates. For the generated networks there are ten best-fit estimates per parameter (since we generate 10 networks per real-world network), for which we report the average in the table. We also computed a 2-tailed Student's t-test and found that, for all parameters and all networks, the generated networks' (averaged) parameter is significantly different from the corresponding real network's best-fit parameter at the 99% level. This suggests, intriguingly, that despite the distributional similarities between Fig. 2 (a) and (b) (i.e., between the real and generated networks), the best-fit parameters are significantly different in both cases. Of course, this does not answer the question as to whether the functions that we empirically discovered and suggested in Eqs. 1 and 2 are good approximations or models for the actual distributions. To quantify such a 'goodness of fit' between an actual frequency distribution curve (whether for the real network or the generated networks) and the curve obtained by using the models suggested in Eq. 1 or 2 (with best-fit parameter estimates), we compute a metric called Mean Normalized Deviation (MND). This metric is modeled closely after the root mean square error (RMSE) metric. Given an actual curve f and a modeled curve f′, defined on a common support (footnote 9) (x-axis) X = {x1, x2, ..., xn}, the MND is given by:

MND(f, f′, X) = (1/|X|) Σ_{x ∈ X} |f(x) − f′(x)| / f(x)    (3)

9. In our case, this is simply the set of adjacency factors.
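A direct implementation sketch of Eq. (3) follows; f and f_prime stand for the empirical curve f and the fitted model f′, sampled on the common support X.

```python
import numpy as np

def mnd(f, f_prime):
    """Eq. (3): Mean Normalized Deviation of the modeled curve f' from
    the actual curve f, both sampled on the common support X."""
    f, f_prime = np.asarray(f, float), np.asarray(f_prime, float)
    return np.mean(np.abs(f - f_prime) / f)

# The reference baseline described next is the horizontal curve y = c,
# so its MND is mnd(f, np.full_like(f, c)) for a suitably chosen constant c.
```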
Note that the lower the MND, the better f′ fits f on support X. The MND can never be negative for a positive function f, but it has no upper bound. Hence, a reference is needed. Since we are not aware of any other candidate functional fits for the simplicial complex distributions in the literature, we use a simple (but functionally effective) baseline, namely the horizontal curve y = c, where c is a constant that is selected to roughly coincide with the long tail of the corresponding real network's distribution. In Table 2, we report not just the MNDs of the real and grown networks, but also the corresponding reference MND. Because of the significant long tails in Fig. 2, this MND is already expected to be low. We find, however, that the reference fits the actual distributions better than our proposed models (through a lower MND) in only three cases for the generated networks (over both equations; see footnote 10) and in none for the real networks, despite being optimized to almost coincide with the long tail. Interestingly, Eq. 2 has (much) lower MND scores for the real networks compared to the grown networks, as well as the reference function. As we noted earlier, Eq. 1 did not seem to capture the long tail accurately. We hypothesize that a piecewise function, where Eq. 1 is only used for modeling the short tail of the S*-complex frequency distribution, may be a better fit. In all cases, investigating the theory of this phenomenon is a promising area of investigation for complex systems research.
Fig. 4. The best-fit parameters a and λ in the estimated functions of S*- (left) and T*-complexes (right) in generated networks with different numbers of nodes (N) and edges (M). P denotes the probability of the TF step.
10. Specifically, on the Eq. 1 model, the MND′ (average over generated networks) is higher for Email-Enron than for the reference; on the Eq. 2 model, the MND′ is higher for both Uni. of Kiel and Phone Calls.
In our exploration, we further visualized the relationship between the estimated functions' parameters for the two simplicial complex distributions and the configuration setting P of the generated network in Fig. 4. Intriguingly, despite variations in the configuration of the generated networks, the best-fit estimates seem to exhibit convergence behavior. Specifically, as different settings for the probability of TF steps in network generation are applied, a parameter such as λ in the T*-complex distribution estimate tends to stabilize towards a consistent value. This observed consistency may hint at some inherent properties of the two higher-order structures. Such invariance across diverse network configurations not only underscores the potential utility and reliability of the parameter estimation but also points towards deeper, fundamental characteristics of these structures.
4 Conclusion
Simplicial complexes have become important in the last several years for modeling and reasoning about higher-order structures in real networks. Many questions remain about these structures, including whether they are captured properly by existing (and now classic) growth models. In this paper, we showed that, for two well-known complexes, the PA-model with triad formation captures the distributional properties of the complexes, but the best-fit parameters are significantly different between the grown networks and the real communication networks. It remains an active area of research to better understand the theoretical underpinnings of our proposed functional fits for the simplicial complex distributions, and also to deduce what could be ‘added’ to the growth model to bring its parameters into alignment with the real-world network. We are also investigating the properties of other growth models with respect to accurately capturing these distributions. Finally, understanding the real-world phenomena modeled by these complexes, which are fairly common motifs in all five networks we studied, continues to be an interesting research avenue in communication (and other complex) systems.
References
1. Albert, R., Barabási, A.L.: Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47–97 (2002)
2. Barabási, A.L., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509–512 (1999)
3. Barabási, A.L., et al.: Network Science. Cambridge University Press (2016)
4. Barbarossa, S., Sardellitti, S., Ceci, E.: Learning from signals defined over simplicial complexes. In: 2018 IEEE Data Science Workshop (DSW), pp. 51–55. IEEE (2018)
5. Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., Hwang, D.U.: Complex networks: structure and dynamics. Phys. Rep. 424(4), 175–308 (2006)
6. Costa, A., Farber, M.: Random simplicial complexes. In: Callegaro, F., Cohen, F., De Concini, C., Feichtner, E.M., Gaiffi, G., Salvetti, M. (eds.) Configuration Spaces. SIS, vol. 14, pp. 129–153. Springer, Cham (2016)
7. Ebel, H., Mielsch, L.I., Bornholdt, S.: Scale-free topology of e-mail networks. Phys. Rev. E 66, 035103 (2002)
8. Eisenberg, E., Levanon, E.Y.: Preferential attachment in the protein network evolution. Phys. Rev. Lett. 91(13), 138701 (2003)
9. Estrada, E., Ross, G.J.: Centralities in simplicial complexes. Applications to protein interaction networks. J. Theoret. Biol. 438, 46–60 (2018)
10. Faridi, S.: The facet ideal of a simplicial complex. Manuscripta Math. 109(2), 159–174 (2002)
11. Foley, J.P., Dorsey, J.G.: A review of the exponentially modified Gaussian (EMG) function: evaluation and subsequent calculation of universal data. J. Chromatogr. Sci. 22(1), 40–46 (1984)
12. Gong, T., Baronchelli, A., Puglisi, A., Loreto, V.: Exploring the roles of complex networks in linguistic categorization. Artif. Life 18(1), 107–121 (2011)
13. Guilbeault, D., Becker, J., Centola, D.: Complex contagions: a decade in review. In: Lehmann, S., Ahn, Y.-Y. (eds.) Complex Spreading Phenomena in Social Systems. CSS, pp. 3–25. Springer, Cham (2018)
14. Hagberg, A., Swart, P., S Chult, D.: Exploring network structure, dynamics, and function using NetworkX. Tech. rep., Los Alamos National Lab. (LANL), Los Alamos, NM (2008)
15. Hidalgo, C.A.: Economic complexity theory and applications. Nat. Rev. Phys. 3(2), 92–113 (2021)
16. Hidalgo, C.A., Hausmann, R.: The building blocks of economic complexity. Proc. Natl. Acad. Sci. 106(26), 10570–10575 (2009)
17. Hofmann, S.G., Curtiss, J., McNally, R.J.: A complex network perspective on clinical science. Perspect. Psychol. Sci. 11(5), 597–605 (2016)
18. Holme, P., Kim, B.: Growing scale-free networks with tunable clustering. Phys. Rev. E 65(2), 026107 (2002)
19. Iacopini, I., Petri, G., Barrat, A., Latora, V.: Simplicial models of social contagion. Nat. Commun. 10(1), 1–9 (2019)
20. Jeong, H., Néda, Z., Barabási, A.L.: Measuring preferential attachment in evolving networks. EPL (Europhys. Lett.) 61(4), 567 (2003)
21. Kejriwal, M.: Link prediction between structured geopolitical events: models and experiments. Front. Big Data 4, 779792 (2021)
22. Kejriwal, M.: On using centrality to understand importance of entities in the Panama Papers. PLoS ONE 16(3), e0248573 (2021)
23. Kejriwal, M., Dang, A.: Structural studies of the global networks exposed in the Panama Papers. Appl. Netw. Sci. 5(1), 1–24 (2020)
24. Kejriwal, M., Gu, Y.: Network-theoretic modeling of complex activity using UK online sex advertisements. Appl. Netw. Sci. 5, 1–23 (2020)
25. Kejriwal, M., Kapoor, R.: Network-theoretic information extraction quality assessment in the human trafficking domain. Appl. Netw. Sci. 4(1), 1–26 (2019)
26. Kejriwal, M., Luo, Y.: On the empirical association between trade network complexity and global gross domestic product. In: International Conference on Complex Networks and Their Applications, pp. 456–466. Springer (2022)
27. Knill, O.: The energy of a simplicial complex. Linear Algebra and its Applications (2020)
28. Lewis, T.G.: Network Science: Theory and Applications. John Wiley & Sons (2011)
29. Lin, J., Ban, Y.: Complex network topology of transportation systems. Transp. Rev. 33(6), 658–685 (2013)
30. Maria, C., Boissonnat, J.-D., Glisse, M., Yvinec, M.: The GUDHI library: simplicial complexes and persistent homology. In: Hong, H., Yap, C. (eds.) ICMS 2014. LNCS, vol. 8592, pp. 167–174. Springer, Heidelberg (2014)
31. Milnor, J.: The geometric realization of a semi-simplicial complex. Ann. Math. 65, 357–362 (1957)
32. Milward, H.B., Provan, K.G.: Measuring network structure. Public Administration 76(2), 387–407 (1998)
33. Mitchison, G., Durbin, R.: Bounds on the learning capacity of some multi-layer networks. Biol. Cybern. 60(5), 345–365 (1989)
34. Motter, A.E., Zhou, C., Kurths, J.: Enhancing complex-network synchronization. EPL (Europhys. Lett.) 69(3), 334 (2005)
35. Pastor-Satorras, R., Castellano, C., Van Mieghem, P., Vespignani, A.: Epidemic processes in complex networks. Rev. Mod. Phys. 87(3), 925 (2015)
36. Schaub, M.T., Benson, A.R., Horn, P., Lippner, G., Jadbabaie, A.: Random walks on simplicial complexes and the normalized Hodge 1-Laplacian. SIAM Rev. 62(2), 353–391 (2020)
37. Seidman, S.B.: Network structure and minimum degree. Soc. Netw. 5(3), 269–287 (1983)
38. Serrano, D.H., Gómez, D.S.: Centrality measures in simplicial complexes: applications of topological data analysis to network science. Appl. Math. Comput. 382, 125331 (2020)
39. Serrano, D.H., Hernández-Serrano, J., Gómez, D.S.: Simplicial degree in complex networks. Applications of topological data analysis to network science. Chaos, Solitons Fractals 137, 109839 (2020)
40. Skaburskis, A.: The origin of "wicked problems". Planning Theory Pract. 9(2), 277–280 (2008)
41. Song, C., Qu, Z., Blumm, N., Barabási, A.L.: Limits of predictability in human mobility. Science 327(5968), 1018–1021 (2010)
42. Torres, J.J., Bianconi, G.: Simplicial complexes: higher-order spectral dimension and dynamics. J. Phys. Complexity 1(1), 015002 (2020)
43. Torres, L., Blevins, A.S., Bassett, D.S., Eliassi-Rad, T.: The why, how, and when of representations for complex systems. arXiv preprint arXiv:2006.02870 (2020)
44. Vázquez, A.: Growing network with local rules: preferential attachment, clustering hierarchy, and degree correlations. Phys. Rev. E 67, 056104 (2003)
45. Wang, D., Zhao, Y., Leng, H., Small, M.: A social communication model based on simplicial complexes. Phys. Lett. A 384(35), 126895 (2020)
46. Xu, J., Wickramarathne, T.L., Chawla, N.V.: Representing higher-order dependencies in networks. Sci. Adv. 2(5), e1600028 (2016)
A Dynamic Fitting Method for Hybrid Time-Delayed and Uncertain Internally-Coupled Complex Networks: From Kuramoto Model to Neural Mass Model

Zhengyang Jin1,2(B)

1 University of Sussex, Brighton BN1 9RH, UK
[email protected] 2 Institute for Infocomm Research (I2R), A*STAR, Singapore 138632, Singapore
Abstract. The human brain, with its intricate network of neurons and synapses, remains one of the most complex systems to understand and model. This study presents a dynamic fitting method for hybrid time-delayed and uncertain internally-coupled complex networks. Specifically, the research focuses on integrating a Neural Mass Model (NMM), the Jansen-Rit Model (JRM), with the Kuramoto model; by utilizing real human brain structural data from Diffusion Tensor Imaging (DTI), as well as functional data from Electroencephalography (EEG) and Magnetic Resonance Imaging (MRI), the study extends the above two models into a more comprehensive brain-like model. This multimodal model enables the simultaneous observation of frequency variations, synchronization states, and simulated electrophysiological activities, even in the presence of internal coupling and time delays. A parallel fast heuristic algorithm serves as the global optimization method, facilitating rapid convergence to a stable state that closely approximates real human brain dynamics. The findings offer a robust computational tool for neuroscience research, with the potential to simulate and understand a wide array of neurological conditions and cognitive states. This research not only advances our understanding of complex neural dynamics but also opens up possibilities for future interdisciplinary studies that further refine or expand the current model.

Keywords: Complex networks · Computational neuroscience · Dynamic model fitting · Neural mass model · Kuramoto model
1 Introduction The human brain stands as an intricate and dynamic network, comprised of billions of neurons interconnected by trillions of synapses. These neural components engage in complex electrochemical signaling mechanisms, orchestrating a symphony of interactions. This colossal and labyrinthine architecture is not merely responsible for rudimentary physiological functions such as respiration and cardiac rhythm; it also underpins © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 H. Cherifi et al. (Eds.): COMPLEX NETWORKS 2023, SCI 1144, pp. 27–38, 2024. https://doi.org/10.1007/978-3-031-53503-1_3
advanced cognitive activities including memory, emotion, decision-making, and creative thought [1]. Consequently, the human brain can be conceptualized as a multi-layered, multi-scaled, and highly adaptive complex system. Its operational intricacies continue to pose some of the most formidable challenges in disciplines ranging from neuroscience to computational science [2]. Owing to the brain's staggering complexity and dynamic nature, scientists have been in relentless pursuit of effective models and methodologies to delineate and comprehend its operational mechanics. Among these, the Neural Mass Model (NMM) offers a valuable framework for investigating brain dynamics at macroscopic or mesoscopic scales [3]. This model simplifies the intricate web of interacting neurons into one or multiple 'neural masses,' thereby rendering the analysis of complex brain networks more tractable [4]. The NMM is particularly germane to the interpretation of neuroimaging data such as Electroencephalography (EEG) and Functional Magnetic Resonance Imaging (fMRI), as these modalities typically capture the collective activity of neuronal assemblies rather than the behavior of individual neurons [5]. Utilizing the Neural Mass Model enables researchers to gain deeper insights into fundamental characteristics of brain network dynamics, such as synchronization, rhythm generation, and information propagation [6]. However, it is worth noting that the model's reductive nature may limit its capacity to capture more nuanced neural activities or intricate network structures [3]. Concurrently, another pivotal framework for understanding brain network interactions is encapsulated in the concept of "Communication through Coherence" (CTC) [7]. In contrast to the Neural Mass Model, which primarily focuses on macroscopic electrical activities, CTC emphasizes the mechanisms by which neurons or neural masses achieve effective information transfer through coherent oscillatory activities [8]. Within this paradigm, coherent neural oscillations are posited as a conduit through which distantly located neural masses can transiently "lock" into specific phase relationships, thereby facilitating efficient information exchange. This coherence is not confined to oscillations at identical frequencies but can manifest across different frequency bands, engendering so-called "nested" oscillations. Such a multi-layered structure of coherence furnishes the brain with a flexible and efficient communication platform, enabling self-organization and adaptability across varying cognitive and behavioral states. Thus, CTC and the Neural Mass Model can be viewed as complementary tools, collectively illuminating the multifaceted dynamics of complex brain networks [9]. Worthy of mention in the same breath as models that describe coherence is the Kuramoto model, a mathematical construct frequently employed to describe the synchronization of large networks of oscillators [10]. This model employs a set of succinct differential equations to simulate the phase dynamics of oscillators, such as neurons or neural masses, thereby capturing synchronization phenomena within complex networks [11]. Numerous studies based on the Kuramoto model have ventured into exploring interactions between different brain regions and how these interactions influence or reflect various cognitive states and functions [12, 13].
In synergy with CTC and the Neural Mass Model, the Kuramoto model offers a quantitative approach to assessing brain network coherence, particularly when considering network topology and oscillator interactions. In the present study, we propose to integrate structural data extracted from real human brains, such as Diffusion Tensor Imaging (DTI), along with individual information data
like Electroencephalography (EEG) and Magnetic Resonance Imaging (MRI), into the Neural Mass Model. This integration extends the model into a comprehensive Brain Mass Model (BMM) that accentuates its capacity to represent biophysical and physiological characteristics. The Jansen-Rit model is selected as the Neural Mass Model template to be applied to real human brain data, offering a mesoscopic description of physiological signals from neural clusters, as well as capturing influences from pathological conditions, pharmacological interventions, and external stimuli. Furthermore, we employ the same real human brain structural data to augment the Kuramoto model, aligning its architecture with that of the BMM. This enables oscillator alignment at the network node level. We then attempt dynamic fitting of the extended Jansen-Rit model using the Kuramoto model, thereby facilitating simultaneous observation of mesoscopic electrophysiological signals and regional brain synchronization. In essence, this research enables multi-attribute observation of network states through dynamic fitting of complex systems, paving the way for simulating oscillatory coherence in the neural system following rhythmic or state changes. Given that both systems are mathematically expressed through differential equations (with the Kuramoto model outputting phases and the Jansen-Rit model outputting simulated local field potentials), the fitting process necessitates higher-order parameters. Considering the high-dimensional and non-convex nature of the task, parallel fast heuristic algorithms are chosen as the global optimization method. Although the objective of this work is model fitting based on real data, it suffices to demonstrate that dynamic fitting of complex network systems can be applied to any potential models of the same category. It allows for the testing of multiple working states and attributes of the architecture under specific thematic data, while also enabling observation of structural changes in one model based on adjustments to the parameters of another. Lastly, we discuss the feasibility of applying this research to simulate the dynamics under various potential oscillatory states within neural systems, as well as the prospects for enhancing the accuracy and speed of the fitting process through the expansion of our original system.
2 Method

2.1 Real Human Brain Data Structure

In the present study, the estimation of structural brain networks was based on Diffusion Tensor Imaging (DTI) data from Cabral's (2014) research [14]. All magnetic resonance scans were conducted on a 1.5 T MRI scanner, utilizing a single-shot echo-planar imaging sequence to achieve comprehensive brain coverage and capture multiple non-linear diffusion gradient directions. To define the network nodes, the brain was parcellated into 90 distinct regions, guided by the Automated Anatomical Labeling (AAL) template [15]. Data preprocessing involved a series of corrections using the Fdt toolbox in the FSL software package ([25], FMRIB), aimed at rectifying image distortions induced by head motion and eddy currents. Probabilistic fiber tracking was executed using the probtrackx algorithm to estimate the fiber orientations within each voxel. For connectivity analysis, probabilistic tractography sampling was performed on fibers passing through each voxel
[16]. This data served as the basis for both voxel-level and region-level analyses. Connectivity between different brain regions was ascertained by calculating the proportion of fibers traversing each region. Ultimately, two 90 × 90 matrices, Cij and Dij, were generated (see Fig. 1). Cij characterizes the connectivity and strength of connections between brain regions, whereas Dij represents the communicative distance between them. To normalize these matrices, all off-diagonal elements were divided by their mean value, setting the mean to one. Dij was also subjected to normalization to adapt to a discrete-time framework. This was accomplished by dividing D by the mean of all values greater than zero in the C matrix, followed by scaling through a simulated unit time and subsequent integer rounding for direct matrix indexing. (Estimation is carried out based on the Euclidean distance between the centroids of the segmented regions [14].)

2.2 Extended Neural Mass Model with Coupling Strength and Time Delay

Neural Mass Models (NMMs) serve as mesoscopic mathematical frameworks designed to capture the dynamic behavior of brain regions or neuronal assemblies. Unlike models that simulate the activity of individual neurons, NMMs aim to understand the integrated behavior of neural systems at a higher level of abstraction [3–5]. They typically employ a set of differential equations to describe the interactions between different types of neuronal populations, such as excitatory and inhibitory neurons, and their responses to external inputs. These models have found extensive applications in the analysis of neuroimaging data, including Electroencephalography (EEG) and Functional Magnetic Resonance Imaging (fMRI), as well as in simulating the dynamic expressions of neurological disorders like epilepsy and Parkinson's disease [17]. One notable instantiation of NMMs is the Jansen-Rit Model (JRM), which has been frequently employed in past research to characterize specific large-scale brain rhythmic activities, such as Delta (1–4 Hz), Theta (4–8 Hz), and Alpha (8–12 Hz) waves. The JRM typically comprises three interconnected subsystems that represent pyramidal neurons, inhibitory interneurons, and excitatory interneurons. These subsystems are linked through a set of fixed connection weights and can receive one or multiple external inputs, often modeled as Gaussian white noise or specific stimulus signals. In previous research, the basic Jansen-Rit model has been extensively described, including its structure, equations and parameter settings [18]. The JRM employs a set of nonlinear differential equations to describe the temporal evolution of the average membrane potentials within each subsystem, their interactions, and their responses to external inputs. In the present study, we extend the foundational single JRM to a network-level system comprising multiple JRMs. Each node in this network communicates based on the AAL brain structure data described in Sect. 2.1; a more vivid depiction can be found in Fig. 1. Given that local circuits are now expanded into large-scale, brain-like network circuits, nodes need to receive signals emitted from other nodes. This inter-node communication is governed by a parameter Kij, which reflects the connectivity across areas. Additionally, considering the influence of inter-regional distances on signal transmission, the signals between nodes are also modulated by a parameter τij, which represents the unit time required for a signal to reach a designated node. As a result, we
Fig. 1. Description of the real human brain structural template. (a) Indicates the brain region location corresponding to each node. (b) and (c) describe the coupling strength between brain regions, while (d) and (e) describe the time delays between them. (f) describes the virtual anatomical structure of the brain network model, as well as how each node is represented as either a Jansen-Rit model or a Kuramoto model.
can extend the original equations to accommodate these additional factors:

ẏ0^i(t) = y3^i(t),
ẏ1^i(t) = y4^i(t),
ẏ2^i(t) = y5^i(t),
ẏ3^i(t) = G_e η_e S( y1^i(t) − y2^i(t) ) − 2η_e y3^i(t) − η_e² y0^i(t),
ẏ4^i(t) = G_e η_e [ p(t) + C2 S( C1 y0^i(t) + Σ_{j=1, j≠i}^{N} K_ij x_j(t − τ_D^{ij}) ) ] − 2η_e y4^i(t) − η_e² y1^i(t),
ẏ5^i(t) = G_i η_i C4 S( C3 y0^i(t) ) − 2η_i y5^i(t) − η_i² y2^i(t).
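A minimal Euler-integration sketch of this extended JRM is given below. The coupling matrix, delays, noise statistics and rate constants are illustrative placeholders (the paper's actual values come from the DTI-derived C and D matrices of Sect. 2.1 and the calibration in Sect. 3), and the node output x_i is assumed to be the pyramidal potential y1 − y2.

```python
import numpy as np

# Illustrative Euler sketch of the extended JRM for N coupled nodes.
N, dt, steps = 90, 1e-3, 10000
Ge, Gi, eta_e, eta_i = 3.25, 22.0, 100.0, 50.0   # gains / rate constants (assumed)
C1, C2, C3, C4 = 135.0, 108.0, 33.75, 33.75      # intra-node connectivity constants
K = 0.1 * np.random.rand(N, N)                   # inter-node coupling K_ij (placeholder)
np.fill_diagonal(K, 0.0)
tau = np.random.randint(1, 50, size=(N, N))      # delays tau_ij in time steps (placeholder)
y = np.zeros((6, N))                             # states y0..y5 for every node
hist = np.zeros((tau.max() + 1, N))              # ring buffer of past node outputs

def S(v, e0=2.5, v0=6.0, r=0.56):                # standard JRM sigmoid
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

for _ in range(steps):
    p = np.random.normal(120.0, 30.0, size=N)    # Gaussian white noise input
    x_delayed = hist[tau, np.arange(N)]          # x_j(t - tau_ij), shape (N, N)
    coupling = (K * x_delayed).sum(axis=1)
    dy = np.empty_like(y)
    dy[0], dy[1], dy[2] = y[3], y[4], y[5]
    dy[3] = Ge * eta_e * S(y[1] - y[2]) - 2 * eta_e * y[3] - eta_e**2 * y[0]
    dy[4] = (Ge * eta_e * (p + C2 * S(C1 * y[0] + coupling))
             - 2 * eta_e * y[4] - eta_e**2 * y[1])
    dy[5] = Gi * eta_i * C4 * S(C3 * y[0]) - 2 * eta_i * y[5] - eta_i**2 * y[2]
    y += dt * dy
    hist = np.roll(hist, 1, axis=0)
    hist[0] = y[1] - y[2]                        # record node output x_i(t)
```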
2.3 Extended Kuramoto Model with Coupling Strength and Time Delay

The Kuramoto model serves as a mathematical framework for describing the collective behavior of coupled oscillators. Proposed by Yoshiki Kuramoto in 1975, the model aims to capture the spontaneous synchronization phenomena observed in groups of coupled oscillators [10]. In its basic form, the Kuramoto model describes N phase oscillators through the following set of ordinary differential equations:

θ̇_i = ω_i + (K/N) Σ_{j=1}^{N} sin(θ_j − θ_i)

Here, θ represents the phase of the oscillators, ω_i denotes the natural frequency of the i-th oscillator, N is the total number of oscillators, and K signifies the coupling strength between the oscillators. When the value of K is sufficiently low, the oscillators within the subsystem are in a weakly coupled state, operating more or less independently. As K increases and reaches a critical value K_c, the oscillators begin to exhibit synchronization. The coherence or order parameter of the Kuramoto model can be described using the following equation:

r e^{iψ} = (1/N) Σ_{j=1}^{N} e^{iθ_j}

In this equation, e^{iθ_j} is a complex number with a modulus of 1. The value of r ranges from 0 to 1, with values closer to 1 indicating a more synchronized system. To adapt the basic Kuramoto model to the current human brain network structure, the original equations can be reformulated as follows:

θ̇_i = ω_i + Σ_{j=1}^{N} K_ij C_ij sin( θ_j(t − τ_ij) − θ_i(t) )
Given that the human brain network structure includes a matrix C_ij representing the connectivity between brain regions, the focus shifts to the influence between connectable network nodes. The global coupling parameter K is replaced by K_ij, eliminating the need for averaging. In this context, the emphasis is on detecting the coherence between two specific oscillators i and j, which can be calculated over a time window T as:

r = (1/T) ∫_0^T e^{i(θ_i(t) − θ_j(t − τ_ij))} dt

By adapting the Kuramoto model in this specialized context, the focus is shifted towards understanding the intricate relationships and synchronization phenomena between specific oscillators. This nuanced approach allows for the exploration of oscillator behavior in a more localized manner, contrasting with broader network models. It opens up avenues for investigating the subtleties of oscillator interactions, which could be particularly useful in specialized applications beyond the scope of traditional brain network models.
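The extended, delay-coupled Kuramoto dynamics and the windowed pairwise coherence can be sketched as follows; the C, K and τ matrices are random placeholders for the DTI-derived structure of Sect. 2.1, and the modulus is taken so that r lies in [0, 1].

```python
import numpy as np

# Sketch of the extended Kuramoto model: Euler integration with per-pair
# couplings K_ij, connectivity C_ij and discrete delays tau_ij, followed by
# the pairwise coherence r over a window. All matrices are placeholders.
N, dt, steps = 90, 1e-3, 20000
omega = 2 * np.pi * np.random.uniform(5, 100, N)   # natural frequencies (rad/s)
C = np.random.rand(N, N); np.fill_diagonal(C, 0.0)
K = 0.5 * np.ones((N, N))
tau = np.random.randint(1, 100, size=(N, N))       # delays in time steps
theta = np.zeros((steps, N))
theta[0] = np.random.uniform(0, 2 * np.pi, N)

for t in range(steps - 1):
    t_del = np.maximum(t - tau, 0)                  # clamp the early history
    theta_delayed = theta[t_del, np.arange(N)]      # theta_j(t - tau_ij)
    drive = (K * C * np.sin(theta_delayed - theta[t][:, None])).sum(axis=1)
    theta[t + 1] = theta[t] + dt * (omega + drive)

def pairwise_coherence(i, j, window):
    # r = (1/T) * integral of exp(i(theta_i(t) - theta_j(t - tau_ij))) dt,
    # with the modulus taken so that r is in [0, 1].
    ts = np.arange(steps - window, steps)
    phase = theta[ts, i] - theta[np.maximum(ts - tau[i, j], 0), j]
    return np.abs(np.exp(1j * phase).mean())
```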
2.4 Dynamic Fitting of the Two Extended Models

As delineated in the preceding sections, the objective of this study is to dynamically fit the Jansen-Rit Model (JRM) and the Kuramoto model within the framework of the AAL human brain network structure. This integration aims to create a multimodal model that allows for the simultaneous observation of frequency variations, synchronization states, and simulated electrophysiological activities. Given that the JRM outputs time-varying electrical signals simulating membrane potentials, while the Kuramoto model outputs the phases of oscillators, a direct fitting of the outputs is not feasible. However, considering that both models yield time-varying signals, we can elevate them to a common output dimension by transforming their time-domain outputs into the frequency domain to obtain the Power Spectral Density (PSD). For a continuous-time signal x(t), the PSD, S(f), can be computed through the Fourier transform as follows:

S(f) = lim_{T→∞} (1/T) | ∫_{−T/2}^{T/2} x(t) e^{−i2πft} dt |²

In our specific case, dealing with discrete signals, we employ a window with a range equal to the maximum value of the delay matrix and calculate the PSD using the periodogram method:

PSD[k] = (1/N) | FFT( x[n] − (1/N) Σ_{j=0}^{N−1} x[j] )[k] |²

Here, FFT denotes the Fast Fourier Transform, and the input to the function is the signal with its DC component removed; the modulus is then squared and normalized. Our optimization strategy synergistically combines the inherent attributes of the models with the characteristics of their operational states, employing a segmented parameter-fitting approach. Given the operational rules of the Kuramoto model, the natural frequencies often have a significant influence on the system's initial state. Therefore, during the fitting process, we use the difference between the x-axis mappings of the peak PSD values of the two models as the initial loss. This initial loss serves as a guiding metric until the PSD peaks of both models closely align in the frequency domain. Subsequently, we proceed to the next phase of parameter adjustment. Given the varying connectivity across nodes, the coupling strength determines the overall energy levels. Accordingly, the coupling strength K for each node is adjusted based on the current PSD differences; in other words, the height of the peak dictates the direction in which K develops. In our practical implementation, considering that the JRM operates under resting parameters and includes Gaussian white noise as input, we opted not to introduce specific stimuli to simulate conditions the human brain might encounter. Therefore, we have not extended our segmented, multimodal parameter-fitting approach to include environment-sensitive adaptive state-switching, which would automatically select the parameters to be fitted based on changes in network states.
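A minimal sketch of the PSD computation and the initial peak-frequency loss described above is given below; the sampling rate and window handling are assumptions, and the two input signals are taken to be equal-length windows of the JRM output and a Kuramoto-derived signal (e.g., sin θ_i).

```python
import numpy as np

# Periodogram with the DC component removed (as in the equation above),
# and a loss on the frequency of the PSD peak, used as the initial guide
# before the coupling strengths K are tuned. Window length is illustrative.
def psd(x):
    x = np.asarray(x, dtype=float)
    spec = np.fft.rfft(x - x.mean())
    return np.abs(spec) ** 2 / len(x)

def peak_frequency_loss(x_jrm, x_kuramoto, fs):
    # Assumes both windows have the same length and sampling rate fs.
    freqs = np.fft.rfftfreq(len(x_jrm), d=1.0 / fs)
    f_jrm = freqs[np.argmax(psd(x_jrm))]
    f_kur = freqs[np.argmax(psd(x_kuramoto))]
    return abs(f_jrm - f_kur)   # initial loss: distance between PSD peaks
```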
3 Results

In this investigation, an initial imperative step involved meticulous parameter optimization for the Jansen-Rit Model (JRM). Given the plethora of tunable parameters inherent to the JRM and our aspiration to operate the model under a unified parameter configuration, this optimization was essential. After exhaustive testing and calibration, we ascertained optimal values for several pivotal parameters to produce outputs within the Alpha rhythm spectrum. Specifically, the Average Excitatory Synaptic Gain (A) was determined to be 3.25 mV, the Average Inhibitory Synaptic Gain (B) was 22 mV, the Average Synapse Number (C) was 135, and the Firing Threshold (v0) was 5.52 mV. Moreover, we scrutinized overarching resolution parameters that influence the model's operation, including the Coupling Gain (G) = 10 and the Mean Velocity (v) = 25 m/s. As depicted in Fig. 2, an optimal parameter configuration was identified to ensure the output remains within the Alpha rhythm domain. Upon the foundation of these optimized parameters, the system was initialized and executed, culminating in a stable output state, as evidenced in the simulation results (see Fig. 3). Since our Kuramoto model simulates all oscillators rotating in a consistent direction, the x-axis coordinate can be used to determine the current operating frequency of most oscillators. Intriguingly, due to the intricate interplay of complex connections and parameters, certain nodes exhibited conspicuously elevated frequencies compared to others. Concurrently, a subset of relatively quiescent nodes manifested diminished activity ranges, contrasting starkly with nodes subjected to copious stimulation. These observations not only corroborate the model's heterogeneity and complexity but also allude to the potential for further refinement through parameter scaling. Subsequently, we embarked on dynamic fitting of the steady-state JRM using the Kuramoto model. The congruence between the two models, as gauged through Power Spectrum Density (PSD), was attained expeditiously, within just 80 fitting segments. Initially, the natural frequencies of the Kuramoto model manifested multiple peaks across a 5–100 Hz range in the PSD. However, as the preliminary fitting phase approached culmination, the Kuramoto model was observed to have transitioned into a frequency operation phase substantially congruent with that of the JRM. Specifically, after 70 fitting segments, the PSD curve closely approximated the target state of the JRM, and the loss descent curve plateaued, indicating that the system had reached a proximate operational state.
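For reference, the reported parameter configuration can be collected in a small config mapping (the dictionary keys are descriptive assumptions for illustration, not the authors' code):

```python
# Optimized JRM parameters as reported above (units as stated in the text).
jrm_params = {
    "A": 3.25,    # average excitatory synaptic gain (mV)
    "B": 22.0,    # average inhibitory synaptic gain (mV)
    "C": 135,     # average synapse number
    "v0": 5.52,   # firing threshold (mV)
    "G": 10.0,    # coupling gain
    "v": 25.0,    # mean velocity (m/s)
}
```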
4 Discussion

This study delves into a method for the dynamic fitting of multimodal complex systems, aiming to dynamically fit the Jansen-Rit Model (JRM) and the Kuramoto model within the framework of the Automated Anatomical Labeling (AAL) human brain network structure. Our results indicate that the optimized parameters for the JRM were effective in simulating Alpha rhythms, and that the Kuramoto model was able to rapidly adapt to the JRM's steady-state output. Our findings corroborate previous studies that have employed
Fig. 2. Exploration of the system output for the Extended Jansen-Rit model. (a) Shows the output of the JRM in 90 brain regions (extracted from a window of 250,000 to 400,000 time units). (b) Displays the correlation maps between the underlying structural connectivity matrix and the FC matrices extracted from all combinations of model parameters, for different frequency bands [18]. (c) Presents the overall PSD (Power Spectral Density) of the model at this stage.
the JRM and Kuramoto models separately to understand brain dynamics [14, 18]. However, our work extends the existing literature by integrating these two models, thereby offering a more comprehensive tool for understanding complex neural dynamics. The successful integration of the JRM and Kuramoto models suggests that it is feasible to create a multimodal model for the simultaneous observation of frequency variations, synchronization states, and simulated electrophysiological activities. This could pave the way for more nuanced models that better capture the multifaceted nature of brain dynamics. The ability to fit the models dynamically has significant implications for neuroscience research, potentially offering a robust computational tool for simulating and understanding various neurological conditions or cognitive states. This could be particularly beneficial for the development of targeted therapeutic interventions. One salient limitation of our study resides in the utilization of standardized parameter settings for the JRM, which may not fully encompass the extensive spectrum of neural dynamics. Future investigations could delve into the ramifications of parameter
Fig. 3. The fitting process of the Power Spectrum and the decline curve of the loss. (a) represents the parameter initialization stage, (b) marks the end of the first fitting stage, (c) indicates when the fitting stage reaches a steady state, and (d) shows the loss curve, which represents the difference between the two models' PSDs.
variations. Moreover, our research did not incorporate specific stimuli to emulate conditions that the human brain might encounter, leaving room for future exploration in this domain. For instance, the introduction of simulations for conditions such as epilepsy [20], Parkinson's disease [21], Attention Deficit Hyperactivity Disorder (ADHD) [22], and cognitive function assessments [23] could be particularly enlightening. On another front, achieving higher fidelity in model fitting remains an imperative research objective. Given the rich output waveforms generated by the JRM, it would be myopic to confine ourselves to standard Kuramoto oscillators. The incorporation of limiters within the oscillators emerges as a promising avenue worth exploring. Alternatively, we could contemplate expanding each node into a fully connected (or otherwise interconnected) multi-oscillator Kuramoto sub-network, thereby potentially enriching the system's output [24]. In this manner, the utilization of dual complex systems to study specific neural activities becomes feasible. Not only does this allow for the analysis of synchronization phenomena under such activities, but it also enables a multi-faceted examination that could yield more comprehensive insights.
References

1. Sporns, O.: Networks of the Brain. MIT Press, Cambridge, Mass. (2011)
2. Jirsa, V.K., Haken, H.: Field theory of electromagnetic brain activity. Phys. Rev. Lett. 77(5), 960–963 (1996)
3. Deco, G., Jirsa, V.K., McIntosh, A.R.: Emerging concepts for the dynamical organization of resting-state activity in the brain. Nat. Rev. Neurosci. (2011)
4. David, O., Friston, K.J.: A neural mass model for MEG/EEG. NeuroImage 20(3), 1743–1755 (2003)
5. Friston, K.J.: Modalities, modes, and models in functional neuroimaging. Science 326(5951), 399–403 (2009)
6. Breakspear, M.: Dynamic models of large-scale brain activity. Nat. Neurosci. 20(3), 340–352 (2017)
7. Fries, P.: A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends Cogn. Sci. 9(10), 474–480 (2005)
8. Fries, P.: Rhythms for cognition: communication through coherence. Neuron 88(1), 220–235 (2015)
9. Canolty, R.T., Knight, R.T.: The functional role of cross-frequency coupling. Trends Cogn. Sci. 14(11), 506–515 (2010)
10. Kuramoto, Y.: Self-entrainment of a population of coupled non-linear oscillators. In: International Symposium on Mathematical Problems in Theoretical Physics, pp. 420–422. Springer, Berlin, Heidelberg (1975). https://doi.org/10.1007/BFb0013365
11. Strogatz, S.H.: From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Physica D 143(1–4), 1–20 (2000)
12. Cabral, J., Hugues, E., Sporns, O., Deco, G.: Role of local network oscillations in resting-state functional connectivity. Neuroimage 57(1), 130–139 (2011)
13. Deco, G., Jirsa, V.K., McIntosh, A.R.: Emerging concepts for the dynamical organization of resting-state activity in the brain. Nat. Rev. Neurosci. 12(1) (2011)
14. Cabral, J., et al.: Exploring mechanisms of spontaneous functional connectivity in MEG: how delayed network interactions lead to structured amplitude envelopes of band-pass filtered oscillations. Neuroimage 90, 423–435 (2014)
15. Tzourio-Mazoyer, N., et al.: Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 15(1), 273–289 (2002)
16. Behrens, T.E.J., et al.: Probabilistic diffusion tractography with multiple fibre orientations: what can we gain? Neuroimage 34(1), 144–155 (2007)
17. Wendling, F., et al.: Epileptic fast activity can be explained by a model of impaired GABAergic dendritic inhibition. Eur. J. Neurosci. 15(9), 1499–1508 (2002)
18. Sanchez-Todo, R., et al.: Personalization of hybrid brain models from neuroimaging and electrophysiology data (2018)
19. Spiegler, A., et al.: Selective activation of resting-state networks following focal stimulation in a connectome-based network model of the human brain. eNeuro (2016)
20. Jirsa, V.K., et al.: On the nature of seizure dynamics. Brain 137(8) (2014)
21. Rubin, J.E., Terman, D.: High frequency stimulation of the subthalamic nucleus eliminates pathological thalamic rhythmicity in a computational model. J. Comput. Neurosci. 16(3), 211–235 (2004)
22. Castellanos, F.X., Proal, E.: Large-scale brain systems in ADHD: beyond the prefrontal-striatal model. Trends Cogn. Sci. 16(1), 17–26 (2012)
23. Deco, G., Kringelbach, M.L.: Great expectations: using whole-brain computational connectomics for understanding neuropsychiatric disorders. Neuron 84(5) (2014)
24. Bauer, L.G., et al.: Quantification of Kuramoto coupling between intrinsic brain networks applied to fMRI data in major depressive disorder. Front. Comput. Neurosci. 16 (2022)
25. Smith, S.M., et al.: Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage 23, S208–S219 (2004). https://fsl.fmrib.ox.ac.uk/fsl/fslwiki
Human Behavior
An Adaptive Network Model for Learning and Bonding During a Varying in Rhythm Synchronous Joint Action

Yelyzaveta Mukeriia1, Jan Treur1(B), and Sophie Hendrikse2,3,4

1 Social AI Group, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
[email protected]
2 Department of Clinical Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
3 Institute of Psychology Research, University of Amsterdam, Amsterdam, The Netherlands
4 Methodology and Statistics Research Unit, Institute of Psychology, Faculty of Social and Behavioural Sciences, Leiden University, Leiden, The Netherlands
Abstract. This paper presents an adaptive network model in the context of joint action and social bonding. Explorations of mechanisms for mental and social network models are presented, specifically focusing on adaptation by bonding based on homophily and Hebbian learning during joint rhythmic action. The paper provides a comprehensive explanation of these concepts and their role in controlled adaptation within illustrative scenarios.
1 Introduction

The phenomenon of human synchronization and social bonding has received much interest from researchers in various scientific disciplines for decades. From an early age, humans exhibit a remarkable ability to synchronize their actions, thoughts, and emotions with others, resulting in the formation of strong social bonds that shape our interactions and relationships. This innate capacity for synchronization and social bonding plays a vital role in our social development, cooperation, and overall well-being. To model these various dynamics of human interaction in real-world scenarios, it is essential to consider both cognitive processes in the brain and interpersonal relationships within social networks. The network modeling approach adopted in this paper serves as a conceptual tool for understanding the structure and dynamics of complex adaptive processes [2]. The proposed network model integrates two fundamental principles: Hebbian learning for progressive changes in connections between mental states (modeling plasticity in the brain) and the homophily principle for adaptive interpersonal connectedness (modeling adaptive social networks) [3]. The focus of this paper is on modeling the adaptation and bonding process of two individuals who are participating in a tapping activity, where the adaptation happens through a motor control skill known as sensorimotor synchronization (SMS). Broadly speaking, SMS is the temporal coordination of an action with a predictable external event, for instance an external rhythm such as a sequence of musical beats
[7]. It commonly occurs within social contexts, where individuals synchronize their movements with sequences produced by others. Instances of synchronized movement in response to live musical performances, such as nodding, clapping, or dancing, offer numerous examples of SMS. When individuals engage with music, they naturally develop temporal expectations shaped by the underlying structural regularities, notably the musical beat, which represents a recurring pulse. These regularities often evoke a strong inclination to generate movements that synchronize with the rhythmic patterns [1]. During such joint actions, the adaptations taking place involve both connections within internal mental networks and connections within social networks between persons. Within the introduced adaptive network model, the former type of adaptation is addressed by Hebbian learning and the latter by bonding based on homophily. Hebbian learning, a synaptic plasticity principle proposed by Donald Hebb in 1949, posits that when a neuron A consistently and causally contributes to activating neuron B, their connection strengthens [4]. In simpler terms, Hebbian learning explains how neurons enhance their communication when they frequently fire together, emphasizing the importance of one neuron's consistent and causal involvement in the firing of another. The homophily adaptation principle in interpersonal relationships holds that the level of bonding between individuals is influenced by their degree of similarity. People who share more similarities tend to form stronger connections, as observed, for instance, in children's interactions, where shared demographic characteristics often lead to more frequent friendships and playgroup engagements [6]. To apply these adaptation principles effectively, it is essential to grasp the model's structure, which involves multiple levels of adaptation. The foundational mental network model operates at the lowest level, initially with given weights for the connections between states. To make it more realistic, we introduce a first-order adaptation level, incorporating principles like Hebbian learning and homophily bonding. These principles themselves have default parameters, which can be made dynamic by including second-order adaptation. In this paper, second-order adaptation is applied to model adaptive adaptation speed, both for the Hebbian learning and for the bonding by homophily. Such second-order adaptation mechanisms are also referred to as metaplasticity, or 'plasticity of the plasticity' [8].
2 The Self-modeling Network Modeling Approach Used

The modeling approach used is network-oriented and addresses adaptation of networks by self-modeling networks [5]. Within a network model, at each time point t, the impact of state X on state Y is depicted through their connection as

impact_{X,Y}(t) = ω_{X,Y} X(t)

where ω_{X,Y} is the weight of the connection from X to Y and X(t) is the activation value of X at time t. There can be one or multiple connections impacting a single state; to aggregate them, combination functions c_Y(..) are used:

aggimpact_Y(t) = c_Y(impact_{X_1,Y}(t), ..., impact_{X_k,Y}(t)) = c_Y(ω_{X_1,Y} X_1(t), ..., ω_{X_k,Y} X_k(t))
Lastly, the speed factor η_Y determines the rate at which aggimpact_Y(t) is exerted on state Y. Thus, the full difference and differential equations are the following [3]:

Y(t + Δt) = Y(t) + η_Y [aggimpact_Y(t) − Y(t)] Δt
dY(t)/dt = η_Y [aggimpact_Y(t) − Y(t)]

The three types of network characteristics ω_{X,Y}, c_Y(..), η_Y defining a temporal-causal network can be constant, in which case they model a nonadaptive network. In the realistic case, however, they also change over time, and the network is then adaptive. Self-modeling networks are an easy way to address adaptivity, by adding states (called self-model states) to the network that represent these characteristics. For example, a self-model state W_{X,Y} can be added to represent a connection weight ω_{X,Y}, or a self-model state H_Y can be used to represent a speed factor η_Y. This can also be done in a higher-order manner; for example, a second-order self-model state H_{W_{X,Y}} can be used to represent the adaptation speed of W_{X,Y}; see [5] for more details.

The following combination functions were used [11]. The identity function id(.) is the combination function that is usually used for states that have an impacting link from only one state; its numerical representation is c_Y(V) = id(V) = V. The scaled maximum function smax_λ(...) is used for a smooth transition from one value to another; in this paper, it is only applied to a self-model state for a combination function. Numerically, it is represented as c_Y(V_1, ..., V_k) = smax_λ(V_1, ..., V_k) = max(V_1, ..., V_k)/λ, where λ is the scaling factor. The advanced logistic sum function alogistic_{σ,τ}(...) is usually used for states that have impacting connections from multiple states, and can be expressed as

c_Y(V_1, ..., V_k) = alogistic_{σ,τ}(V_1, ..., V_k) = [1/(1 + e^{−σ(V_1 + ... + V_k − τ)}) − 1/(1 + e^{στ})] (1 + e^{−στ})
where σ is a steepness parameter and τ a threshold parameter [10]. The Hebbian function hebb_μ(...) is used for adaptation based on Hebbian learning and is expressed for self-model state W_{X,Y} as

c_{W_{X,Y}}(V_1, V_2, W) = hebb_μ(V_1, V_2, W) = V_1 V_2 (1 − W) + μW

where μ ∈ [0, 1] is the persistence factor. The simple linear homophily function slhomo_{α,τ}(...) is applied to the homophily adaptation states W_{X,Y}. This function has two parameters, an amplification factor α and a tipping point parameter τ, and can be expressed as

c_{W_{X,Y}}(V_1, V_2, W) = slhomo_{α,τ}(V_1, V_2, W) = W + αW(1 − W)(τ − |V_1 − V_2|)

The step-once function steponce_{α,β}(...) is used to model changing context factors and is defined by two parameters: the start point α and the end point β for the simulation time t during which the state needs to be active. This function is applied in cases where a
state is not needed throughout the whole simulation. In mathematical terms, the function yields a value of 1 if α ≤ t ≤ β, else 0 [10]. The step-modulo function stepmod_{ρ,δ}(...) is used to model behavior with constant intervals, using two parameters: the repeated time duration ρ and the transition point δ. The function yields 0 if mod(t, ρ) < δ, else 1, for time t. Lastly, the beats-per-minute function bpm_{γ,ϕ}(...) generates a pulse-like pattern based on the given values for the number of beats per minute γ and the duration of each beat ϕ. Numerically, it yields 1 if mod(t, 400/γ) < ϕ, else 0, where 400 is used as the determinant for the number of beats in a unit of time in the model. The higher this value, the greater the amount of time allocated for adaptation and learning processes to occur and unfold effectively.
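For concreteness, a minimal Python sketch of these combination functions is given below, following the formulas above; the actual simulations use the dedicated modeling environment of [10, 11], so this code is only an illustrative assumption:

```python
import numpy as np

def identity(v):
    # id(V) = V, for states with a single incoming connection.
    return v

def smax(values, lam):
    # Scaled maximum: max(V1, ..., Vk) / lambda.
    return max(values) / lam

def alogistic(values, sigma, tau):
    # Advanced logistic sum with steepness sigma and threshold tau.
    s = sum(values)
    return ((1.0 / (1.0 + np.exp(-sigma * (s - tau)))
             - 1.0 / (1.0 + np.exp(sigma * tau))) * (1.0 + np.exp(-sigma * tau)))

def hebb(v1, v2, w, mu):
    # Hebbian learning with persistence factor mu in [0, 1].
    return v1 * v2 * (1.0 - w) + mu * w

def slhomo(v1, v2, w, alpha, tau):
    # Simple linear homophily: the bond W grows when |v1 - v2| < tau.
    return w + alpha * w * (1.0 - w) * (tau - abs(v1 - v2))

def steponce(t, alpha, beta):
    # 1 during the interval [alpha, beta], else 0.
    return 1.0 if alpha <= t <= beta else 0.0

def stepmod(t, rho, delta):
    # 0 during the first delta time units of each period rho, else 1.
    return 0.0 if (t % rho) < delta else 1.0

def bpm(t, gamma, phi):
    # Pulse of duration phi, repeated gamma times per 400 time units.
    return 1.0 if (t % (400.0 / gamma)) < phi else 0.0
```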
3 Design of the Adaptive Network Model

In this section, the various components of the network model are explained. In the graphical representation of the model shown in Fig. 1, the base level contains an individual mental network for each agent, where the individual nodes represent the various states of the mental process, defined in Table 1. The black arrows represent the connections indicating the causal relationships between the states. The starting point is the world state ws_s, which is the beat. An agent senses the beat (sensor state ss_s), which leads to a sensory representation srs_s. At the same time, it senses a beat made by the other agent (ss_e) and has a sensory representation srs_e of it too; the belief state bs_e serves as the agent's interpretation of the perceived stimulus e. Taking into account both perceived stimuli, the agent activates the preparation state ps_a and the related execution state (the drumming beat) es_a, and then again has a belief state bs_a for being aware of its own action. The same process unfolds for the second agent. It is important to note that all of the network connections within the mental network model of each agent, as depicted in the model, are involved in standard mental processes that occur within any agent, regardless of the scenario at hand.
Fig. 1. Graphical representation of the adaptive network model
Table 1. States and their descriptions

Base level:
X1  ws_s: world state for stimulus s
X2  ss_{A,s}: sensor state of agent A for stimulus s
X3  srs_{A,s}: sensory representation state of agent A for stimulus s
X4  ss_{A,e}: sensor state of agent A for stimulus e
X5  srs_{A,e}: sensory representation state of agent A for stimulus e
X6  ps_{A,a}: preparation state of agent A for response a
X7  es_{A,a}: execution state of agent A for action a
X8  bs_{A,e}: belief state of agent A for the perceived stimulus e
X9  bs_{A,a}: belief state of agent A for its own action a
X10 ss_{B,s}: sensor state of agent B for stimulus s
X11 srs_{B,s}: sensory representation state of agent B for stimulus s
X12 ss_{B,e}: sensor state of agent B for stimulus e
X13 srs_{B,e}: sensory representation state of agent B for stimulus e
X14 ps_{B,a}: preparation state of agent B for response a
X15 es_{B,a}: execution state of agent B for action a
X16 bs_{B,e}: belief state of agent B for the perceived stimulus e
X17 bs_{B,a}: belief state of agent B for its own action a

First-order self-model level:
X18 W_{bs_{A,e},ps_{A,a}}: self-model state for the connection weight from bs_{A,e} to ps_{A,a}
X19 W_{bs_{B,e},ps_{B,a}}: self-model state for the connection weight from bs_{B,e} to ps_{B,a}
X20 W_{es_{A,a},ss_{B,e}}: self-model state for the connection weight from es_{A,a} to ss_{B,e}
X21 W_{es_{B,a},ss_{A,e}}: self-model state for the connection weight from es_{B,a} to ss_{A,e}
X22 BPM: self-model state for the frequency parameter of the bpm combination function
X23 A_ws: self-model state for the first phase of the world state ws
X24 B_ws: self-model state for the second phase of the world state ws

Second-order self-model level:
X25 H_{W_{bs_{A,e},ps_{A,a}}}: self-model state for the speed factor of X18
X26 H_{W_{bs_{B,e},ps_{B,a}}}: self-model state for the speed factor of X19
X27 H_{W_{es_{A,a},ss_{B,e}}}: self-model state for the speed factor of X20
X28 H_{W_{es_{B,a},ss_{A,e}}}: self-model state for the speed factor of X21
The first-order self-model level addresses the homophily and Hebbian learning adaptation for the weights of the different types of connections, with the blue arrows representing the source of the learning (bs_e and ps_a for Hebbian learning; bs_e and bs_a for homophily). The red downward arrows point towards the base states that are affected by the learning (ps_a for Hebbian learning; the other agent's ss_e for homophily). In order to achieve a varying rhythm (beat) for the world state, a first-order self-model state BPM for the combination function bpm (further explained below) was added, with input from two self-model states, A_ws and B_ws. The second-order self-model level addresses the adaptation H_W-states for the speed factors of the first-order adaptation W-states. This is done to obtain a varying speed of Hebbian learning and homophily adaptation, since neither is a constant or linear process. These second-order self-model states are based on the adaptation principle formulated by Robinson et al. [12]: 'Adaptation accelerates with increasing stimulus exposure'. Stimulus exposure is detectable in the activation of the related base states; therefore, the
second-order speed adaptation H_W-states receive information from the same base-level states as the first-order learning W-states do, plus from the first-order learning W-states themselves. On both of the adaptation levels, states have persistence links, which connect a state to itself. This link is necessary for the learning process, as it ensures that previously attained (learnt) values for the states persist, although often only partially. The speed adaptation H_W-states, however, must have only a very weak persistence link, as otherwise it would be more difficult to attain a context-sensitive effect for the speed adaptation. Next, the connection weights between the states (for the strength of the impact), the type of combination function used (to aggregate multiple impacts on a state), and the states' speed factors (for the timing of the effect of the impact) are discussed. The numerical values presented in this section are in line with the scenario of an interaction between an adult and a child. In such a scenario, it is expected that the adult has a bigger influence on the child in the learning and adaptation process than the other way around. This effect is established using the function parameters for the Hebbian and homophily learning, discussed below with the aggregation characteristics. By default, all of the connection weights are set to 1, to be adjusted later in accordance with specific scenarios. Additionally, the various links between the states have different purposes and effects in the learning. There are within-agent and inter-agent links. The former include the prediction links (ps_{A,a} → srs_{A,e}, ps_{B,a} → srs_{B,e}), which model the strength of anticipating the stimuli of another agent; the perception links (ss_{A,e} → srs_{A,e}, ss_{B,e} → srs_{B,e}), which determine how sensitive the agent is to perceiving another agent's stimuli; the belief of perceived stimulus links (srs_{A,e} → bs_{A,e}, srs_{B,e} → bs_{B,e}), which are responsible for how aware the agent is of the perceived stimulus; and the belief of own execution links (es_{A,a} → bs_{A,a}, es_{B,a} → bs_{B,a}), which determine how aware the agent is of its own actions. The expression links (ps_{A,a} → es_{A,a}, ps_{B,a} → es_{B,a}) signify how expressive the agent is in its own actions, so that the other agent is able to perceive them. The various settings of these links and their effects are explored in the evaluation section of the paper. Within Table 2, the cells marked in red correspond to the red downward arrows in the graphical representation of the model and define the weights of the adaptive connections; hence these connections do not have predetermined weight values, as their weights adapt according to the Hebbian and homophily learning behavior. All of the values for the weights of the links, as well as the initial values necessary for accurate model replication in future studies, can be found in Table 2. The combination functions and their parameters are the determining factors for the way the connections leading to each state are aggregated. For both agents, the identity function is used for the ss_s, srs_s, ss_e, es_a, bs_e, and bs_a states. The srs_e and ps_a states are modeled using the alogistic function, with the steepness σ set at 20 for both and the threshold τ set at 0.6 and 0.7, respectively. The Hebbian learning states are modeled using their own combination function hebb, whose only parameter, the persistence factor μ, is set to 0.85 and 0.9; the adult's persistence factor is slightly lower, as their learning persistence is expected to be lower.
Table 2. Role matrix specifications for connectivity (mb), connection weights (mcw), speed factors (ms), and initial values (iv) for states X1 to X28. In summary:

- mb (base connectivity): lists, per state, the states from which it receives incoming connections; the W- and H_W-states (X18 to X21 and X25 to X28) receive inputs from the base states involved in the learning (bs_{A,e}, ps_{A,a}, bs_{A,a}, bs_{B,e}, ps_{B,a}, bs_{B,a}) plus persistence links from themselves, and BPM receives inputs from A_ws and B_ws.
- mcw (connection weights): most weights are 1; the prediction links have weight 0.7; the links from A_ws and B_ws to BPM have weights 30 and 20, respectively; the persistence links of the H_W-states have weight 0.2; the red cells refer to the adaptive weights represented by the W-states X18 to X21.
- ms (speed factors): 10 for all base-level states and for BPM, A_ws, and B_ws; the speed factors of X18 to X21 are adaptive and represented by the H_W-states X25 to X28; the H_W-states themselves have speed factor 1.5.
- iv (initial values): 0 for all states except the W-states, with X18 = X19 = 0.3 and X20 = X21 = 0.4.
The bonding-by-homophily adaptation states use the simple linear homophily function slhomo, where the amplification parameter α is set at 0.35 for agent A affecting B, and 0.3 for agent B affecting A; the tipping point τ is set at 0.7 and 0.6, respectively. When the dissimilarity between the individuals is less than τ, the strength of their connection will increase [9]. The speed adaptation states are modeled using the alogistic combination function, with the steepness σ and threshold τ being 2 and 2.5 for the Hebbian learning, and 1.5 and 2.5 for the bonding-by-homophily adaptation. As briefly mentioned in the previous section, the network model has three self-model states which contribute to varying beat frequencies. The A_ws and B_ws self-model states for the world state use the steponce function to determine when in the simulation they are active: the former during the first 200 units of time, and the latter during the last 200 units of time. These two states have connections to the self-model state BPM, with the weights of these connections determining how often the beat happens; the weights of the links from A_ws and B_ws are 30 and 20, respectively. In order for the world state to be affected by the self-model state BPM, the bpm function that is used to model the ws state uses BPM as an adaptive frequency parameter. Thus, the world state ws has an adaptive number of beats per unit of time; for the specific scenario, the duration of the gap between the beats is 1 unit of time.
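As an illustration, the varying world-state beat described above could be wired as in the following sketch, reusing the steponce and bpm functions sketched in Sect. 2. The exact wiring and the 400-unit simulation horizon are assumptions, not the authors' software:

```python
def world_state(t):
    # A_ws (weight 30) is active during the first 200 time units, B_ws
    # (weight 20) during the last 200; their weighted sum acts as the
    # adaptive frequency parameter BPM of the bpm pulse function.
    # (At t = 200 both are briefly active; boundary handling is an assumption.)
    bpm_value = 30 * steponce(t, 0, 200) + 20 * steponce(t, 200, 400)
    return bpm(t, bpm_value, 1.0)   # beat duration of 1 time unit
```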
Finally, each state has a speed factor, also found in Table 2, that defines how fast the state changes for a given aggregated impact. By default, all states are set to a speed factor of 10. The red cells for the learning states are the adaptive speed factors from the second-order self-model level; these values are not constant but vary based on the behavior of the H_W-states (X25 to X28). The H_W-states themselves have a constant, much lower speed factor, in order to provide time for the learning aspect of the network model to happen.
4 Simulation Results

This section showcases example simulation graphs. Within such a graph, each state is depicted by an individual line portraying its behavior over the time axis. Figure 2 illustrates the base-level states for both agents and the world state ws, along with its fluctuating beat, noticeable by an apparent change in rhythm at time point 200. It is noteworthy that both agents' mental states exhibit behavior parallel to the world state, indicating the synchronization of their perception, preparation, and execution with the beat. This observation signifies that there is no loss of information between the agents and their stimulus perception, ensuring a solid basis for further adaptation. Figure 3 displays the outcomes of Hebbian learning and homophily adaptation, accompanied by their respective speed adaptation states. The Hebbian states initiate at 0.3 and progressively grow, eventually reaching a final level of approximately 0.7 for agent B and 0.55 for agent A. Notably, the learning process exhibits slightly greater strength, higher values, and reduced variation for agent B (the student), as it was given more robust learning parameters. As for the homophily adaptation, the curves demonstrate a more gradual progression, steadily approaching a value of 1 by the end of the simulation. Importantly, these curves do not decline, indicating a consistent growth in adaptation for both the adult and the student. Moreover, considering that the adult exerts a stronger influence on B than vice versa, the AB curve appears higher and more pronounced. The speed adaptation states exhibit patterns with high-amplitude fluctuations, characterized by alternating increments and decrements in speed values.
5 Model Evaluation and Discussion

As part of the model evaluation, a series of tests was conducted to explore the capabilities of the system, as seen in Table 3. These tests involved manipulating the connection weights and the parameters of the learning principles. Each parameter was modified individually to identify the minimum link value required for satisfactory learning and adaptation. Through this experiment, it was observed that the Hebbian learning process is particularly influenced by the prediction link and the belief of perceived stimulus link: even slight changes in these values had a substantial impact on the behavior of the states. On the contrary, it was found that the perception sensitivity link, the belief of own execution link, and the expressive link could be set to 0 without significantly compromising the effectiveness of Hebbian learning for both agents.
Fig. 2. Simulation of all of the base level states
Fig. 3. Simulation of the learning, adaptation, and adaptive speed
Regarding homophily adaptation, the belief of own execution link and the expressive link were identified as influential sensitivity links. Additionally, the perception sensitivity link appeared to have the least impact on the learning principles, with a minimum value of 0 resulting in adequate performance.

Table 3. Connection weight model evaluation values (per link: initial / lowest / median weight, for Hebbian learning and for homophily adaptation)

Within-agent sensitivity
  Prediction link:               ps_{A,a} → srs_{A,e}    Hebbian 1 / 0.3 / 0.65     Homophily 1 / 0 / 0.5
                                 ps_{B,a} → srs_{B,e}    Hebbian 1 / 0.2 / 0.6      Homophily 1 / 0 / 0.5
  Perception sensitivity link:   ss_{A,e} → srs_{A,e}    Hebbian 1 / 0 / 0.5        Homophily 1 / 0 / 0.5
                                 ss_{B,e} → srs_{B,e}    Hebbian 1 / 0 / 0.5        Homophily 1 / 0 / 0.5
  Belief of perceived stimulus:  srs_{A,e} → bs_{A,e}    Hebbian 1 / 0.5 / 0.75     Homophily 1 / 0 / 0.5
                                 srs_{B,e} → bs_{B,e}    Hebbian 1 / 0.45 / 0.73    Homophily 1 / 0 / 0.5
  Belief of own execution link:  es_{A,a} → bs_{A,a}     Hebbian 1 / 0* / 0.5       Homophily 1 / 0 / 0.5
                                 es_{B,a} → bs_{B,a}     Hebbian 1 / 0* / 0.5       Homophily 1 / 0 / 0.5
Inter-agent sensitivity
  Expressive link:               ps_{A,a} → es_{A,a}     Hebbian 1 / 0 / 0.5        Homophily 1 / 0* / 0.5
                                 ps_{B,a} → es_{B,a}     Hebbian 1 / 0 / 0.5        Homophily 1 / 0* / 0.5
In the process of parameter testing, a systematic methodology was employed. Each parameter was individually modified, and the corresponding minimum values for the persistence factor μ in Hebbian learning were recorded in Table 4. Similarly, Table 5 showcases the lowest attained values for the amplification factor α and tipping point τ. The resulting tables provide valuable reference points, presenting a concise summary of the minimum values attained for each parameter. It is worth noting that in the tables, a value of 0* signifies that there is an observable impact on learning which, however, is not substantial enough to hinder the learning process significantly, so the value could even be as low as 0. These findings highlight the varying degrees of importance and sensitivity associated with the different connection links, learning principles, and function parameters. This comprehensive analysis provides valuable insights into optimizing the model's performance and achieving optimal settings for enhanced functionality. It is also worth noting that the explored model is inherently universal, meaning it can be applied across various scenarios. What distinguishes its performance and adaptability in different contexts are the specific network characteristics, in particular the combination function parameters. These network characteristics allow for tailoring the model to a wide range of real-world situations and personal characteristics, making it a versatile tool for understanding and simulating complex adaptive real-world systems. Furthermore, the model's complexity, while substantial, is essential for capturing the intricate dynamics of sensorimotor synchronization and achieving adaptability across various scenarios. For example, to distinguish a person A who adapts only slowly from a person B who adapts fast, the weights of the incoming connections of the H-states of A, representing adaptation speed, can be
set lower in role matrix mcwv for connection weights. As another example, a person who initially adapts very slowly but after some tipping point adapts fast can be modeled by using a high steepness parameter for the related H-states in role matrix mcfpv.

Table 4. Hebbian function parameter model evaluation values (function parameter alterations)

            Parameter       Initial value   Lowest value
Hebbian A   Persistence μ   0.85            0.65
Hebbian B   Persistence μ   0.9             0.65
Table 5. Homophily function parameter model evaluation values (function parameter alterations; the 'lowest α' and 'lowest τ' columns give the parameter values for the runs in which α, respectively τ, was minimized)

                Parameter           Initial value   Lowest α value   Lowest τ value
Homophily AB    Amplif. factor α    0.35            0.05             0.35
                Tipping point τ     0.7             0.7              0.1
Homophily BA    Amplif. factor α    0.3             0.01             0.3
                Tipping point τ     0.6             0.6              0.15
In the context of the illustrative scenario presented in this paper, our model extends beyond the simplified version discussed in [3]. Notably, this paper's model deviates from the one in [3] through the incorporation of additional base-level states, the introduction of belief states for perceived homophily instead of objective homophily, the integration of an actual rhythm with extended time pulses, an adaptive irregular beat, and the inclusion of an adaptive speed parameter for the adaptation process. Regarding the model itself, this study builds upon our previous work [13], with a focus on a related but distinct aspect of the multi-adaptive network model, delving into the dynamics of homophily adaptation within the context of social interactions among multiple individuals. In other earlier work, in [14] it was analysed how persons can synchronize in the context of adaptive joint decision making. Furthermore, in [15] it was analysed how subjective detection of synchrony can play a causal role in bonding. Moreover, in [16] the distinction between short-term affiliation and long-term bonding and their relation to synchronisation was introduced. These papers do not consider homophily as the current paper does.
References

1. van der Steen, M.C., Keller, P.E.: The adaptation and anticipation model (ADAM) of sensorimotor synchronization. Front. Hum. Neurosci. 7, 253 (2013). https://doi.org/10.3389/fnhum.2013.00253
2. Treur, J.: Dynamic modeling based on a temporal-causal network modeling approach. Biol. Inspired Cogn. Archit. 16, 131–168 (2016)
3. Accetto, M., Treur, J., Villa, V.: An adaptive cognitive-social model for mirroring and social bonding during synchronous joint action. In: Proceedings of BICA 2018. Procedia Computer Science, vol. 145, pp. 3–12 (2018)
4. Hebb, D.: The Organisation of Behavior. John Wiley and Sons (1949)
5. Treur, J.: Relating a reified adaptive network's structure to its emerging behaviour for bonding by homophily. In: Treur, J. (ed.) Network-Oriented Modeling for Adaptive Networks: Designing Higher-Order Adaptive Biological, Mental and Social Network Models, pp. 321–352. Springer International Publishing, Cham (2020). https://doi.org/10.1007/978-3-030-31445-3_13
6. McPherson, M., Smith-Lovin, L., Cook, J.M.: Birds of a feather: homophily in social networks. Annu. Rev. Sociol. 27, 425–444 (2001)
7. Repp, B.H.: Sensorimotor synchronization: a review of the tapping literature. Psychon. Bull. Rev. 12, 969–992 (2005). https://doi.org/10.3758/BF03206433
8. Abraham, W.C., Bear, M.F.: Metaplasticity: the plasticity of synaptic plasticity. Trends Neurosci. 19(4), 126–130 (1996). https://doi.org/10.1016/s0166-2236(96)80018-x
9. Treur, J.: Network-Oriented Modeling for Adaptive Networks: Designing Higher-Order Adaptive Biological, Mental and Social Network Models. SSDC, vol. 251. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31445-3
10. Treur, J.: Design of a Software Architecture for Multilevel Reified Temporal-Causal Networks (2019). https://www.researchgate.net/publication/333662169
11. Treur, J.: Overview of the Combination Function Library. https://www.researchgate.net/publication/336681331
12. Robinson, B.L., Harper, N.S., McAlpine, D.: Meta-adaptation in the auditory midbrain under cortical influence. Nat. Commun. 7, e13442 (2016)
13. Mukeriia, Y., Treur, J., Hendrikse, S.: A multi-adaptive network model for human Hebbian learning, synchronization and social bonding based on adaptive homophily. Cogn. Syst. Res. 84, 101187 (2024). https://ssrn.com/abstract=4523058, https://doi.org/10.2139/ssrn.4523058
14. Hendrikse, S.C.F., Kluiver, S., Treur, J., Wilderjans, T.F., Dikker, S., Koole, S.L.: How virtual agents can learn to synchronize: an adaptive joint decision-making model of psychotherapy. Cogn. Syst. Res. 79, 138–155 (2023)
15. Hendrikse, S.C.F., Treur, J., Wilderjans, T.F., Dikker, S., Koole, S.L.: On becoming in sync with yourself and others: an adaptive agent model for how persons connect by detecting intra- and interpersonal synchrony. Hum.-Centric Intell. Syst. 3, 123–146 (2023)
16. Hendrikse, S.C.F., Treur, J., Koole, S.L.: Modeling emerging interpersonal synchrony and its related adaptive short-term affiliation and long-term bonding: a second-order multi-adaptive neural agent model. Int. J. Neural Syst. 33(7), 2350038 (2023)
An Adaptive Network Model for the Emergence of Group Synchrony and Behavioral Adaptivity for Group Bonding

Francesco Mattera1, Sophie C. F. Hendrikse2,3,4, and Jan Treur1(B)

1 Social AI Group, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
[email protected], [email protected]
2 Department of Clinical Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
[email protected]
3 Department of Psychology Research, University of Amsterdam, Amsterdam, The Netherlands
4 Methodology and Statistics Research Unit, Institute of Psychology, Faculty of Social and Behavioural Sciences, Leiden University, Leiden, The Netherlands
Abstract. The research reported here analyses the relationship between group synchrony and group bonding through a novel adaptive computational dynamical system model. By simulating multimodal interactions within a group of four agents, the study uncovers patterns in group cohesion, in the sense of emerging multimodal group synchrony and related group bonding. Findings include characteristic patterns for emerging group synchrony (logarithmic) and group bonding (logistic). The obtained insights offer an understanding of group interaction dynamics. Future research may consider larger groups and more variations of synchrony detection functions to widen the obtained findings. Keywords: Group Dynamics · Adaptive Network Model · Interpersonal and Group Synchrony · Group Bonding
1 Introduction

Interpersonal and group relationships frequently start from a sense of synchrony, which may arise in many different ways and is an essential and crucial part of the way we bond with others. One important form of synchrony is multimodal interpersonal synchrony [2]. This occurs when two or more people spontaneously attune their speech, feelings, and movement across time [3, 9]. While previous studies have examined this synchrony in pairs of agents computationally, we want to expand our knowledge of the role it plays in the dynamics of larger groups: dynamics that involve many more lines of interaction between agents [1]. This paper proposes the idea of group synchrony detection states for verbal action, movement, and expressed affective response. These states co-exist with the two-agent synchrony detection states and are related to behavioral adaptivity in the form of
group bonding. The scenario for this study focuses on four agents and their multimodal interactions within the group for the three modalities. To analyse this computationally, we developed a novel agent-based adaptive dynamical system model: a temporal-causal adaptive network, including synchrony and bonding states for both pairs of agents and the whole group. By this, we aim to contribute to the growing field of research on interpersonal synchrony and its relation to the behavioral adaptivity underlying bonding. The analysis by the model and the model's results may be applied in the psychological field, for example, aiming for synchrony to enhance bonding in group formation and group therapy [6, 8].
2 Background Research

Behavioral adaptivity underlying bonding is often linked to the concept of synchrony, which plays a key role in communication and how relationships work [10]. Many studies, especially in the field of psychotherapy, have shown that therapists who consciously synchronize their movements with their clients are perceived as showing insight and understanding [8], which leads to better affiliation or bonding between them [5]. As previous studies on this matter showed how interpersonal synchrony between two agents can predict their cohesion (bonding), this study aims at answering the same question for a group of agents instead. Note that the focus is not on a straightforward application of existing methods to all dyads within a group and aggregating them, but on whole-group phenomena, following literature in this area such as [15–19]. However, next to this whole-group perspective, the dyadic perspective on synchrony will be covered in the model as well, for the purpose of comparison. In order to draw insights from the model, some main assumptions need to be made beforehand. The first assumption is that all agents unconsciously and subjectively detect group synchrony [27]. By that, we mean that all agents are able to perform and detect synchrony patterns, having hypothetical internal states for holding information about them. These detection states will be differentiated to address both pair-wise interpersonal synchrony and group-wise synchrony. The latter can be analysed in an empirical context by applying multivariate synchrony detection methods [20–22] to measured data for movements, expressed emotions, or verbal utterances of all group members (not of the dyads), for example. The second main assumption is that bonding between agents happens via (behavioral) adaptivity of connections involved in their interaction. This can be analysed in an empirical context based on observing the quality of coordination, cooperation, or affiliation as a group, for example as reported via questionnaires, e.g., [23, 24]. Note that we are building the novel computational model to have pathways between interactions within the group, emerging synchrony patterns, synchrony detection states, and the adaptive connections related to bonding. Overall, the analysed pathways roughly form a circular structure as follows: group interaction → synchrony patterns → detected synchrony → adaptation for bonding → group interaction → ….
This means that there is not one unidirectional form of causality from one to the other but an adaptive dynamical system describing how both synchrony and bonding can emerge and strengthen via their mutual dynamic and adaptive influences. We believe that studying synchronization within groups and their related behavioural adaptivity from an adaptive dynamical system perspective will provide a better understanding of the dynamics and adaptivity of human groups and add new perspectives to the field.
3 Network Representations for Adaptive Dynamical Systems

In this section, we discuss the modeling approach for adaptive dynamical systems by their self-modeling temporal-causal network representation, as described in [14, 28]. This approach has been used for the adaptive dynamical system model introduced here. Temporal-causal network models describe a number of states that can change their value over time, as in our simulations. These types of models can be represented in a conceptual representation, as in Fig. 3 and Fig. 4 (a labeled graph), or as in Fig. 2 (explanation of the states). The network characteristics specifying these models are listed here:

Connectivity characteristics: these define how different states X and Y of the model are connected, by some weight ω_{X,Y}.
Aggregation characteristics: these define for each state Y a combination function c_Y that is applied to aggregate its incoming impacts.
Timing characteristics: these define for each state Y a speed factor η_Y controlling how fast it changes over time.

Based on these network characteristics, the following difference equation is used to compute, from a current time t, the impact of all states with incoming connections on state Y at a later time t + Δt:

Y(t + Δt) = Y(t) + η_Y [c_Y(ω_{X_1,Y} X_1(t), ..., ω_{X_k,Y} X_k(t)) − Y(t)] Δt    (1)

Here, the X_i are the states from which state Y gets incoming connections. To model adaptive networks, the self-modeling principle (also called reification) is used. This means that for any adaptive network characteristic, a self-model state is added to the network that represents it. For example, for an adaptive connection weight ω_{X,Y}, a self-model state W_{X,Y} is added that represents this weight, and for an adaptive speed factor η_Y, a self-model state H_Y is added representing it; see [14]. Then, within Eq. (1), the values of these self-model states are used for the network characteristics, for example:

Y(t + Δt) = Y(t) + H_Y(t) [c_Y(W_{X_1,Y}(t) X_1(t), ..., W_{X_k,Y}(t) X_k(t)) − Y(t)] Δt    (2)

This can also be done iteratively. For example, the speed of a self-model state Z = W_{X,Y} can be made adaptive by adding a self-model state H_Z = H_{W_{X,Y}}. Thus, an adaptive learning rate can be introduced, which can make learning context-sensitive. Such self-model states H_{W_{X,Y}} are called second-order self-model states, whereas self-model states W_{X,Y} are called first-order self-model states. This can be used to model first-order and second-order adaptation of networks.
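A minimal sketch of Eqs. (1) and (2) as one Euler integration step is given below (assumed NumPy code, not the authors' software; `combine` stands for the combination function c_Y applied to the array of weighted impacts):

```python
import numpy as np

def step(y, x_in, weights, eta, combine, dt):
    """Eq. (1): one Euler step for state Y from incoming state values x_in."""
    return y + eta * (combine(weights * x_in) - y) * dt

def step_selfmodel(y, x_in, w_states, h_state, combine, dt):
    """Eq. (2): adaptive variant; W-states supply the connection weights
    and an H-state supplies the speed factor."""
    return y + h_state * (combine(w_states * x_in) - y) * dt

# Illustrative usage with a simple summation as the combination function:
# y_next = step(y, np.array([x1, x2]), np.array([1.0, 0.7]),
#               eta=10.0, combine=np.sum, dt=0.5)
```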
For the adaptive network model introduced in this paper, we made the bonding both between pairs of agents and for the group as a whole adaptive [11, 13]. This means that two self-modeling levels were added on top of the base-level network model, as shown in Fig. 1. Starting from the bottom, we have a base-level network holding the connections between agents and their synchrony states. The first-order self-model level has states for group bonding and bonding in pairs. Moreover, for each agent, the second-order self-model level holds a single state that controls the adaptation speed of the bonding states below it. In Fig. 1, the downward arrows represent the effectuation of the different orders of adaptiveness.
Fig. 1. Three-level network architecture used for the introduced network model for group synchrony and group bonding to obtain a second-order adaptive network
4 A Network Model for Group Synchrony and Group Bonding

In this study, we focus on a group of four agents with (1) their emerging group synchronization and (2) the group bonding related to it [4]. To accomplish this, we have added to our base model, for each agent, a novel type of state for group synchrony detection, representing that the agent detects synchrony of all agents within the whole group. We designed the model in such a way that each agent is able to sense and process three types of modalities: verbal action, movement, and expressed affective response. The design of the adaptive agent model uses a network architecture consisting of three levels, as explained earlier in Sect. 3; see Fig. 1. These levels include a base level, a first-order self-model level, and a second-order self-model level, with states as explained in Fig. 2: for each agent, 39 states at the base level, 4 at the first-order and 1 at the second-order adaptation level. Since our study involves four agents designed in the same way, all with the same number and type of states, we will focus only on how agent A is structured and how it interacts with the other agents of the experiment. The base-level structure of agent A is shown in Fig. 3, and Fig. 4 shows how the different agents interact via the three modalities. To facilitate understanding, we refer to Fig. 3 and Fig. 4 to showcase the connections between base-level states in agent A. Figure 2 shows the interpersonal group synchrony detection states and their connections as red diamonds with four incoming connections
Fig. 2. Explanation of all states for Agent A
each, for all four agents. In the same figure, purple diamonds represent interpersonal dyadic synchrony between pairs of agents; each such state has only two inputs, for the two agents of the pair. Finally, the green circles represent sensor states (X4 to X12), sensory representation states (X13 to X21), preparation states (X34 to X36), and execution states (X37 to X39). How the agents interact with each other at the base level can be seen in Fig. 4: each agent has 9 incoming and 3 outgoing connections. Incoming connections: for each other agent (B, C, D), agent A has sensor states for verbal action (v), movement (m), and expressed affective response (b). Outgoing connections: agent A executes verbal action (v), movement (m), and expressed affective response (b). The first-order self-model level, located in the middle, is responsible for holding the information on how the different agents bond, modeled by the W-states X40 to X43. These W-states adaptively change connection weights of the base level (between states X4
Fig. 3. The base level of the network architecture for Agent A
to X12). More specifically, Figs. 5 and 6 show how the base states X13 to X21 (sensory representations) and X37 to X39 (action executions) have the weights of their incoming connections regulated by the first-order adaptation level. In this setup, the first-order adaptation level receives incoming connections from the base level (in particular, from the synchrony detection states), as explained in Sect. 3. The top level of the network, on the other hand, holds a single H_W-state per agent (X44 for agent A) that is responsible for controlling the adaptation process of the bonding W-states present in the level just below it. Put simply, the adaptive network model uses a system with different levels to detect emerging synchrony and predict group bonding. The middle level focuses on understanding and representing how bonding occurs based on synchrony detection, thereby in turn changing connection weights of the base level. Finally, the top level is responsible for controlling the adaptation speed of the middle-level W-states. For the way in which the middle-level W-states affect the connection weights at the base level, two options have been considered in a comparative manner: the W-states W_{B,A}, W_{C,A}, W_{D,A} based on dyadic synchrony detection, or the W-states W_{BCD,A} based
Fig. 4. How the different agents interact
on group synchrony detection. These two options are depicted respectively in Fig. 5 and Fig. 6.
5 Simulation Results

We conducted experiments based on a model that simulates the emerging synchrony that occurs among a group of four friends, and how it can predict their group cohesion and bonding. We completed approximately 20 iterations of this model with our four simulated individuals. Each simulation ran for 180 units of time, with each unit divided into steps of 0.5, which means that each simulation incorporated 360 individual calculation steps. All 20 simulations followed the same framework regarding the type and length of the random elements involved: a 40-unit period with no stimulus influences acting on the four individuals, succeeded by an equivalent 40-unit period with influences. No matter the circumstances, the simulated individuals were always able to sense each other's verbal action, movement, and expressed affective response. We will dive into two representative simulations that show common patterns frequently seen in the 20 iterations. In the Appendix [25], the full specification can be found, in role matrices Fig. A1 to Fig. A5.

Base Level Regulated by Synchrony Detection for Pairs. As discussed in Sect. 3, the states are connected to each other as specified in Fig. A1 in [25]. Most of these connections have unit weights, as shown in Fig. A3 in [25]. For the connections that are adaptive, this first simulation used the interpersonal synchrony states to regulate them. A full 3D visualization of the model architecture can be seen in Fig. 6. The aggregation characteristics are specified by four different combination functions:

alogistic: has parameters for steepness and excitability threshold. It was used for all states apart from the synchrony ones and the world stimulus.
Fig. 5. 3D network visualization using detection of group synchrony
Fig. 6. 3D network visualization using detection of synchrony in pairs
swsm: It has parameters for the identification state number and the length of the (sliding) window. It was applied to all synchrony states between pairs of agents; it works by normalizing both input series (yielding nser1 and nser2) and then applying 1 − mean(|nser2 − nser1|).
group relative max-min: This is a novel combination function built specifically for the group bonding states. It has the same parameters as swsm, but since it has four input series instead of two, it applies a relative difference formula to their rows: 1 − (max(row) − min(row))/max(row).

random stepmodopp: It has parameters for repetition and step time. It was used for the world stimulus; it acts by applying random noise to all agents.

All their parameters are shown in Fig. A5 in the Appendix [25]. The initial values of all states and their respective speed factors can be seen in Fig. A4.

The results for the whole experiment, with base-level connections adapted by the interpersonal synchrony states of pairs of agents, are shown in Fig. 7. The lower graph shows the curves for the group bonding (in green), bonding in pairs (in blue) and the second-order level H-states (in purple). We can observe that bonding in the group takes more time to converge to its plateau than bonding in pairs. Also, the curve shapes differ considerably, with the group-level states following a logistic curve and the states of pairs of agents following more of a cubic curve. Apparently, the way humans bond is fundamentally different when they are in a group. Finally, Fig. 8 compares the group synchrony states (in blue) with the group bonding (in green). From it, we can see that it is possible to predict the bonding from group synchrony, since their curves have a similar shape. Note that the plateaus in the curves relate to the absence of the repetitive stimulus that is used to trigger episodes of group activity.

Base Level Regulated by Group Synchrony States

For this setup, the matrix mcw shown in Fig. A3 in [25] is changed. This time, we use the group synchrony states of each agent to control the base-level connections. For agent A, for example, the matrix would have X40 for all the adaptive states, instead of X41, X42 or X43. Aside from this, all other parameters and combination functions were kept the same. A full 3D visualization of this model architecture can be seen in Fig. 5. The simulation timing and the curve colors are kept consistent with the previous setup.

Having the adaptive base-level connections regulated by group synchrony instead of synchrony in pairs shows somewhat different results. The upper graph of Fig. 9 shows the full simulation results; the lower graph shows how the curves for group bonding, bonding in pairs and the H-states change. Bonding in pairs fluctuates more over time, reaching convergence at about the same time as in the previous setup. The group bonding follows a similar logistic curve as before, but reaches lower plateau values between time points 80 and 120. Figure 10 gives a comparison of synchrony and bonding for this setup as well. Even when different synchrony detection functions influence the adaptive connections, it is possible to predict the group bonding from group synchrony.
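For concreteness, the two synchrony-detection functions described above can be sketched in Python as follows. This is an illustration only, not the authors' MATLAB implementation; in particular, the min-max normalization over the window and the averaging of the row-wise relative differences are our assumptions.

```python
import numpy as np

def swsm(series1, series2, window):
    """Sliding-window synchrony between two input series: both are min-max
    normalized over the last `window` samples (our assumption), giving nser1
    and nser2, and the result is 1 - mean(|nser2 - nser1|)."""
    def norm(s):
        s = np.asarray(s[-window:], dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
    nser1, nser2 = norm(series1), norm(series2)
    return 1.0 - float(np.mean(np.abs(nser2 - nser1)))

def group_relative_max_min(series_list, window):
    """Group synchrony over four input series: for every time point (row)
    in the window, compute 1 - (max(row) - min(row)) / max(row), then
    average over the window (the averaging is our assumption)."""
    m = np.asarray([s[-window:] for s in series_list], dtype=float)
    row_max, row_min = m.max(axis=0), m.min(axis=0)
    denom = np.where(row_max > 0, row_max, 1.0)  # guard against division by zero
    rel = np.where(row_max > 0, 1.0 - (row_max - row_min) / denom, 1.0)
    return float(np.mean(rel))
```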
Fig. 7. Base level regulated by dyadic synchrony states (Colour figure online)
Fig. 8. Base level regulated by dyadic synchrony states (Colour figure online)
6 Discussion

This section interprets the findings from our simulations and identifies potential directions for future research. Our findings indicate that behavioural adaptivity, manifested as group bonding, can be predicted from interpersonal group synchrony. We have represented and analysed this complex phenomenon using a second-order adaptive network model.
Fig. 9. Base level regulated by group synchrony states
Fig. 10. Base level regulated by group synchrony states
Interpretation of Results

In our simulations, we observed a clear relationship between group synchrony and group bonding: as group synchrony increased, the level of bonding within the group also increased. This connection was demonstrated in two scenarios: one with the base-level connections regulated by synchrony in pairs, and one where group synchrony was used. In the scenario with synchrony in pairs, bonding reached high values (around 0.8) as the synchrony within the group increased (between time points 80 and 120). The increase in group bonding was smooth and followed a logistic curve, whereas the bonding in pairs was less smooth.
In the scenario with group synchrony, the dynamics of bonding were slightly different. While the relationship between synchrony and bonding (and their curve shapes) remained consistent, bonding within the group reached a lower plateau value between time points 80 and 120. This indicates that adapting the base model with different synchrony states affects the final group bonding. In both scenarios, it has been shown to be possible to predict group bonding from group synchrony.

In previous work, [26] analysed how persons can synchronize in the context of adaptive joint decision making, [27] analysed how subjective detection of synchrony can play a causal role for bonding, and [28] introduced the distinction between short-term affiliation and long-term bonding and their relation to synchronisation. These papers focus on interpersonal synchrony in dyads and do not consider group synchrony and group bonding as the current paper does.

Future Research

This work has presented a novel computational model that shows promise for understanding interpersonal group synchrony and its relation to group bonding. However, there are several directions for future research that could deepen this understanding. Firstly, it would be valuable to investigate how the size of the group influences the dynamics of synchrony and bonding. In this study, we focused on a group of four agents; future work could explore different group sizes and configurations to see whether the observed patterns remain consistent. Also, different types of combination functions could be used in this model; in particular, the group relative max-min function designed for the group bonding states could be improved. Finally, an interesting next step would be to collect empirical data over time and compare them to the simulation data. For example, a video recording of a joint dance activity could be analysed by a synchrony analysis method (e.g., [20]) and compared to reported affiliation.
References

1. Abbas, H.A., Shaheen, S.I., Amin, M.H.: Organization of multi-agent systems: an overview. arXiv preprint arXiv:1506.09032 (2015)
2. Delaherche, E., Chetouani, M., Mahdhaoui, A., Saint-Georges, C., Viaux, S., Cohen, D.: Interpersonal synchrony: a survey of evaluation methods across disciplines. IEEE Trans. Affect. Comput. 3(3), 349–365 (2012)
3. Louwerse, M.M., Dale, R., Bard, E.G., Jeuniaux, P.: Behavior matching in multimodal communication is synchronized. Cogn. Sci. 36(8), 1404–1426 (2012)
4. Mataric, M.J.: Designing and understanding adaptive group behavior. Adapt. Behav. 4(1), 51–80 (1995)
5. Miles, L.K., Lumsden, J., Flannigan, N., Allsop, J.S., Marie, D.: Coordination matters: interpersonal synchrony influences collaborative problem-solving. Psychology (2017)
6. Ramseyer, F., Tschacher, W.: Nonverbal synchrony of head- and body-movement in psychotherapy: different signals have different associations with outcome. Front. Psychol. 5, 979 (2014)
7. Richardson, D., Dale, R., Shockley, K.: Synchrony and swing in conversation: coordination, temporal dynamics, and communication. Embodied Commun. Hum. Mach., 75–94 (2008)
8. Rothgangel, A.S., Braun, S.M., Beurskens, A.J., Seitz, R.J., Wade, D.T.: The clinical aspects of mirror therapy in rehabilitation: a systematic review of the literature. Int. J. Rehabil. Res. 34(1), 1–13 (2011)
9. Rotondo, J.L., Boker, S.M.: Behavioral synchronization in human conversational interaction. Mirror Neurons Evolut. Brain Lang. 42, 151–162 (2002)
10. Tschacher, W., Rees, G.M., Ramseyer, F.: Nonverbal synchrony and affect in dyadic interactions. Front. Psychol. 5, 1323 (2014)
11. Wayne, G.D.: Self-modeling neural systems. Columbia University (2013)
12. Yu, B., Venkatraman, M., Singh, M.P.: An adaptive social network for information access: theoretical and experimental results. Appl. Artif. Intell. 17(1), 21–38 (2003)
13. Yu, W., Chen, G., Cao, M., Kurths, J.: Second-order consensus for multiagent systems with directed topologies and nonlinear dynamics. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 40(3), 881–891 (2009)
14. Treur, J.: Network-Oriented Modeling for Adaptive Networks: Designing Higher-Order Adaptive Biological, Mental and Social Network Models. Springer Nature, Cham (2020). https://doi.org/10.1007/978-3-030-31445-3
15. Rothenberg-Elder, K.: Transitional rituals for group initiation and group bonding. In: Farewell and New Beginning: The Psychosocial Effects of Religiously Traditional Rites of Passage, pp. 205–225. Springer Fachmedien Wiesbaden, Wiesbaden (2023)
16. Spoor, J.R., Kelly, J.R.: The evolutionary significance of affect in groups: communication and group bonding. Group Process. Intergroup Relat. 7(4), 398–412 (2004)
17. Tarr, B., Dunbar, R.I.: The evolutionary role of dance: group bonding but not prosocial altruism. Evol. Behav. Sci. (2023). https://doi.org/10.1037/ebs0000330
18. von Zimmermann, J., Vicary, S., Sperling, M., Orgs, G., Richardson, D.C.: The choreography of group affiliation. Top. Cogn. Sci. 10(1), 80–94 (2018)
19. Wood, C., Caldwell-Harris, C., Stopa, A.: The rhythms of discontent: synchrony impedes performance and group functioning in an interdependent coordination task. J. Cogn. Cult. 18(1–2), 154–179 (2018)
20. Hudson, D., Wiltshire, T.J., Atzmueller, M.: MultiSyncPy: a Python package for assessing multivariate coordination dynamics. Behav. Res. Methods 55(2), 932–962 (2023)
21. Meier, D., Tschacher, W.: Beyond dyadic coupling: the method of multivariate Surrogate Synchrony (mv-SUSY). Entropy 23(11), 1385 (2021)
22. Richardson, M.J., Garcia, R.L., Frank, T.D., Gergor, M., Marsh, K.L.: Measuring group synchrony: a cluster-phase method for analyzing multivariate movement time-series. Front. Physiol. 3, 405 (2012)
23. dos Santos, N.R., Figueiredo, C., Pais, L.: Development and validation of the organisational cooperation questionnaire. Eur. Rev. Appl. Psychol. 70(4), 100555 (2020)
24. León-del-Barco, B., Mendo-Lázaro, S., Felipe-Castaño, E., Fajardo-Bullón, F., Iglesias-Gallego, D.: Measuring responsibility and cooperation in learning teams in the university setting: validation of a questionnaire. Front. Psychol. 9, 326 (2018)
25. Appendix available as linked supplementary data at https://www.researchgate.net/publication/372829620
26. Hendrikse, S.C.F., Kluiver, S., Treur, J., Wilderjans, T.F., Dikker, S., Koole, S.L.: How virtual agents can learn to synchronize: an adaptive joint decision-making model of psychotherapy. Cogn. Syst. Res. 79, 138–155 (2023)
27. Hendrikse, S.C.F., Treur, J., Wilderjans, T.F., Dikker, S., Koole, S.L.: On becoming in sync with yourself and others: an adaptive agent model for how persons connect by detecting intra- and interpersonal synchrony. Hum.-Centric Intell. Syst. 3, 123–146 (2023)
28. Hendrikse, S.C.F., Treur, J., Koole, S.L.: Modeling emerging interpersonal synchrony and its related adaptive short-term affiliation and long-term bonding: a second-order multi-adaptive neural agent model. Int. J. Neural Syst. 33(7), 2350038 (2023)
Too Overloaded to Use: An Adaptive Network Model of Information Overload During Smartphone App Usage

Emerson Bracy1, Henrik Lassila2, and Jan Treur3(B)

1 Computer Science, Colby College, Concord, NH, USA
[email protected]
2 Computer Science, Aalto University, Espoo, Finland
[email protected]
3 Computer Science, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
[email protected]
Abstract. In this paper, a first-order adaptive self-modeling network model is introduced to model information overload in the context of cyclical usage of smartphone apps. The model consists of interacting attention resources and emotional responses to both attention taxation and the app engagements. The model makes use of first-order reification to simulate the agent’s learning of the connections between app engagement and emotional responses, and strategic use of attention resources. Furthermore, external factors, such as context and influence of the environment to use the apps, are included to model the usage decision of the agent. Simulations in two scenarios illustrate that the model captures expected dynamics of the phenomenon. Keywords: Adaptive Network Model · Information Overload · App Usage
1 Introduction

Information overload is often referred to as a major source of distress in the contemporary world, both by public media and in the academic literature (Arnold et al. 2023; Rutkowski and Saunders 2018; Stephens 2018). However, models of the phenomenon tend to be either descriptive or statistical (e.g., Graf and Antoni 2023; Rutkowski and Saunders 2018) and lack computational formalism. Meanwhile, smartphone apps are used to facilitate travel, education, entertainment, and more. A study found that a typical person has around 40 apps installed on their phone, while using only 18 of those apps 89% of the time (Kataria 2021). Information overload caused by excessive information supply through apps can lead to users experiencing stress and ultimately to them abandoning the apps (Pang and Ruan 2023; Ye et al. 2023). Since smartphone apps are a major channel for receiving information in everyday life, it is important to understand their psychological effect on the average person, that is, the information overload they cause.
In this paper, a computational model of information overload while interacting with multiple smartphone apps is proposed. An adaptive network model was designed that makes use of first-order learning, which is linked to emotional responses and attention capacity in relation to app usage. The usage of the apps reflects how much they capture the user's attention resources and arouse their emotional valence. Meanwhile, the emotional responses and the available attentional resources determine the user's decision to continue using the app. The model also incorporates the effects of environmental influences to use the apps, such as notifications, and a contextual factor that levels the attention and emotions. By simulating different scenarios, the model illustrated the expected dynamics of attention, emotions, and behavior as described in the literature on information overload (Graf and Antoni 2023; Rutkowski and Saunders 2018). In the scenarios, user engagement with the apps initially arouses positive emotions in the user, but once the overuse of multiple apps taxes the attention capacity, the emotional responses turn negative. Furthermore, the cognitive and emotional dynamics result in user disengagement from the apps, as expected.
2 Background

In this section, we present the background of the phenomenon and propose a research question. Information overload refers to "a state of being overwhelmed by information, where one perceives that information demands exceed one's information processing capacity" (Graf and Antoni 2023, 2). Outcomes of information overload include stress, fatigue, poor task performance, and information avoidance (Graf and Antoni 2023). Information overload has been studied extensively; causes, such as work environment and communication channel richness, and interventions, such as emotional coping training, have been proposed (Arnold et al. 2023). Information overload is often theorized in terms of limitations in human cognitive processing capacities, that is, the limitations of working memory or attention. However, many qualitative attempts to theorize information overload include descriptions of both cognitive and emotional processes and outcomes (Belabbes et al. 2023; Graf and Antoni 2023; Pang and Ruan 2023; Rutkowski and Saunders 2018). The Emotional-Cognitive Overload Model (ECOM; Rutkowski and Saunders 2018) presents the building blocks of a model for information overload. ECOM includes information and the request to use information technology as the inputs of the model; cognitive processing, which consists of short-term memory chunking and long-term memories of past emotional-cognitive overload experiences, as the mental process; and cognitive overload (e.g., leaving part of the task undone, poorer decisions, shedding tasks) and emotional overload (e.g., stress, frustration) as the outcomes. A prior computational model of information overload (Gunaratne et al. 2020) considers only attention limitations as the process behind information overload. The approach presented in this paper seeks to integrate the cognitive and emotional aspects to model information overload.
3 Network-Oriented Modeling

The network model presented in this paper adopts the network-oriented modeling approach; this section gives a brief introduction to it. Generally, in this approach a network model is represented by a graph whose nodes represent states of the modeled phenomenon, while the dynamics of the state changes are modeled by directed links between nodes together with link weights, combination functions that map the values of the sending nodes onto the receiving node, and speed factors, which determine how fast the sending nodes influence the state of the receiving node. More formally, a temporal-causal network architecture is defined by the following characteristics (Treur 2020):

• Connectivity of the network: a connection weight ωX,Y for each connection from state (or node) X to state Y.
• Aggregation of the multiple connections in the network: a combination function cY(..) for each state Y determining the aggregation of incoming impacts from other connected states.
• Timing in the network: a speed factor ηY for each state Y determining the speed of change from incoming impacts.

The model dynamics can be simulated by executing the following difference equation for each state Y at each time step t:

Y(t + Δt) = Y(t) + ηY [cY(ωX1,Y X1(t), …, ωXk,Y Xk(t)) − Y(t)] Δt    (1)

This generic difference equation based on the above characteristics has been implemented in MATLAB software (see Treur 2020, Ch. 9). Based on this, simulations are run by declaring the network characteristics of the model in the software, which procedurally executes the difference equation for all states in parallel. The model is defined using role matrices, which specify the network characteristics ωX,Y, cY(..), and ηY for each of the states in the network in a standard table format. The role matrices specified for the model presented in this paper can be found in Appendix A (available as Linked Data at https://www.researchgate.net/publication/373490260).

The combination functions used in network-oriented modeling and implemented in the software are called basic combination functions. For any model, any number m of them can be selected for the model design. The standard notation for them is bcf1(..), bcf2(..), …, bcfm(..). The basic combination functions use parameters π1,i,Y, π2,i,Y (such as μ, σ, τ) which further define their characteristics. The basic combination functions used in the current model and their parameters are presented in Table 1. Using this notation, with combination function weights γi,Y(t), the overall combination function can be written as follows:

cY(t, π1,1(t), π2,1(t), …, π1,m(t), π2,m(t), V1, …, Vk)
= [γ1,Y(t) bcf1(π1,1,Y(t), π2,1,Y(t), V1, …, Vk) + … + γm,Y(t) bcfm(π1,m,Y(t), π2,m,Y(t), V1, …, Vk)] / [γ1,Y(t) + … + γm,Y(t)]    (2)
Substituting the combination function (2) into Eq. (1) gives the formula:
Y(t + Δt) = Y(t) + ηY [ (γ1,Y(t) bcf1(π1,1,Y(t), π2,1,Y(t), V1, …, Vk) + … + γm,Y(t) bcfm(π1,m,Y(t), π2,m,Y(t), V1, …, Vk)) / (γ1,Y(t) + … + γm,Y(t)) − Y(t) ] Δt, with Vi = ωXi,Y Xi(t)    (3)

The above characteristics form the base-level architecture of the network model. However, many phenomena in the world are adaptive. To incorporate adaptive characteristics into the network model, the principle of reification of the network model (also called self-modeling) is introduced. The adaptive characteristics are added to the model in the form of self-model states. In the case of a first-order adaptive network, the self-model states represent the network characteristics ωX,Y, cY(..), and ηY of the base-level network. For example, the model presented in this paper makes use of self-model states WX,Y, which represent the adaptive weight ωX,Y of the connection from state X to state Y aggregated by a Hebbian learning function, and TY, which represents the excitability threshold τY of the basic combination function. The reification level is visualized in Fig. 1. Similarly, all of the network characteristics can be reified: HY (self-model state of the speed factor ηY), WX,Y (self-model state of the connection weight), Ci,Y (self-model state of the combination function weight), and Pi,j,Y (self-model state of the basic combination function parameters) are the standard notations of the reification states for the adaptive characteristics of the network. Replacing the network characteristics in (3) with the corresponding self-model state values gives the following:

Y(t + Δt) = Y(t) + HY(t) [ (C1,Y(t) bcf1(P1,1,Y(t), P2,1,Y(t), V1, …, Vk) + … + Cm,Y(t) bcfm(P1,m,Y(t), P2,m,Y(t), V1, …, Vk)) / (C1,Y(t) + … + Cm,Y(t)) − Y(t) ] Δt, now with Vi = WXi,Y(t) Xi(t)    (4)

The self-model states change based on the three network characteristics presented above, and as the self-model state values change they alter the behavior of the base-level network. Reification can be applied at multiple orders (first-order, second-order, third-order, …), as shown in Treur (2020, Ch. 8); however, these are out of scope for the current model.

Table 1. The combination functions used in the network model.

• Advanced logistic sum: alogisticσ,τ(V1, …, Vk) = [1/(1 + e−σ(V1+…+Vk−τ)) − 1/(1 + eστ)] (1 + e−στ). Parameters: steepness σ > 0, excitability threshold τ.
• Hebbian learning: hebbμ(V1, V2, V3) = V1V2(1 − V3) + μV3. Parameters: V1, V2 are the activation levels of the connected states; V3 is the activation level of the first-order self-model state representing the connection weight; μ is the persistence factor.
• Identity function: id(V) = V. Parameter: the activation level of state V.
• Random step mod function: randstepmodα,β(..) = 0 if mod(t, α) < β, else 1/2 + rand(1,1)/2. Parameters: time t, repeated time duration α, duration β until value 1; rand(1,1) returns a random draw from a uniform distribution.
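To illustrate Eqs. (1)–(4) and Table 1, here is a minimal Python sketch of the update scheme. The paper's simulations use the MATLAB software of Treur (2020); this re-implementation, its two-state toy network, and all parameter values are our own illustrative assumptions.

```python
import numpy as np

def alogistic(sigma, tau, V):
    """Advanced logistic sum combination function from Table 1."""
    s = sum(V)
    return ((1 / (1 + np.exp(-sigma * (s - tau)))
             - 1 / (1 + np.exp(sigma * tau))) * (1 + np.exp(-sigma * tau)))

def simulate(omega, sigma, tau, eta, y0, dt=0.5, steps=400):
    """Euler-style execution of Eq. (1):
    Y(t+dt) = Y(t) + eta_Y * [c_Y(omega_X1,Y X1(t), ...) - Y(t)] * dt.
    omega[i][j] is the weight of the connection from state i to state j."""
    Y = np.array(y0, dtype=float)
    n = len(Y)
    for _ in range(steps):
        new = Y.copy()
        for j in range(n):
            impacts = [omega[i][j] * Y[i] for i in range(n) if omega[i][j] != 0.0]
            if impacts:  # states without incoming connections stay constant
                c = alogistic(sigma[j], tau[j], impacts)
                new[j] = Y[j] + eta[j] * (c - Y[j]) * dt
        Y = new
    return Y

# Invented two-state example: X1 (a constant input) drives X2.
omega = [[0.0, 1.0],
         [0.0, 0.0]]
print(simulate(omega, sigma=[5, 5], tau=[0.2, 0.5], eta=[0.0, 0.4], y0=[1.0, 0.0]))
```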
4 Adaptive Network Model of Information Overload

In this section, the model design is presented. The goal of the model is to simulate how cognitive limitations and emotional responses interact to generate the effects of information overload (Rutkowski and Saunders 2018), that is, stress, information avoidance, and disengagement from the apps (Graf and Antoni 2023; Pang and Ruan 2023). In the context of app usage, we interpret this to be exemplified by persistent negative emotions and disengagement from apps that would otherwise be enjoyable to the user. Next, the architecture of the network model is described. The nodes of the model are listed and explained in Table 2, and the full architecture is illustrated in Fig. 1. Unless specified differently, all the nodes described below use the advanced logistic function as their combination function. A detailed specification of the role matrices is presented in Appendix A.
Fig. 1. Architecture of the adaptive network model
First, the user needs to have a source of information that they attend to. In the model, three apps were implemented to account for the effect of different numbers of information sources.
In the model, app nodes represent the user's engagement with an app, ranging from 0 to 1, i.e., from no engagement to full engagement. Although the current model includes three app nodes, it could be extended with a different number of app nodes. Second, engaging with any object requires the user to attend to it. This requirement is facilitated by the attention node, whose values range from 0 to 1. A value of 0 indicates that the user has the most attention available to devote to their app usage, whilst 1 means that the user has no attention left to use the smartphone apps. The attention node is the crux of the entire model, for it dictates how invested the user can be. The attention node is connected to the app engagement negatively, since the apps divide the attentional resources. To simulate a natural form of replenishment and longevity of a user's attention, a context node was also added. The context node represents the specific situation in which the user is attending to the apps; changing context thus restores attention through attention re-allocation. The same applies to emotions, since context also facilitates how the emotions change. Thus, the context node is connected to the attention node as well as to the emotion nodes. The model treats context as a constant, and thus the identity function was used as its combination function.

Table 2. Explanations of the nodes of the network model
Base level:
X1 app1 – Agent's engagement with application 1
X2 app2 – Agent's engagement with application 2
X3 app3 – Agent's engagement with application 3
X4 attn – Agent's attention capacity
X5 emo1 – Agent's emotional response to application 1
X6 emo2 – Agent's emotional response to application 2
X7 emo3 – Agent's emotional response to application 3
X8 act1 – Agent's decision to use application 1
X9 act2 – Agent's decision to use application 2
X10 act3 – Agent's decision to use application 3
X11 cont – Context influence on the agent
X12 inf1 – Influence from the environment to use application 1
X13 inf2 – Influence from the environment to use application 2
X14 inf3 – Influence from the environment to use application 3

First reification level:
X15 Wemo1 – Weight self-model state for the connection from app1 to emo1
X16 Wemo2 – Weight self-model state for the connection from app2 to emo2
X17 Wemo3 – Weight self-model state for the connection from app3 to emo3
X18 Tattn – Excitability threshold parameter of the attn node
The emotion nodes represent the emotional valence that the app nodes evoke in the user. The user's emotions drive their interest in their smartphone app usage; this can be seen as the user's (dis)enjoyment of the app. A value of 1 indicates that the app evokes very positive emotions for the user, whilst a value of 0 indicates that the app evokes very negative emotions; 0.5 means a neutral emotional response. Furthermore, since information overload is hypothesized to be caused by over-taxation of cognitive resources, the attention node has a high impact on the emotion nodes.
The more the attention is taxed, the more negative the user's emotions will be. The action node represents the user's decision to use the app; its purpose is to regulate the respective app node. The higher the decision to use the app, the more the user will engage with it. The action nodes are influenced by the attention node, the emotion nodes, and the influence nodes. Thus, the decision to use the app is a combination of the user's positive emotion toward the app, the availability of attentional resources, and the strength of the environment's influence to use the apps. The influence node dictates the environment's role in using the app. There are many types of influences that the environment can pose on individuals to use some apps, including notifications, work context, social influence and peer pressure, or context-dependent need. The influence node represents the total sum of environment-based influences on the user, ranging from 0 to 1. For the influence nodes, the random step mod function was used, which simulates stochastic activation of the influence: the environment acts on the individual during certain periods of time while being inactive otherwise, and the activation level of the influence is stochastically determined. The adaptive elements of the network are rooted in the Wem nodes and τattention. State Wem is a first-order adaptive weight self-model state which dictates the strength of the connection between the app and emotion nodes by applying Hebbian learning. The more an app is used, the stronger the connection between the app engagement and the emotion evoked by the app; thus there is a greater impetus for the user to use the app when the evoked emotion is positive. The state τattention represents the attention node's ability to increase its capacity for using multiple apps. As the threshold of the attention function adapts through repeated usage, attention is taxed less in relation to app usage. As apps are used more and more, the user can distribute their attention more easily between apps without becoming overloaded by app overuse.
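A compact sketch of these two adaptive mechanisms follows; it is illustrative only. The learning rates, persistence factor, and the simple linear threshold update are our assumptions; the exact specification is given by the role matrices in Appendix A.

```python
def hebb(v1, v2, w, mu=0.95):
    """Hebbian learning from Table 1: the connection strengthens when the
    app engagement v1 and the emotional response v2 are active together."""
    return v1 * v2 * (1 - w) + mu * w

def adapt(app, emo, w_em, tau_attn, eta_w=0.1, eta_tau=0.005, dt=0.5):
    """One update of the self-model states W_em (app -> emotion weight)
    and T_attn (excitability threshold of the attention node)."""
    w_em += eta_w * (hebb(app, emo, w_em) - w_em) * dt
    # Repeated engagement slowly raises the attention threshold, so the
    # attention node is taxed less per engagement (assumed update rule).
    tau_attn += eta_tau * (app - tau_attn) * dt
    return w_em, tau_attn
```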
5 Simulation Results

Two simulated scenarios are presented to illustrate how the model works. In scenario one, only one smartphone app is active and interacting with the user. In scenario two, three apps are active and interacting with the user in parallel, illustrating a common situation where the user needs to allocate their attention between multiple apps. For more simulations that illustrate the model behavior in different scenarios, see Appendix B. In both scenarios it is assumed that the apps the user engages with arouse positive emotions in the user, that is, the user likes the apps. It is also assumed that initially the user has a neutral emotional relation to the apps (Em1–3 = 0.5), and that each app is equally engaging, equally emotion-provoking, and equally attention-taxing, that is, the weights of the corresponding connections in the network are equal for the three apps. These parameters are defined in more detail in Appendix A.

Scenario 1: One application active.

Figures 2 and 3 present the simulation results of scenario one with different time frames. Figure 2 shows how in the beginning (0–5 t) the user starts using
the app (Act1; blue dash-dotted line). Next (5–10 t), the user's engagement with the app (App1; blue solid line) gives rise to positive emotions (Em1; blue dashed line), while engaging with the app taxes the user's attention (Attention; purple line). Furthermore, there is an outside influence to use the app (Inf app1; dotted line) which strengthens the user's next decision to use the app. After a while (10–15 t) the user disengages from the app (App1), which lets the attention resources recover slightly (Attention). What follows is a series of engagements and disengagements with the app, each followed by taxing and recovery of the attention resources.
Fig. 2. Base level states of simulation of scenario one (50 time steps)
Fig. 3. Base level states of simulation of scenario one (200 time steps)
Figure 3 shows the results of the same simulation over a longer time frame. It exhibits the same dynamical behavior as Fig. 2, but the influence from the environment (Inf app1; dotted line) is easier to perceive here. Within each 60-time-step interval
(60 t, 120 t, 180 t, …), the influence gradually increases and then drops, which results in the user disengaging from the app (App1) slightly more. Also visible in the figure is the general trend of the emotional response (Em1) losing its positive valence and converging to the neutral zone (Em1 ≈ 0.5).

Scenario 2: Three applications active.

Figures 4 and 5 present simulation results from scenario two with time frames similar to those of the previous section. As Fig. 4 shows, in the beginning (0–5 t) the user starts to use the apps (Act1–3; blue, red, and yellow dash-dotted lines). After this (5–10 t), the user engages with the apps (App1–3; blue, red, and yellow solid lines), which is followed by a combined positive emotional response (Em1–3; blue, red, and yellow dashed lines). Furthermore, there are influences from the environment to use the three apps, in response to which the user's decision to use the apps increases (Inf app1–3; blue, red, and yellow dotted lines). The app engagements (App1–3) are followed by proportional attention taxation (10–15 t; Attention, purple line). As there are more apps than in scenario one, the attention taxation (Attention) is significantly higher, which leads to stronger disengagement (App1–3) and a negative emotional response (15–25 t; Em1–3).
Fig. 4. Base level states of simulation of the scenario two (50 time steps)
Figure 5 shows the results of the same simulation over a longer time frame. When the user engages with three apps (App1–3), the constant attention taxation (Attention) is higher than in the single-app case. The emotion lowering is also steeper, and the overall outcome is that the emotions converge towards negative values (Em1–3 < 0.5).

Excitability of the attention.

Another feature of the model that is not visible in the sub-1000 time frame is the effect of the excitability of the attention node. Figures 6 to 8 show how the excitability changes the behavior of the model over a long time frame in scenario two. The rise of the excitability factor (τattention; black dashed line) during the first 1000 time steps gradually increases the threshold of the advanced logistic function of the attention node (Attention; purple line), as can be seen in Fig. 6.
Fig. 5. Base level states of simulation of the scenario two (200 time steps)
Fig. 6. Activation levels of the attention node in simulation of the scenario two (10 000 time steps).
Meanwhile, the rise of the excitability factor (τattention) results in a major change in the emotional responses (Em1–3; blue, red, and yellow dashed lines), visible in Fig. 7. Figure 8 shows that as the excitability factor and the emotion connection weights (Wapp-em; purple, pink, and green solid lines) increase, the attention reacts less to the engagement with the apps and engaging with the apps evokes more positive emotions, which in turn leads to more engagement with the apps (App1–3; blue, red, and yellow solid lines). The increased threshold shows how attention learning increases the available attention resources and reduces negative emotions, since the attention limits are no longer constantly exceeded. In fact, after the first 1000 time steps the model again exhibits constant positive emotions related to the app usage.
Fig. 7. Activation levels of the emotion nodes and the reification level nodes in simulation of the scenario two (10 000 time steps).
Fig. 8. App engagement and excitability factor dynamics in simulation of the scenario two (10 000 time steps).
6 Discussion

In the literature, information overload is often defined as a state in which an individual cannot process all the information available in the situation due to cognitive limits on information processing capacity, which leads to a negative emotional reaction and poorer performance (Graf and Antoni 2023; Rutkowski and Saunders 2018). The proposed model formalizes the key components of individual information overload as a process in which attention limitations and emotional responses interact to produce engaging and disengaging behavior related to the information sources, for instance, smartphone apps. As smartphones are ubiquitous in modern society and one of the main sources of
information for individuals, the presented model exhibits one realistic case of information overload. The results of the simulation scenarios seem to demonstrate that the proposed model captures the key dynamics of information overload in the case of using several smartphone apps. Both scenarios simulate similar behavior, where the user seeks to disengage from the apps when the emotions start to become negative due to the taxation of attention resources. In situations where the taxation of attention resources starts to reach the bearable limit in the model (Attention ≈ 0.8), the positive emotions elicited by engagement with the apps are not enough to keep the user's emotional level positive, but rather lead to negative emotions and a stronger likelihood of disengagement. The model also suggests that information overload can, in this limited case, be overcome with learning. As the simulation with 10 000 time steps shows, the first-order adaptive excitability of the attention node's function threshold affects how the attention–emotion dynamics can be altered. As the threshold increases, the maximum attention taxation is lowered to a level of 0.7 and the fluctuation of the attention is dampened. The natural interpretation of this effect is that as the user adapts to using three apps in parallel, the attention resources are used more and more strategically, leading to less over-taxation. This effect is well known in psychology as the role of expertise in attention allocation and better chunking abilities (Pulido 2021). For future work, the parameters of the model could be adjusted to simulate different types of scenarios. For example, if the apps had different levels of engaging features (i.e., some engage users more than others), this could be modeled by adjusting the connection weights, function parameters, and the speed factors related to the engagement nodes; as an example, one can contrast the engaging features of short-form video apps with those of a calculator. By adjusting the connection weights between the app engagement and the attention capacity, one can model performance in situations where the apps have different attention requirements (e.g., intense gaming apps vs. a photo gallery). Furthermore, the model parameters can be fitted to account for individual differences and different environmental situations. By adding app nodes, the model can be simulated in situations where the number of apps varies. By adjusting the first-order reification level components, one can simulate different individual characteristics such as emotional sensitivity (i.e., the Wem states) or expertise (i.e., the τattention parameter). In the future, the model can be improved by including further details from the ever-growing body of literature on human cognition. Some further examples of specifications and simulation results are presented in the Appendix available as Linked Data at https://www.researchgate.net/publication/373490260.
References

Arnold, M., Goldschmitt, M., Rigotti, T.: Dealing with information overload: a comprehensive review. Front. Psychol. 14, 1122200 (2023)
Belabbes, M.A., Ruthven, I., Moshfeghi, Y., Rasmussen Pennington, D.: Information overload: a concept analysis. J. Documentation 79(1), 144–159 (2023)
Graf, B., Antoni, C.H.: Drowning in the flood of information: a meta-analysis on the relation between information overload, behavior, experience, and health and moderating factors. Eur. J. Work Organ. Psychol. 32(2), 173–198 (2023)
Gunaratne, C., Baral, N., Rand, W., Garibay, I., Jayalath, C., Senevirathna, C.: The effects of information overload on online conversation dynamics. Comput. Math. Organ. Theory 26, 255–276 (2020)
Kataria, M.: App Usage Statistics 2022 that'll Surprise You (Updated). SIMFORM (2021). https://www.simform.com/blog/the-state-of-mobile-app-usage/
Pang, H., Ruan, Y.: Can information and communication overload influence smartphone app users' social network exhaustion, privacy invasion and discontinuance intention? A cognition-affect-conation approach. J. Retail. Consum. Serv. 73, 103378 (2023)
Pulido, M.F.: Individual chunking ability predicts efficient or shallow L2 processing: eye-tracking evidence from multiword units in relative clauses. Front. Psychol. 11, 607621 (2021)
Rutkowski, A.F., Saunders, C.: Emotional and Cognitive Overload: The Dark Side of Information Technology. Routledge (2018)
Stephens, D.: Overload: how technology is bringing us too much information. CBS News (April 1, 2018). https://www.cbsnews.com/news/overload-how-technology-is-bringing-us-too-much-information/
Treur, J.: Network-Oriented Modeling for Adaptive Networks: Designing Higher-Order Adaptive Biological, Mental and Social Network Models. Springer Nature, Cham (2020). https://doi.org/10.1007/978-3-030-31445-3
Ye, D., Cho, D., Chen, J., Jia, Z.: Empirical investigation of the impact of overload on the discontinuous usage intentions of short video users: a stressor-strain-outcome perspective. Online Inf. Rev. 47(4), 697–713 (2023)
Consumer Behaviour Timewise Dependencies Investigation by Means of Transition Graph

Anton Kovantsev(B)

ITMO University, 16 Birzhevaya Lane, 199034 Saint Petersburg, Russia
[email protected]

Abstract. The investigation of consumption behaviour on the level of every single person or certain groups puts forward new tasks, different from the behavioural analysis of the whole population. One of them is the problem of temporal peculiarities of consumer behaviour, for instance, how to find those who react to critical events faster than the others. This could be useful for identifying a focus group which would show the tendency and help to make more accurate predictions for the rest of the population. A graph-based method for analysing consumer behaviour in the state space is developed in this research. The moments when deviations from the usual behavioural trajectory occur are detected by incrementally comparing the transition graph with its previous state. These moments, collected for all customers, help to separate the population by the delay time of their reaction to a critical situation. It is also noticed that the velocity of the reaction is a personal feature of a customer; hence, this separation stays valid for different external events which cause the behavioural anomalies.
Keywords: Transition graph · State space · Consumption behaviour · Reaction time

1 Introduction
In this research we propose a graph-based method for the investigation of timewise dependencies in consumer behaviour in crisis situations caused by external events. Usually these events ruin previous plans and forecasts and put forward the problem of adjusting prediction models in the face of enormous uncertainty. This is why it is important to learn the main tendency in a given economic process as soon as possible. Here we deal with consumer transactions in Russia in 2020 and in 2022, when such critical events happened. Analysing customer behaviour in general, for the whole population, we can notice periods of stationarity loss in the cumulative expenses time series shown in Fig. 1, which depicts the cumulative daily expenses of 10,000 customers by major areas of interest. We can also suggest that not all clients reacted to the lockdown in 2020 and the military affairs in 2022 simultaneously. Common sense and observations make us suggest
Fig. 1. Cumulative expenses of all customers.
that some of them changed their behaviour earlier than the others but in the same way. So, this group of "quick reaction clients" can show how the rest of the population will react some time later, and hence they become a leading group which shows the new tendency in crisis behaviour. Our solution is based on dynamic graph analysis of the behavioural trajectory in the state space for every single customer. It helps to recognise abnormal behaviour and mark the time when it happens. In fact, we deal with a number of different Hawkes processes whose rates vary widely over time. Statistical methods to trace them are complicated and cannot deliver sufficient quality, whereas graph methods applied to such processes can be fruitful, as shown by Paul Embrechts and Matthias Kirchner in [2]. So, the easiest way to recognise deviations in a customer's behaviour is to trace the days when the dynamic graph becomes non-isomorphic to itself the day before. As we have found out, this works better and faster than a more complicated approach with a graph neural network. We can also calibrate this instrument on fully predictable anomalies like calendar holidays. Our intuition was that the same people who start preparing for a holiday earlier also react to critical events earlier; this was confirmed in our experiments. Thus, we can separate the population according to reaction time and distinguish a group of quick reaction clients, which is about a quarter of the whole population. Besides, we can approximately calculate the time delay between these two groups by means of cross-correlation analysis. This time delay is helpful for adjusting forecasts for the remaining three quarters.
2 Related Works
Many modern behaviour researchers pay attention to the timewise aspects of consumers' reactions to external circumstances. Mostly they deal with the time spent on purchasing, but their conclusions are valuable for our research because they support the hypothesis that time-related features of consumption behaviour remain persistent for every customer and represent a personal characteristic.
For instance, exploring the temporal aspect of consumption behaviour, Jackob Hornik proposes a model that rests on the theory of choice under uncertainty [6]. The conclusion of this article shows that some personal features of customers persist over time, and its treatment of the timewise aspects of consumption behaviour is also worth noting for our research. Riccardo Guidotti and his colleagues go further and design a model based on individual and collective profiles which contain temporal purchasing footprint sequences [4]. Chenhao Fu and coauthors investigate spatio-temporal characteristics and influencing factors of consumer behaviour [3]. They notice that, by affecting consumer preferences, residents' social and economic attributes lead to differences in consumers' spatial and temporal behaviour. They use structural equation modelling to further mine the relationships between the influencing factors of spatio-temporal behaviour that are related to path relationships and decision-making processes. Dr. Ling Luo proposes a systematic approach for tracking the customer behaviour changes induced by a health program [8]. In this approach, customer preferences are extracted from transaction data and a temporal model of customer preferences is constructed, so that varying customer preference can be modelled via a series of preference matrices, taking the temporal domain into consideration. Victoria Wells, Marylyn Carrigan, and Navdeep Athwal in their research [11] pay attention to behaviour changes during a critical process, using the pandemic as an example. Their analysis using foraging theory offers an explanation that locates these behaviours as both logical and understandable within the context of the pandemic, based on the changing environment, constraints and currency assumptions. These works show that there are definite persistent peculiarities in the behaviour of every single customer, including the temporal features which we investigate. Our confidence is supported by prospect theory, which forms a basis for individual consumer neuroscience and includes an overview of the most relevant concepts in consumer research and behavioural economics, such as the framing effect, the phenomena of anchoring and the endowment effect, the phenomenon of temporal discounting, as well as decision-making under risk. All of these are related to brain structures and processes, as Sven Braeutigam and Peter Kenning explain in their book [1]. Graph-based methods are also applied to behavioural analysis. Katsutoshi Yada and the other authors of [13] discuss how graph mining systems are applied to sales transaction data so as to understand consumer behaviour. They propose to represent complicated customer purchase behaviour by a directed graph retaining temporal information of the purchase sequence, and apply a graph mining technique to analyse frequently occurring patterns. Constructing a recommender system, Weijun Xu, Han Li and Meihong Wang enhanced the graph-based approach by using multi-behavior features [12]. They develop a new model named Multi-behavior Guided Temporal Graph Attention Network (MB-TGAT), tailor a phased message-passing mechanism based on a graph neural network, and design
an evolution sequence self-attention to extract the users' preferences from static and dynamic perspectives, respectively. Zhiwei Li and colleagues in their research [7] use a temporal graph constructed on skeletal key points of a human shape to recognise abnormal behaviour of people in video sequences. A spatio-temporal graph convolutional network (ST-GCN) is used to extract the temporal and spatial features of the skeleton, which are input into softmax to predict action classes. So, graph-based methods are popular for consumption behaviour analysis, particularly in its temporal aspect. It is surprising that we failed to find any attempt to separate consumers by their reaction time to critical events. Here we try to fill this omission.
3 Data Description
In our research we deal with transaction data on debit cards of clients of a commercial bank in Russia for January 2018 – July 2022. It contains information about the expenses of 10,000 active consumers, sampled daily. Payment categories, marked in transactional data by standard MCC (Merchant Category Code) values referring to the place of payment, are brought into line with three basic values: "survival", "socialisation", "self-realisation". The mapping of MCC to the values is described in the article by Valentina Guleva et al. [5]. The cumulative expenses of all consumers form the time series shown in Fig. 1. Although the data itself cannot be provided for public access, this data set was chosen because it includes the periods of significant critical events in March–April 2020 and in February–March 2022, which are of most interest for this research. Thus, we obtain 10,000 time sequences of three-component vectors of everyday behaviour, one for each customer. An example of such a sequence is plotted in Fig. 2.
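A preprocessing sketch of this construction in Python follows; the column names and the abbreviated MCC-to-value mapping shown here are placeholders, the real mapping being the one described in [5].

```python
import pandas as pd

# Hypothetical excerpt of the MCC-to-value mapping; the actual mapping
# follows Guleva et al. [5].
MCC_TO_VALUE = {5411: "survival", 5812: "socialisation", 5942: "self-realisation"}

def daily_value_series(transactions: pd.DataFrame) -> pd.DataFrame:
    """Turn raw transactions (assumed columns: client_id, date, mcc, amount)
    into one daily three-component expense vector per client."""
    df = transactions.copy()
    df["value"] = df["mcc"].map(MCC_TO_VALUE)
    df["date"] = pd.to_datetime(df["date"]).dt.floor("D")
    return (df.dropna(subset=["value"])
              .groupby(["client_id", "date", "value"])["amount"].sum()
              .unstack("value", fill_value=0.0))  # columns: the three values
```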
Fig. 2. Transaction sequences for a single customer.
Fig. 3. Trajectory in the state-space for a single customer. Numbers next to the points are the ordinal numbers of days
4 Transition Graph Construction and Application
The method we use to construct a dynamic transition graph from the transactional sequence of every single customer is similar to that proposed for continuous time series by Dmitriy Prusskiy et al. in [10]. The difference is that we do not need to reconstruct the state space, because the three components of the transactional sequence for "survival", "socialisation" and "self-realisation" already define a state space (abbreviated SSS-state space) of the consumer behaviour dynamics. The everyday expenses by the three basic values trace the consumer's trajectory in this space, as shown in Fig. 3. Agglomerated into clusters by the distance between them, the trajectory points in the SSS-state space become the nodes of a graph network, while the transitions along the trajectory in time determine its directed edges. As the amounts usually spent by different clients may differ significantly, for the sake of a reasonable number of clusters and nodes in the graph, the distance threshold for hierarchical clustering has been chosen by an automated elbow method based on a geometric approach proposed by Eduardo Morgado et al. in [9].
Fig. 4. Transition graph along the whole period for a single customer. The numbers of the nodes correspond to the order in which the behaviour trajectory hits the corresponding points in the state space.
An example of the initial graph for one of the customers is presented in Fig. 4. This initial graph is constructed on the first part of the sequence. Further, to make it dynamic, every new point of the behavioural trajectory is checked for proximity to one of the clusters, i.e., the nodes of the graph. If the distance is less than the threshold defined when the initial graph was created, the new point is added to that node; otherwise a new node appears and a new edge connects it with the node containing the previous point. This procedure continues incrementally, so we represent the history of transactions as a time sequence of graphs. Every time a new node or a new edge appears, we may suspect an anomaly in customer behaviour. In other words, if at a new time step the dynamic graph is not isomorphic to itself at the previous step, we conclude that the customer's behaviour has changed. At first, following the method of [10] or [7], we applied a graph neural network (GNN) to detect the differences between the graphs. It consists of three convolutional layers for node embeddings, a readout layer of global mean pooling, and a final classifier with 1/2 dropout and a linear transform for the result; it is precisely similar to that used in [10], where the architecture and parameters were chosen in experimental studies. But in our work we made sure that the isomorphism check works much faster and provides better results. An isomorphism of temporal graphs at time steps t and t − 1, G(t) and G(t − 1), is a bijection between the vertex sets of these graphs, f : V(G(t − 1)) → V(G(t)), such that any two vertices u and v of G(t − 1) are adjacent in G(t − 1) if and only if f(u) and f(v) are adjacent in G(t).
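A minimal sketch of this construction using SciPy hierarchical clustering and NetworkX is given below; it is our illustration, the distance threshold `dist` is assumed to come from the elbow procedure of [9], and the cluster centroids are kept fixed during the incremental phase for simplicity.

```python
import numpy as np
import networkx as nx
from scipy.cluster.hierarchy import fcluster, linkage

def initial_graph(points, dist):
    """Build the initial transition graph from the first part of the
    trajectory; `points` is an (n_days, 3) array of daily expenses in the
    SSS-state space, `dist` the distance threshold from the elbow method."""
    labels = fcluster(linkage(points, method="ward"), t=dist, criterion="distance")
    centroids = {int(c): points[labels == c].mean(axis=0) for c in set(labels)}
    g = nx.DiGraph()
    g.add_nodes_from(centroids)
    for a, b in zip(labels, labels[1:]):  # consecutive days define directed edges
        if a != b:
            g.add_edge(int(a), int(b))
    return g, centroids

def add_point(g, centroids, point, dist, prev_node):
    """Attach one new daily point incrementally: join the nearest node if it
    lies within `dist`, otherwise create a new node and a new edge."""
    node = min(centroids, key=lambda c: np.linalg.norm(centroids[c] - point))
    if np.linalg.norm(centroids[node] - point) > dist:
        node = max(centroids) + 1
        centroids[node] = np.asarray(point, dtype=float)
        g.add_node(node)
    if prev_node is not None and prev_node != node:
        g.add_edge(prev_node, node)
    return node
```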
Fig. 5. Number of customers whose graph significantly changed at a certain date.
The graphs we deal with in our research are weighted and directed, so the notion of isomorphism may be refined by adding requirements to preserve the corresponding edge directions and node weights. But the tests show that this is not necessary: the point is that we only need to detect a deviation from the usual trajectory of behaviour, regardless of the way it occurs. Of course, the circumstances of the personal life of any client make them deviate from the usual trajectory rather often, and these anomalies may not be caused by critical events which affect the whole population. But if such an anomaly appears simultaneously in the behaviour of a large group of consumers, it could be a sign of some change in the environment, which is what we aim to trace.
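The day-by-day anomaly test then reduces to an isomorphism check between consecutive graph snapshots, for example with NetworkX (a sketch; as noted above, edge directions and weights need not be enforced for this purpose):

```python
import networkx as nx

def anomaly_dates(daily_graphs):
    """daily_graphs: dict {date: graph} with one snapshot per day.
    Returns the dates on which the snapshot is no longer isomorphic to the
    previous one, i.e. the days of suspected behavioural deviation."""
    dates = sorted(daily_graphs)
    return [cur for prev, cur in zip(dates, dates[1:])
            if not nx.is_isomorphic(daily_graphs[prev], daily_graphs[cur])]
```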
5 Timewise Dependencies Investigation
The first step of the experiment was initialising the transition graphs for all customers in the data set. We used the first fifty-day period so that the monthly and weekly seasonalities are included in the normal state of the consumers' behaviour. Then, incrementally, day by day, the new transactional data was added, and at each step the self-isomorphism of the graph was checked. The date when the dynamic graph became non-isomorphic to its previous state was saved in memory for the given client. It took about 80 minutes on a modest personal desktop computer to recognise these dates for 10,000 clients over 1638 days. Thus we obtain a set of dates with abnormal behaviour for every customer. The next step is to count the clients with abnormal behaviour on each day. The result of this calculation is shown in Fig. 5, together with the plot of total population expenses for comparison. We may notice that during the periods of our interest in 2020 and in 2022 an increasing number of customers with abnormal behaviour precedes the changes in the total consumption tendency. Now the collected sets of dates may be used for calibrating the customers. We do not have labels for normal and deviant consumer behaviour, but we can assume that certain holidays tend to trigger these deviations. As noticed before, calendar holidays are fully predictable crises which give us an opportunity to
As we noted before, calendar holidays are fully predictable crises, which gives us an opportunity to see which clients start preparing in advance. We chose International Women's Day, celebrated annually on March 8, which is very popular in Russia and is, moreover, a day off. In Fig. 5 we can see that most of the behavioural anomalies happen exactly around this date. For definiteness we took March 8, 2021, when no crisis happened. We assumed that the early customers started preparing beforehand and looked for those who behaved abnormally from March 3 till March 5. Of course, the date range influences the resulting set of customers: a wider range provides more clients, but the earlier dates may have less connection with the holiday. These dates were chosen through a series of experiments. In this way, 2519 customers were separated as the early ones. The group of customers selected by abnormal behaviour during this period was indeed the leading group, which we found out by comparing their cumulative expenses to those of the remaining customers for the critical period of February 2022, as shown in Fig. 6. The time series data was rescaled for better timewise feature comparison.
Fig. 6. Cumulative expenses of “early” and “late” customers in February-March 2022. Rescaled for better timewise feature comparison.
In order to evaluate the time delay between slow and fast customers we applied cross-correlation analysis, calculating the Pearson correlation coefficient for these two time series with time lags from two weeks earlier to two weeks later. The coefficient values by time lag are presented in Fig. 7, which shows that the correlation is maximal at a three-day lag. This is how far the early customers run ahead of the late ones.
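The lagged correlation can be computed as sketched below with numpy; the two-week window matches the text, while the function itself is an illustrative assumption rather than the authors' code.

```python
import numpy as np

def lagged_correlation(early, late, max_lag=14):
    """Pearson correlation between two equally long daily series for
    every shift of `late` from -max_lag to +max_lag days."""
    corrs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = early[:len(early) - lag], late[lag:]
        else:
            a, b = early[-lag:], late[:len(late) + lag]
        corrs[lag] = np.corrcoef(a, b)[0, 1]
    return corrs

# the lag maximising the correlation estimates the lead of the early group:
# best_lag = max(corrs, key=corrs.get)
```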
Fig. 7. Cross-correlation between “early” and “late” customers shows a three-day lead for the “early” ones.
These three days may be connected in some way with our choice of calibration dates, which were from three to five days before the holiday. Nevertheless, our approach showed a satisfactory result. We obtained the same result when turning to the critical days of 2020: as can be seen in Fig. 8, the early customers' group shows the change of tendency with an even greater advance of 6–7 days at the beginning of the growth around March 9, 2020.
Fig. 8. Cumulative expenses of “early” and “late” customers in February-March 2020. Rescaled for better timewise feature comparison.
As mentioned in the Related Works section, we could not find any existing practice or research on the task of separating customers by reaction velocity. So the only way to evaluate the proposed method is to make sure that the results are relevant to the problem and useful in forecasting practice, and that the requirements for computing resources are modest.
6 Conclusion and Future Work
In this research a dynamic-graph-based method was developed to recognise the response time of consumer behaviour to critical events for groups of quickly and slowly reacting clients. The idea of the method is that incremental graph isomorphism analysis can detect an abnormality in every customer's behaviour for each day of the period. The dates of calendar holidays, which are the days when the majority of the population changes its usual behaviour, help to distinguish the groups of fast and slow responding clients. This temporal characteristic of a single customer, as well as of a certain group, persists over time, which supports the hypothesis that it is a personal customer feature that is either changing much more slowly than the observation time or even constant. The time delay between the groups of fast and slow customers is about three days for the data set explored here, and the ratio of quick to slow customers in the population is about 1 to 3. As the method is incremental, it can be used in real time, adding new transactional data day by day for early detection of trend changes in consumption behaviour. For future work we plan to improve the calibration procedure so as to choose the customers in the leading group more precisely and to reduce the number of clients in it. It would also be good to choose the customers with the greatest lead. Besides, it would be interesting to try some other graph features to see whether they could help to recognise abnormal behaviour more accurately. Acknowledgements. This research is financially supported by the Russian Science Foundation, Agreement 17-71-30029, with co-financing of Bank Saint Petersburg, Russia.
References
1. Braeutigam, S., Kenning, P.: Individual consumer neuroscience (2022). https://doi.org/10.1093/oso/9780198789932.003.0007
2. Embrechts, P., Kirchner, M.: Hawkes graphs. Theor. Probab. Appl. 62, 132–156 (2018). https://doi.org/10.1137/S0040585X97T988538
3. Fu, C., Zhou, S., Yan, X., Liu, L., Chen, W.: Spatio-temporal characteristics and influencing factors of consumer behavior in retailing centers: a case study of Guangzhou in Guangdong province. Dili Xuebao/Acta Geographica Sinica 72, 603–617 (2017). https://doi.org/10.11821/dlxb201704004
4. Guidotti, R., Gabrielli, L., Monreale, A., Pedreschi, D., Giannotti, F.: Discovering temporal regularities in retail customers' shopping behavior. EPJ Data Sci. 7, 1–26 (2018). https://doi.org/10.1140/epjds/s13688-018-0133-0
5. Guleva, V., Kovatsev, A., Surikov, A., Chunaev, P., Gornova, G.: Value-based modeling of economic decision making in conditions of unsteady environment. Sci. Tech. J. Inf. Tech. Mech. Opt. 23, 121–135 (2023). https://doi.org/10.17586/2226-1494-2023-23-1-121-135
6. Hornik, J.: The temporal dimension of shopping behavior. J. Serv. Sci. Manag. 14, 58–71 (2021). https://doi.org/10.4236/jssm.2021.141005
7. Li, Z., Zhang, A., Han, F., Zhu, J., Wang, Y.: Worker abnormal behavior recognition based on spatio-temporal graph convolution and attention model. Electronics (Switzerland) 12, 2915 (2023). https://doi.org/10.3390/electronics12132915
8. Luo, L.: Tracking purchase behaviour changes (2020). https://doi.org/10.1007/978-3-030-18289-2_4
9. Morgado, E., Martino, L., Millán-Castillo, R.S.: Universal and automatic elbow detection for learning the effective number of components in model selection problems. Digital Sig. Process. Rev. J. 140, 104103 (2023). https://doi.org/10.1016/j.dsp.2023.104103
10. Prusskiy, D., Kovantsev, A., Chunaev, P.: Dynamic transition graph for estimating the predictability of financial and economical processes (2023). https://doi.org/10.1007/978-3-031-21127-0_39
11. Wells, V., Carrigan, M., Athwal, N.: Pandemic-driven consumer behaviour: a foraging exploration. Mark. Theory (2023). https://doi.org/10.1177/14705931231175695
12. Xu, W., Li, H., Wang, M.: Multi-behavior guided temporal graph attention network for recommendation (2023). https://doi.org/10.1007/978-3-031-33380-4_23
13. Yada, K., Motoda, H., Washio, T., Miyawaki, A.: Consumer behavior analysis by graph mining technique. In: Negoita, M.G., Howlett, R.J., Jain, L.C. (eds.) KES 2004. LNCS (LNAI), vol. 3214, pp. 800–806. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-30133-2_105
An Adaptive Network Model for a Double Bias Perspective on Learning from Mistakes within Organizations

Mojgan Hosseini1, Jan Treur1(B), and Wioleta Kucharska2

1 Department of Computer Science, Social AI Group, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
[email protected], [email protected]
2 Faculty of Management and Economics, Gdansk University of Technology, Gdansk, Poland
[email protected]
Abstract. Although making mistakes is a crucial part of learning, it is still often avoided in companies, where it is considered a shameful incident. This goes hand in hand with the mindset of a boss who dominantly believes that mistakes usually have negative consequences and therefore avoids them by only accepting simple tasks. Thus, there is no mechanism to learn from mistakes. Employees working for and being influenced by such a boss also strongly believe that mistakes usually have negative consequences, but in addition they believe that the boss never makes mistakes; it is often believed that only those who never make mistakes can be bosses and hold power. That is the problem: such bosses do not learn. So, on the one hand, we have bosses who select simple tasks in order to always be seen as perfect and who therefore believe they should avoid mistakes. On the other hand, there exists the mindset of a boss who is not limited to simple tasks, accepts more complex tasks, and therefore in the end has better general performance by learning from mistakes. This then also affects the mindset and actions of employees in the same direction. This paper investigates the consequences of both attitudes for organizations. It does so by computational analysis based on an adaptive dynamical systems modeling approach represented in a network format using the self-modeling network modeling principle.

Keywords: learning from mistakes · double bias · adaptive network model
1 Introduction

Mistakes, a natural part of being human, evoke both positive learning experiences and negative consequences. This duality gives rise to the framing effect, a cognitive bias that influences how mistakes are perceived and impacts decision-making. Beliefs that mistakes have negative consequences often dominate within both bosses and employees. Moreover, if due to this biased type of belief a boss avoids situations where mistakes can be made, another belief arises among employees, namely that a boss never makes mistakes and that they should therefore behave in a similar way. This is called the double bias effect [1].
In organizational settings, understanding this double bias is crucial to unlock untapped potential and foster a culture of growth and resilience. The transformative learning theory emphasizes the importance of critical reflection on personal experiences for profound adult learning. However, the negative framing effect can hinder individuals from gaining valuable insights [2]. Avoiding or overlooking mistakes leads to missed opportunities for growth and triggers negative resource spirals. Addressing this double bias is essential, as it significantly affects collective intelligence within organizations. The presence of a double bias regarding mistakes leads to the cultivation of fixed mindsets in society. Mindsets are psychological constructs that shape individuals' perceptions and understanding of the world. Growth mindsets are learning-oriented, valuing progress and resilience, while fixed mindsets are image-oriented, seeking constant validation of self-perfection. Leaders with fixed mindsets avoid the risk of making mistakes to maintain their positive self-image, leading to a crisis of transformational leadership. Organizations dominated by fixed mindsets and a refusal to accept mistakes as learning opportunities face challenges in developing shared collective intelligence. This shared organizational mindset significantly influences the ability to learn and adapt to changes, which is considered organizational intelligence [3]. Understanding and addressing the double bias and promoting growth mindsets can foster a culture of learning, adaptability, and transformational leadership within organizations. In this paper, we highlight the value of embracing mistakes as powerful learning resources and make a computational analysis of it using an adaptive network modeling format.
2 Modeling Adaptive Networks as Self-modeling Networks

Network-oriented modelling is a useful approach to analyse adaptive and dynamic aspects of networks and pathways of causal connections. A network consists of nodes X and Y (also called states) and of connections between them. These nodes serve as state variables with real-number values X(t) and Y(t) varying over time t. A temporal-causal network model is defined by the following network characteristics:
• Connectivity characteristics: a connection from a state X to a state Y has a connection weight ωX,Y which indicates the strength of the connection.
• Timing characteristics: for each state Y, a speed factor ηY indicates how fast the state changes upon causal impact.
• Aggregation characteristics: for each state Y, a combination function cY(..) aggregates how the causal impacts from other states affect Y.
The canonical difference equation used for any state Y to compute from time t the causal impact of all incoming connections on Y at a later time t + Δt is

Y(t + Δt) = Y(t) + ηY [cY(ωX1,Y X1(t), …, ωXk,Y Xk(t)) − Y(t)] Δt    (1)

where the Xi are the states from which state Y gets incoming connections. So, from a dynamical systems modeling perspective, the dynamics of each state variable Y of the model are described by a difference equation which is an instantiation (by substitution of the relevant network characteristics ω, η, and c) of the canonical difference Eq. (1).
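As an illustration of Eq. (1), the following minimal Python sketch simulates such a network; it assumes the combination functions are sum-based (so zero-weight impacts are harmless) and stands in for the dedicated software environment mentioned in the next paragraph, not as its actual code.

```python
import numpy as np

def simulate(omega, eta, combine, y0, dt=0.5, steps=200):
    """Iterate the canonical difference equation (1):
    Y(t + dt) = Y(t) + eta_Y * (c_Y(impacts) - Y(t)) * dt."""
    Y = np.asarray(y0, dtype=float)
    history = [Y.copy()]
    for _ in range(steps):
        nxt = Y.copy()
        for j in range(len(Y)):
            impacts = omega[:, j] * Y          # omega[i, j] * X_i(t) for all i
            nxt[j] = Y[j] + eta[j] * (combine[j](impacts) - Y[j]) * dt
        Y = nxt
        history.append(Y.copy())
    return np.array(history)
```

Here omega is the k × k weight matrix, eta the vector of speed factors, and combine a list of combination functions, one per state.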
For example, the model introduced in Sect. 3 has 64 states Y; it therefore models a dynamical system by using 64 difference equations for its 64 state variables, which are all instantiations of (1) for the specific network characteristics ω, η, and c of the states. For adaptive networks, some of these network characteristics can change over time. For example, a connection weight ωX,Y can change due to learning. Using the self-modelling principle [3–5], a self-model state WX,Y can be added to the network that represents the value of ωX,Y. Similarly, the excitability threshold τY of a logistic sum combination function can change over time, making a state Y more or less sensitive in responding to causal impact. Again, by the self-modelling principle a self-model state TY can be added to the network that represents the value of τY. Or a speed factor ηY can change, for example due to changing contextual circumstances; by the self-modelling principle, a self-model state HY can be added that represents the value of ηY. In this way the network gets a self-model of part of its structure; this is also called network reification [3–5]. In such a case, in Eq. (1) the values of the self-model states are used for the adaptive characteristics, for example, WX,Y(t) instead of ωX,Y. A dedicated software environment based on these network characteristics and the self-modelling principle was used to conduct the simulation experiments; see Ch. 9 of [3]. Two combination functions are used in the model: the alogistic function and the function hebb for Hebbian learning; see Table 1.

Table 1. The two combination functions used and their parameters
Name | Notation | Formula | Parameters
Advanced logistic sum | alogisticσ,τ(V1, …, Vk) | [1/(1 + e^(−σ(V1+···+Vk−τ))) − 1/(1 + e^(στ))] (1 + e^(−στ)) | p(1) = steepness σ; p(2) = threshold τ
Hebbian learning | hebbμ(V1, V2, V3) | V1 V2 (1 − V3) + μ V3 | p(1) = persistence factor μ
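The two combination functions of Table 1 translate directly into code; this is a plain transcription, sketched in Python rather than the Matlab environment used for the actual simulations.

```python
import numpy as np

def alogistic(V, sigma, tau):
    """Advanced logistic sum combination function from Table 1."""
    s = np.sum(V)
    return ((1 / (1 + np.exp(-sigma * (s - tau)))
             - 1 / (1 + np.exp(sigma * tau))) * (1 + np.exp(-sigma * tau)))

def hebb(V1, V2, W, mu):
    """Hebbian learning from Table 1: V1 and V2 are the connected states,
    W the current self-model W-state, mu the persistence factor."""
    return V1 * V2 * (1 - W) + mu * W
```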
3 Setup of the Computational Analysis

The presence of the 'double bias of leaders' mistakes' can have profound implications for an organization's learning, innovativeness, and sustainability. A positive attitude towards learning combined with a negative attitude towards mistakes causes trouble with learning from mistakes. Such contradictory perspectives create a cognitive bias that is doubled in organizations by the belief that bosses never make mistakes, which altogether jeopardizes organizational learning. This analysis refers to the existence of two opposing beliefs about leaders making mistakes: the belief that bosses should never make mistakes and the opposite belief that they are prone to errors like anyone else. These beliefs can trigger the
construction of a fixed mindset among leaders, leading to a culture of domination and hindered growth. As a result, the root of sustainability problems within the organization can be traced back to how these biases impact leaders' learning behaviors. Sustainability achievement in organizations requires constant learning and adaptivity [7]. The computational analysis is based on modeling the mindset of the boss by a set of context factors for their beliefs and attitudes, and on using two mental models for performing a task: mental model 1 for simple (mistake-avoiding) performance and mental model 2 for more complex performance (with more risk of mistakes). The knowledge for these two mental models lies in their connections, which are represented by self-model states, following [5]. To show the connectivity of the model more clearly, the states are shown in an architecture that distinguishes the dynamics within the network: base level (lower plane), first-order self-model level (middle plane) and second-order self-model level (upper plane); see Fig. 1. This architecture has two variants, one for the boss B and one for employees E, with similar states on the planes.

Mindset of the Boss and of Employees
A set of context factors describes either a limited or an expanded mindset of the boss, addressing attitudes and beliefs about mistakes, acceptance of mistakes, openness to learning from them, and so on; see Table 2. Both boss B and employees E have these states for their personal characteristics, represented in their base planes, one for the boss and the other for the employee. To model the effect of the boss's mindset on the employees' mindset by contagion from boss to employees, the states for the boss have connections to those for the employees, as can be seen in Fig. 2 [4].

Mental Model 1: Simple Tasks
Base level states a_m1, b_m1, c_m1 and g_m1 are mental model states in mental model m1 that represent tasks to achieve result g which are simple for the boss (B) and employees (E); no mistakes are made and therefore nothing is learnt. For the sake of simplicity, these mental model states are assumed to be connected as follows: a_m1 → b_m1 → c_m1 → g_m1

Mental Model 2: Complex Tasks
Base level states d_m2, e_m2, f_m2 and g_m2 are mental model states in mental model m2 representing tasks to achieve result g which are more complex, with more risk of making mistakes and therefore possible learning. These mental states are assumed to be connected as follows (see also Table 3): d_m2 → e_m2 → f_m2 → g_m2
The quality of the knowledge of these mental models is represented by the weights of their connections, where weight value 1 means perfect knowledge. To personalize these mental model states, a name indicating the person is added, for example for mental model m2: d_m2_B → e_m2_B → f_m2_B → g_m2_B for boss B
Table 2. States for context factors defining a mindset

nr | state | explanation
X1 | Solid_m1_B | Task m1 is an easy task to obtain result g for boss B
X2 | NonSolid_m2_B | Task m2 is a complex task to obtain result g for boss B
X3 | LimMindset_B | The mindset of boss B is limited to simple tasks
X4 | ExpandMindset_B | The mindset of boss B is expanded for complex tasks
X5 | AcceptsMistakes_B | Boss B believes that making mistakes is acceptable
X6 | LearningFromMistakes_B | Boss B is willing and able to learn from mistakes
X7 | SimpModel_B | Tasks chosen by boss B are simple
X8 | IntricateTasks_B | Tasks chosen by boss B are complex
X9 | MistakeIdentified_B | Mistakes are recognized
X10 | NoMistakes_B | Boss B believes that no mistakes should be made
Table 3. Base level mental model states

nr | state | explanation
X11 | a_m1_B | Simple task a in mental model m1 for boss B; no mistakes and therefore no learning from mistakes
X12 | b_m1_B | Simple task b in mental model m1 for boss B; no mistakes and therefore no learning from mistakes
X13 | c_m1_B | Simple task c in mental model m1 for boss B; no mistakes and therefore no learning from mistakes
X14 | g_m1_B | Task g in mental model m1 for boss B
X15 | d_m2_B | Complex task d in mental model m2 for boss B; risk of making mistakes and learning
X16 | e_m2_B | Complex task e in mental model m2 for boss B; risk of making mistakes and learning
X17 | f_m2_B | Complex task f in mental model m2 for boss B; risk of making mistakes and learning
X18 | g_m2_B | Task g in mental model m2 for boss B
d_m2_E → e_m2_E → f_m2_E → g_m2_E for employee E

Adaptive Knowledge for the Mental Models
As the knowledge for the mental models is assumed to be adaptive, their connection weights are represented by self-model W-states as discussed in Sect. 2; for example, for mental model m2 of the boss B and of an employee E: Wd_m2_B,e_m2_B, We_m2_B,f_m2_B, Wf_m2_B,g_m2_B for the knowledge of boss B
Fig. 1. The second-order adaptive network architecture used for boss and for employees
Wd_m2_E,e_m2_E, We_m2_E,f_m2_E, Wf_m2_E,g_m2_E for the knowledge of employee E. See Table 4 for further explanations of the first- and second-order self-model states. In Fig. 1 the network model is shown in 3D according to three planes. On the base level (pink plane), base states and base connections are depicted. On the first-order self-model level (blue plane), simple and complex tasks from the base level are connected to their W-states. On the second-order self-model level (purple plane), H-states represent speed factors for the adaptation of the knowledge, connected upward and downward to the W-states; see the last row of Table 4. The T-states are used to boost the Hebbian learning
(by temporarily lowered excitability thresholds for the mental model states of m2) after a mistake is identified; see also Table 4.

The Contagion of the Boss B's Mindset to an Employee E's Mindset
As can be seen in Fig. 2, the states for B's mindset have connections to the corresponding states for E's mindset. Given that B's mindset is considered static, with values set from the beginning, these values transfer to E's mindset as well through the indicated connections.
4 Simulation Experiments

In order to simulate the processes and the learning according to the two alternative mindsets, different variants were made and simulated in Matlab. These variants were specified by role matrices [4]: mb (base connectivity), mcw (connection weights), mcfw (combination function weights), mcfp (combination function parameters), msv (speed factors), and iv (initial values); see the Appendix [11]. Two scenarios have been addressed: the first is a limited mindset for boss B (Fig. 3) and the second an extended mindset (Fig. 4). They have constant values as specified in Table 5. The figures show what happens when simple and complex tasks are done by the bosses B, and the effects on employees E. They also show that when mistakes are identified, learning happens in the extended-mindset scenario while nothing happens in the limited-mindset one.

Table 4. First- and second-order self-model states
nr | state | explanation
X19 | Wa_m1_B,b_m1_B | Self-model state representing the weight of the connection from task a to task b within mental model m1 of boss B
X20 | Wb_m1_B,c_m1_B | Self-model state representing the weight of the connection from task b to task c within mental model m1 of boss B
X21 | Wc_m1_B,g_m1_B | Self-model state representing the weight of the connection from task c to task g within mental model m1 of boss B
X22 | Wd_m2_B,e_m2_B | Self-model state representing the weight of the connection from task d to task e within mental model m2 of boss B
X23 | We_m2_B,f_m2_B | Self-model state representing the weight of the connection from task e to task f within mental model m2 of boss B
X24 | Wf_m2_B,g_m2_B | Self-model state representing the weight of the connection from task f to task g within mental model m2 of boss B
X25 | Mistake cause Wd_m2_B,e_m2_B | The cause of the mistake is the connection from task d to task e in mental model m2 of boss B
X26 | Mistake cause We_m2_B,f_m2_B | The cause of the mistake is the connection from task e to task f in mental model m2 of boss B
X27 | Mistake cause Wf_m2_B,g_m2_B | The cause of the mistake is the connection from task f to task g in mental model m2 of boss B
X28 | T_context_B | Threshold context state for boss B
X29 | Te_m2_B | Self-model state representing the threshold of complex task e in mental model m2 of boss B
X30 | Tf_m2_B | Self-model state representing the threshold of complex task f in mental model m2 of boss B
X31 | Tg_m2_B | Self-model state representing the threshold of complex task g in mental model m2 of boss B
X32 | HWm2_B | Second-order self-model state for the speed factor of change in the W-states for mental model m2 of boss B
Figure 5 shows the T-states for the mental model states in mental model m2, indicating that learning is happening, boosted by lower excitability threshold values (the curves starting at 0.8 and going down to around 0.5), when complex tasks are performed and mistakes are identified. Figure 6 shows the W- and HW-states, which represent the learning effects of boss B and employees E. The learning speed is shown via the HW-states: learning happens for boss B first (purple line starting from 0) and then for employee E (blue line starting from 0), as the employee takes over the effect from the boss; this is shown by the dotted lines. The two lines which start from value 1 on the y-axis indicate the simple tasks, which are not used and therefore slowly forgotten.
Fig. 2. The overall second-order adaptive network model incorporating both the architecture variant for the boss and for an employee and connections between them for the contagion of the mindset from boss to employee
Table 5. Values for the mindset context factors for the two scenarios

nr | state | limited mindset | extended mindset
X1 | Solid_m1_B | 1 | 1
X2 | NonSolid_m2_B | 1 | 1
X3 | LimMindset_B | 1 | 0
X4 | ExpandMindset_B | 0 | 1
X5 | AcceptsMistakes_B | 0 | 1
X6 | LearningFromMistakes_B | 0 | 1
X7 | SimpModel_B | 1 | 0
X8 | IntricateTasks_B | 0 | 1
X9 | MistakeIdentified_B | – | –
X10 | NoMistakes_B | 1 | 0
5 Discussion

When leaders make mistakes and those mistakes are considered a negative incident, this leads to a fixed way of thinking aimed at avoiding mistakes, which is the main cause of not achieving long-term success. However, when leaders have the confidence to identify their mistakes and learn from them, this boosts the learning process and, as a result, progress happens [6]. The objective of this paper was to analyze the simulation
outcomes for the different mindsets in a comparative manner and illustrate the differences via graphs of these simulation experiments.
Fig. 3. Simulation for the limited mindset scenario.
For the legend of employee E's states, see Fig. 4; for the legend of boss B's states, see Fig. 3. Boss B's willingness to tackle challenging tasks influences the employees, encouraging them to also take on and learn from such tasks. Conversely, if the boss avoids complex assignments, the entire team might refrain from engaging in such a learning process, leading to a lack of progress. This contagion behavior is shown in the model design depicted in Figs. 1 and 2. Via Matlab simulations, the study explored two contrasting mindsets using role matrices (mb, mcw, mcfw, mcfp, msv, iv; see the Appendix [11]). The findings illuminate how the bosses' (B) engagement in simple and complex tasks affected the employees (E). Crucially, extended mindsets facilitated learning from mistakes, in contrast with limited mindsets. Figure 5 revealed heightened learning (boosted by lower excitability thresholds triggered by identified mistakes) during complex tasks and mistake recognition (m2). Figure 6 showcased the boss-employee learning dynamics (W-states), indicating boss-initiated learning followed by employee influence. Notably, neglected tasks exhibited a slow knowledge decay. This simulation-driven approach unraveled intricate leadership-learning interplays, underscoring the pivotal role of mindsets and task complexity in shaping organizational learning dynamics.
Fig. 4. Simulation for the extended mindset scenario.
Fig. 5. The T-states for the extended mindset scenario: decrease of excitability thresholds when mistakes are identified for complex tasks
No other computational models of this double bias phenomenon can be found in the literature. Somewhat related are [9] and its extended version [10], but there the learning from mistakes is covered in a more simplified and abstracted form, and neither the double bias nor the contagion from managers to employees is addressed. For
the network model introduced in the current paper, extensions can still be incorporated to include further effects of a boss who accepts making mistakes and learns from them on employees' personal productivity on tasks, plus the effect on reaching goals.
Fig. 6. The W-states for the extended mindset scenario: Learning
The self-modeling network modeling approach used is a generic adaptive dynamical system modeling approach as it has been shown that any smooth adaptive dynamical system can be represented in a canonical manner as a self-modeling network model [12, 13]. Therefore, choosing it does not introduce limitations.
6 Conclusion

In conclusion, mistakes hold a dual role as both learning opportunities and sources of negativity. This duality influences how we perceive them and make decisions, known as the framing effect. This outlook affects bosses and, by contagion, also their employees, creating a loop called the double bias effect. This plays a big role in organizations: the theory of transformative learning emphasizes learning from personal experiences, but negativity can block this. Ignoring mistakes means missing chances to learn and grow. The double bias limits growth mindsets and transformational leadership; it is a roadblock to shared learning in organizations and affects how well a company can adapt to change. Fixing this requires understanding and changing this bias, promoting mindsets that prioritize learning and adaptability. This study used computational analysis by an adaptive network model to show how important it is to allow mistakes and learn from them. By addressing the double bias, organizations can build a culture of continuous learning and flexible leadership [8].
References
1. Cannon, D.M., Edmondson, C.A.: Failing to learn and learning to fail (intelligently): how great organizations put failure to work to innovate and improve. Long Range Plan. 38(3), 299–319 (2004)
2. Mishra, A.: Learning from mistakes. Journal of Motilal Rastogi School of Management 5(2), 22–31 (2012). https://www.researchgate.net/publication/323695816
3. Treur, J.: Modeling higher-order adaptivity of a network by multilevel network reification. Netw. Sci. 8, 110–144 (2020)
4. Treur, J.: Network-Oriented Modeling for Adaptive Networks: Designing Higher-Order Adaptive Biological, Mental and Social Network Models. Springer Nature (2020). https://doi.org/10.1007/978-3-030-31445-3
5. Treur, J.: Modeling multi-order adaptive processes by self-modeling networks (keynote speech). In: Tallón-Ballesteros, A.J., Chen, C.-H. (eds.) Proc. of the 2nd International Conference on Machine Learning and Intelligent Systems, MLIS'20. Frontiers in Artificial Intelligence and Applications, vol. 332, pp. 206–217. IOS Press (2020)
6. Treur, J., Van Ments, L. (eds.): Mental Models and their Dynamics, Adaptation, and Control: a Self-Modeling Network Modeling Approach. Springer Nature, Cham (2022)
7. Kucharska, W., Bedford, D.A.D., Kopytko, A.: The double cognitive bias of mistakes: a measurement method. Europ. Conf. Res. Methodol. Bus. Manage. Stud. 22(1), 103–112 (2023). https://doi.org/10.34190/ecrm.22.1.1372
8. Shallenberger, D.: Learning from our mistakes: international educators reflect. Front.: Interdiscip. J. Study Abroad 26(1), 248–263 (2015)
9. Samhan, N., Treur, J., Kucharska, W., Wiewiora, A.: An adaptive network model simulating the effects of different culture types and leader qualities on mistake handling and organisational learning. In: Complex Networks and Their Applications XI. Proc. of the 11th International Conference on Complex Networks and their Applications. Studies in Computational Intelligence, vol. 1077, pp. 224–238. Springer Nature (2022). https://doi.org/10.1007/978-3-031-21127-0_19
10. Samhan, N., Treur, J., Kucharska, W., Wiewiora, A.: Computational simulation of the effects of different culture types and leader qualities on mistake handling and organisational learning. In: Canbaloğlu, G., Treur, J., Wiewiora, A. (eds.) Computational Modeling of Multilevel Organisational Learning and Its Control Using Self-modeling Network Models, Ch. 14, pp. 363–408. Springer Nature (2023). https://doi.org/10.1007/978-3-031-28735-0_14
11. Appendix of An Adaptive Network Model for a Double Bias Perspective on Learning from Mistakes within Organizations. See Linked Data at https://www.researchgate.net/publication/373554900 (2023)
12. Treur, J.: On the dynamics and adaptivity of mental processes: relating adaptive dynamical systems and self-modeling network models by mathematical analysis. Cogn. Syst. Res. 70, 93–100 (2021)
13. Hendrikse, S., Treur, J., Koole, S.: Modeling emerging interpersonal synchrony and its related adaptive short-term affiliation and long-term bonding: a second-order multi-adaptive neural agent model. Int. J. Neural Syst. 33(07) (2023). https://doi.org/10.1142/S0129065723500387
Identification of Writing Preferences in Wikipedia

Jean-Baptiste Chaudron1(B), Jean-Philippe Magué2,3, and Denis Vigier1

1 Université Lumière Lyon 2, Laboratoire ICAR, 69676 Bron Cedex, Lyon, France
[email protected]
2 École Normale Supérieure Lyon, Laboratoire ICAR, 69342 Lyon Cedex 07, Lyon, France
3 IXXI, Complex Systems Institute, Lyon, France
Abstract. In this paper, we investigate whether there is a standardized writing composition for articles in Wikipedia and, if so, what it entails. By employing a Neural Gas approximation to the topology of our dataset, we generate a graph that represents various prevalent textual compositions adopted by the texts in our dataset. Subsequently, we examine significantly attractive regions within our graph by tracking the evolution of articles over time. Our observations reveal the coexistence of different stable compositions and the emergence and disappearance of certain unstable compositions over time.

Keywords: Textual genre · Dynamic graph · Temporal graph · Computational linguistics · Wikipedia

1 Introduction

1.1 Writing Preferences in Wikipedia
In Wikipedia, writers iteratively modify texts, starting from a stub and progressing towards a fully formed article. The motivations driving these modifications can be manifold, resulting in a broad spectrum of textual variations. This can range from simple typo corrections to the complete deletion of a paragraph or even the entire article. It includes the addition of new sections, the relocation of sections to other articles, and more. Despite being composed of diverse materials and written by various authors over several years, covering a wide array of subjects, articles maintain a well-structured format. They are now considered reliable sources of information. Even though some guidelines for correct writing exist, authors are free to deviate from them, and in some ways this deviation might even be encouraged, as stated in Wikipedia's fifth pillar: 'Wikipedia has no firm rules' [17]. Research has also demonstrated that there is a degree of homogeneity in Wikipedia's writing, both in English when compared to other encyclopedias [3], and in languages such as Japanese [13]. We have also observed this trend in our own French corpus (which might result in a future publication). Consequently, it is worth exploring the mechanisms that govern the creation and composition of articles, especially given that the process is purely self-organisational. One hypothesis explaining the surprising homogeneity of the articles could be that, even though they are unspoken, there are rules or forces acting upon the writing process and driving it. Alternatively, one could consider whether the introduction of purely random variations in terms of writing composition would lead to a series of articles with similar structures. We will postulate that these forces take the form of an ideal version, in the mind of the writers, of the shape an article should have. These forms could either be shared among all members of the writing community or could represent a compromise between the ideals of different communities of writers. To delve deeper into this question, we will introduce some concepts from textual linguistics, specifically from textual genre theory. This theory focuses on the study of textual forms and how they are categorized into types of texts that share similar forms.

Acknowledgement. We thank the LABEX ASLAN (ANR-10-LABX-0081) of Université de Lyon for its financial support within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) of the French government operated by the National Research Agency (ANR).

1.2 Genre and Prototype Theory
Multiple definitions of textual genres have been proposed, each stemming from a different perspective on the subject. While it is widely accepted that genres largely depend on the social context and the communicative purpose of the production [1], the textual composition and the presence or absence of specific features are also major areas of research in the study of textual genres, especially in computational approaches [10]. These features are predominantly considered to be formal in nature, whether concerning the layout of the texts [2,12] or more textual features, which we will discuss at greater length. For the latter, features are often more linguistically oriented [8,15], particularly those related to what we will refer to as textual composition. These features describe a text in terms of parts of speech (POS), such as the frequency of nouns, verbs, etc., syntactical dependencies (e.g., frequency of subordinates, conjunctions, etc.), verb morphology (e.g., frequency of past tense, future tense, passive verbs, etc.), lexical diversity, mean word length, and so on. The literature concerning these features is somewhat mixed: there is a consensus that there isn't a strong theoretical basis justifying their direct relation to genres, but rather to register and style [1]. Nevertheless, despite these reservations, the empirical literature has found great success in utilizing them [6–8,11,15]. While some features appear to be more relevant than others for distinguishing texts based on their genre, the question of what constitutes a genre remains open. One approach to defining genre is through the lens of Prototype theory [18]. This theory posits that genres are classes of texts that bear resemblance to an idealized version of themselves, known as their Prototype.
For instance, envisioning the Prototype of a genre like Poetry would involve specific features such as the presence of verses and rhyming patterns. However, unlike a precise and fixed definition of what constitutes Poetry, the Prototype theory offers an idealized version from which actual poems can deviate. Consequently, works like Rimbaud's A Season in Hell, which lacks traditional verse and rhymes, can still exhibit poetic qualities that place it within the category of Poetry. Because the Prototype theory doesn't necessitate a rigid set of features or strict boundaries, it has the strength to elucidate the process of fuzzy categorization.

1.3 Prototype and Writing Preferences
The Prototype of a genre possesses two crucial features. Firstly, it acts as a sort of center of mass for its genre, representing the average or typical instance of that genre. For instance, in the context of Poetry, the Prototype would resemble an average poem. Secondly, the Prototype is believed to exert a centripetal force, influencing texts to align with its characteristics. If we aim to operationalize this concept to model the modification of texts and the generation process of articles in Wikipedia, whether random or directed, we can consider the presence or absence of a Prototype. In the case of directed modifications, taking into account the center-of-mass property, we would anticipate a higher density of texts clustered around a specific point in the space of the Prototype, suggesting that these texts share a similar textual composition with the Prototype. In terms of the centripetal feature of the Prototype, the outcome would differ, as each modification would gradually steer texts towards the attractive point, which is the Prototype. Linguistically, this can be interpreted as texts converging gradually, in terms of linguistic composition, towards the ideal version. Conversely, in a context of pure randomness, we would anticipate either the absence of a higher-density point or the absence of regions with greater attraction in our phase space. It is worth noting that in the scenario of a single Prototype acting as the attracting force, we expect the scaling of the features performed during modifications to tend to diminish the differences between texts, as they already share the same Prototype. Our research question aimed to explore how writing forms within Wikipedia can exhibit homogeneity and harmony across the project despite collaboration among diverse entities with differing goals. We suggested that if there is a shared structure, there should be a driving force behind modifications; otherwise, textual variations might be better modelled as a random process. By introducing the concept of the Prototype from genre theory, we put forth a strong contender for describing the type of force at play in this context. To further clarify, let us restate our hypotheses. Firstly, we propose that if genres are organized around Prototypes, we should observe either multiple points of higher density in the phase space or regions with significant attractive influence.
Secondly, if Wikipedia exhibits homogeneity, there should be just one such Prototypical form. Lastly, if no specific forces drive the process, we should observe neither a particularly high-density point nor highly attractive textual forms.
2 Method

2.1 Dataset
Our corpus consists of 4800 articles sourced from the French Wikipedia, obtained in March 2022 using the PyWikibot Python library. At extraction time, these articles were labelled either "good" or "featured", indicating that the community deemed them well-structured and largely comprehensive in terms of content. For each of these articles, we extracted all available versions, encompassing their entire revision history from the initial version added to Wikipedia up to the present. Each version of an article is linked to its corresponding revision timestamp and the author responsible for the changes. This compilation resulted in a dataset of over 2 million texts, representing every iteration of each article over time. The extraction, processing and transformation of the texts and the subsequent analysis were run on the computers of the Centre Blaise Pascal of the ENS of Lyon [14].

2.2 Preprocessing of the Dataset
To conduct our desired analysis, we needed to transform our texts into vectors. While common methods like text embedding (e.g., using techniques like Doc2Vec) could have been employed, our objective was to create vectors that would enable us to differentiate texts based on their genre-related features.

Table 1. Linguistic Features for Textual Analysis

Feature | Description
Part of Speech (POS) | Frequency of different parts of speech, such as nouns, verbs, adjectives, adverbs, etc.
Syntactical Dependencies (DEP) | Frequency of syntactical constructs like subordinates, conjunctions, and other dependencies
Verb Morphology | Frequency of different verb forms, such as past tense, present tense, passive voice, etc.
Lexical Diversity | Measurement of vocabulary richness, including the number of unique words used
Other morphological properties | Average length of words in the text, number of sentences, number of tokens, etc.
Wikipedia's features | Frequency of Tags, Text, Templates, Images, etc.
As a result, we extracted a set of features that are typically associated with genre characteristics, as described in Table 1. We draw attention to the fact that we have added Wikipedia-specific features. These features are Wikimarkup elements specific to the platform; they allow users to add metatextual information or to structure the text. Examples include links between articles or to other pages, images, citations, dates, and so on. They are relevant in the context of genre as they provide information about the formal structure of the text, thus helping to better isolate features specific to the text's form. Except for Wikipedia's specific features, the features were extracted using SpaCy's tagging models [4]; in this case, we used the model named 'fr_core_news_lg', a general-purpose model for French trained on a news dataset. Subsequently, after extracting these features from the texts, we calculated their averages, taking into account the number of tokens present in each text. This process allowed us to create vectors representing each text's genre-related features, which we then used for further analysis. We were left with slightly over 2 million vectors, each representing a text, with over 120 dimensions. These vectors underwent normalization, resulting in features with a mean of 0 and a standard deviation of 1. Subsequently, we employed Truncated Singular Value Decomposition (tSVD) to reduce the dimensions from 120 to 75. This number was selected based on the explained variance ratio score: we aimed for the minimal number of dimensions that accounted for more than 99% of the dataset's variation. The motivation for this transformation is purely to reduce the correlation between features, which might bias the intended analysis. Another concern might be the potential loss of interpretability of the new features; we address this in Sect. 2.5 regarding the analysis of the prototypes. The outcome was vectors representing the genre-related features of all versions of our texts. These vectors exhibited no correlation between features (the maximal correlation between two dimensions being 1e−14) and had undergone scaling. This set the stage for us to conduct distance analyses, exploring relationships and patterns within the data.
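A condensed sketch of this preprocessing pipeline using spaCy and scikit-learn; the helper composition_vector and the exact feature subset are illustrative and cover only part of the 120 features summarized in Table 1.

```python
import numpy as np
import spacy
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import TruncatedSVD

nlp = spacy.load("fr_core_news_lg")

def composition_vector(text, pos_tags, dep_tags):
    """Per-token frequencies of POS and dependency labels plus two
    morphological aggregates (an illustrative subset of the full set)."""
    doc = nlp(text)
    n = max(len(doc), 1)
    pos = [sum(t.pos_ == p for t in doc) / n for p in pos_tags]
    dep = [sum(t.dep_ == d for t in doc) / n for d in dep_tags]
    lexical_diversity = len({t.text.lower() for t in doc}) / n
    mean_word_length = sum(len(t.text) for t in doc) / n
    return np.array(pos + dep + [lexical_diversity, mean_word_length])

# X: one row per text version (over 2 million rows in our case)
# X = np.vstack([composition_vector(t, POS_TAGS, DEP_TAGS) for t in texts])
# X75 = TruncatedSVD(n_components=75).fit_transform(
#           StandardScaler().fit_transform(X))
```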
2.3 Prototype Identification
Topology Extraction Through Neural Gas: A major challenge with our approach is that identifying attractive regions solely from the data isn't straightforward, particularly in the presence of noise. Normally, such a process is carried out with knowledge of the vector field associated with the system, rather than in a data-driven manner. To address this issue, one approach is to discretize the phase space in order to work with a simpler representation of the data. In our case, we have discretized our dataset into a graph where each region is represented by a node connected to its neighboring regions. To perform this discretization we used a Neural Gas (NG) [9], which enables this type of reduction. In the fast-growing research in machine learning and topological data analysis, a large number of newer and alternative
approaches exist; however, we considered this one robust, efficient and fast, making it a pragmatic choice for our purpose. Through the NG discretization, the Prototypes become regions of the phase space, each represented by a specific textual composition. The NG achieves this by symbolizing regions through their centers, as the nodes in the NG-generated graph are linked to the central vector of the region they model; the result of this process is shown in Fig. 1(a). In this article, we used a 100-node graph, resulting from the NG learning of the topology, to represent the distribution of our data. The nodes relate to regions with a high density of texts, and the links to the adjacency of the nodes, i.e. the presence of a more or less high density of data points in between the two nodes.
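A minimal Neural Gas sketch with competitive Hebbian edge learning in the spirit of [9], using the parameter schedules given in Sect. 2.6; edge aging is omitted for brevity, so this is an illustration rather than the exact implementation.

```python
import numpy as np
import networkx as nx

def neural_gas(X, n_nodes=50, t_max=50_000, seed=0):
    """Learn node centres W and an adjacency graph G approximating
    the topology of the data X (one row per sample)."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_nodes, replace=False)].astype(float)
    G = nx.Graph()
    G.add_nodes_from(range(n_nodes))
    anneal = lambda p0, p1, n: p0 * (p1 / p0) ** (n / t_max)
    for n in range(t_max):
        x = X[rng.integers(len(X))]
        lam, eta = anneal(10, 2, n), anneal(0.5, 0.05, n)
        order = np.argsort(np.linalg.norm(W - x, axis=1))   # nodes by distance
        for rank, i in enumerate(order):
            W[i] += eta * np.exp(-rank / lam) * (x - W[i])  # rank-based update
        G.add_edge(order[0], order[1])  # connect the two closest nodes
    return W, G
```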
Fig. 1. Representation of our dataset as a graph. (a) 2D projection of our graph, where nodes represent high density regions and edges represent the existence of more or less high density of point in-between the nodes. Colors represent the log of the number of articles associated with a node. (b) Visualisation of the attractor on the graph (in red), over the whole graph without its edges (in pale blue), 5 disconnected components can be seen.
Attraction Metric: We can now interpret the process of attraction as a region's capacity, over time, to accumulate new articles while preventing them from leaving that region. This phenomenon can be understood within a region as an income rate of texts exceeding the rate at which texts depart. We can characterize a region's behavior as follows:
– An attractive region (or sink) is one where the income rate is greater than the outcome rate.
– A transitional region exhibits an equal rate of income and outcome.
– A source region experiences a higher outcome rate than income rate.
This metric, when applied to the discrete topology of a graph, is akin to the concept of "divergence" in dynamical systems and vector analysis.
Therefore, even though we use terms like attractive, transitional, and source nodes in this paper, readers familiar with the field may recognize the terminology of sink and source nodes. This interpretation provides a framework for understanding the dynamics of regions within the graph and how they accumulate and retain texts over time.

Statistical Significance: The final question to address is determining when the outflow significantly surpasses the inflow (or vice versa). For instance, if a node witnesses 200 texts departing while 210 have entered, can we categorize it as an attractive node? To address this, we employ a straightforward binomial test. This test revolves around a random distribution of binary outcomes akin to a coin flip. In our scenario, we consider this random distribution to mimic a fair coin toss, implying that an equal amount of income and outcome is expected (akin to heads and tails of a coin). Consequently, we adopt a binomial distribution with a probability of success set at 0.5. Subsequently, we gauge the probability that this process, repeated N times, would yield M or more observed successes. In the case of 210 outgoing and 200 incoming texts, the likelihood of such a process producing 210 or more outcomes among 410 events is 0.33. Consequently, we would not categorize such a region as a definitive source; rather, it seems to be a transitional region. In our analysis, we choose a significance level of p < 0.01 to consider the process significant. Additionally, we apply a Bonferroni correction to the p-value, since the calculation is performed for 50 nodes; this correction brings the threshold down to 0.01/50, i.e. p < 2 × 10⁻⁴. This approach ensures a methodical determination of whether a region's outcome surpasses its income, shedding light on the regions' behavior and significance within the system.
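With scipy this test is a one-liner; the helper below is an illustrative sketch incorporating the 50-node Bonferroni correction described above.

```python
from scipy.stats import binomtest

def classify_node(n_in, n_out, n_nodes=50, alpha=0.01):
    """Classify a node as sink, source or transitional with a binomial
    test under H0: arrivals and departures are equally likely (p = 0.5)."""
    n = n_in + n_out
    threshold = alpha / n_nodes  # Bonferroni correction
    if binomtest(n_in, n, 0.5, alternative="greater").pvalue < threshold:
        return "sink"
    if binomtest(n_out, n, 0.5, alternative="greater").pvalue < threshold:
        return "source"
    return "transitional"

# the example from the text: 210 departures vs 200 arrivals is not significant
# binomtest(210, 410, 0.5, alternative="greater").pvalue  # about 0.33
```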
Fig. 2. Time series of the attractiveness of nodes in the topological graph, ordered from the most active node to the least active. The score indicates the ratio of texts attracted to a node over the number of texts moving into or out of the node: 1 means that texts only entered the node's region, and 0 that texts only left this region. The hatched parts signal that these exchanges were not significant.
2.4 The Whole Procedure
Putting everything together, we proceed as follows:
1. Infer the topology of the data, using an NG.
2. Compute the number of texts entering each node and the number of texts leaving each node, for a given time span.
3. Compute the p-value that each node is either a sink or a source during this time span.
If there is only one prototype, we expect to have only one region with the attractiveness property, or a set of connected regions. If there are several prototypes, we would expect to observe several unconnected regions that appear attractive at different dates. If there is no prototypical region, we would expect to see no attracting nodes.

2.5 Prototype Analysis
If attractive regions are identified, two situations can arise. Firstly, attractors may be disconnected in the graph, indicating that they do not cover adjacent regions or, in other words, that they do not exhibit a similar textual composition. In this situation, we would consider them as distinct attractive forms of texts. Conversely, attractive regions could be adjacent, implying that the corpora they represent are spread over these regions; in this case, we would consider them to be one attractor. In either case, we can perform an analysis of the corpora associated with these attractors. Since the NG graph is produced after dimensionality reduction, interpreting the dimensions might be challenging. To address this, we propose using the Relief algorithm [5], which allows us to contrast features of texts from different clusters to identify those most specific to each cluster. This way, we can understand the procedure as a two-step approach: first using a topological approximation of the data to identify prototypical regions, and then analyzing the texts inside these regions to understand what specifically ties them together. However, as the purpose of this paper is not to conduct an extensive linguistic analysis of these attractors, we will limit ourselves to the first three features and provide a broad overview of their meaning.
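A basic Relief variant can be sketched as follows for a binary in-cluster vs. out-of-cluster labelling; it follows the classic nearest-hit/nearest-miss formulation and only approximates the exact algorithm referenced in [5].

```python
import numpy as np

def relief(X, y, n_iter=1000, seed=0):
    """Score each feature by how well it separates nearest misses
    (other label) from nearest hits (same label)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                      # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], dists, np.inf))
        miss = np.argmin(np.where(y != y[i], dists, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter  # the highest-scoring features are the most specific
```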
2.6 Implementation Details
We list here some implementation details. As stated above, we performed a tSVD on the entire dataset, reducing the dimensions to 75, with the goal of retaining 99% of the explained variance. Regarding the Neural Gas (NG) implementation, we followed the details outlined in the original paper [9] and utilized the following parameters. Firstly, we trained it with 50 nodes over 50,000 iterations (i.e., the number of samples seen). Secondly, we set the values of the parameters λ, η, and the age threshold for edge
removal to 10, 0.5, and 15 as initial values, and 2, 0.05, and 50 as final values, respectively. The value of each parameter evolves according to the function p(n) = p_i (p_f / p_i)^(n / t_max), where p_i and p_f are the initial and final values of the parameter, n is the iteration number, and t_max is the total number of iterations, which in this case is 50,000. For computing the attraction scores, we filtered our dataset into six-month periods, from January–June 2004 to July–December 2022. We also considered the latest version, if available, of texts preceding these periods and incorporated them into the filtered dataset. Subsequently, we identified the region to which each text belongs by finding the node center to which it is closest. Following this, we examined, article by article, the node transitions they underwent, if any. Finally, we calculated, node by node, how often an article transitioned into and out of each node, in order to compute the p-value for the node's attractiveness. Lastly, we also chose to perform what we call trajectory smoothing. Given that modifications in Wikipedia can vary greatly, some being substantial while others are minimal, the evolution of our texts isn't continuous and could be likened to the science-fiction concept of jumping into hyperspace, where a spaceship vanishes from one point only to reappear at another. If we assume that only Prototype forces guide the evolution of texts, then we can posit that the trajectories of similar texts, understood as the time series of the vectors we've created, are similar, even though the degree of variation might differ. Hence, to enable smoothing and more meaningful comparisons between the evolutions of texts, we adjusted the cluster transitions to follow the shortest path between the two clusters. If the two nodes are already linked, the textual variation is represented in the same manner, as a transition between the same two nodes. If not, every node on the shortest path between the initial two nodes is included in the transitions. This inclusion doesn't substantially alter the result, as the difference between input and output remains the same, but it may reduce the significance of this difference. Indeed, without this procedure, nodes reached by a small fraction of texts can be considered significantly attractive, whereas the smoothing makes them appear more as transitive regions.
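The trajectory smoothing itself reduces to a shortest-path computation on the NG graph, as in the sketch below; smooth_trajectory is an illustrative name, and the function assumes the visited nodes are connected in the graph.

```python
import networkx as nx

def smooth_trajectory(G, node_sequence):
    """Replace each jump between non-adjacent nodes by the shortest
    path between them, so trajectories become contiguous."""
    smoothed = [node_sequence[0]]
    for a, b in zip(node_sequence, node_sequence[1:]):
        if a == b:
            continue
        # shortest_path includes both endpoints; drop the first one
        smoothed.extend(nx.shortest_path(G, a, b)[1:])
    return smoothed
```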
3 Results
The initial research question we aimed to address concerned the possible existence of attractors within Wikipedia. In Fig. 2, we present the outcomes of the analysis concerning the nodes’ attractiveness within the graph. As is evident, multiple nodes exhibit attracting characteristics, indicating potential Prototypes. Another interesting aspect of these results is the dynamics of these attractors, which appear at different times during the process. For instance, we can see node 43 starting to be significantly attractive only in 2009, or node 0 roughly around 2014. Also, node 0 starts as a source and then becomes a sink, indicating
a curious dynamic. We can also observe that the first two regions (49 and 48) are almost continuously attractive, while node 24 is intermittently attractive. Another aspect we considered was the presence of multiple distinct attractors. Although Fig. 2 already displays the existence of several attractors, we postulated that if two attractive regions were connected, they could potentially be treated as a single attractor. The connectivity of the considered regions is showcased in Fig. 1(b). Notably, three disconnected areas arise, allowing us to treat them as distinct attractive regions. Furthermore, the Relief analysis, presented in Table 2, allowed us to better understand which features were associated with each Prototype. This table shows, for each group of attractive nodes, the most specific features of the texts associated with these nodes, in contrast with the texts not belonging to these nodes. Without delving too much into the details, we can already observe a few interesting trends. For instance, the first group, related to node 0, which was both a source and a sink, seems to have a strong relationship with sentence construction. Indeed, the markers and open clausal complements indicate that this group is specific in its use of subordinates, potentially resulting in long and complex sentences. On the contrary, the group around node 24 seems to be specific only in its frequency of Wikipedia-specific features, such as wikilinks or the presence or absence of the Featured article template. Lastly, the group 48, 49, 43 is specific in its use of modifiers of nominals, indicating a distinct sentence structure, potentially simpler than that of group 0.

Table 2. Linguistic Features for Textual Analysis

Nodes      | Feature 1                      | Feature 2    | Feature 3
0          | DEP: Marker                    | POS: Pronoun | DEP: open clausal complement
24         | Template: Sister project links | # Wikilink   | Template: Featured article
48, 49, 43 | DEP: Modifier of nominal       | POS: Numeral | # Text
4 Discussion
From these results, we draw the following conclusions. First, there are standard ways of writing in Wikipedia, and in fact, there appear to be several. Second, some of these writing styles have remained quite consistent throughout the encyclopedia’s history, while others are more sporadic, appearing and disappearing within six-month or one-year spans. Third, based on our observations, it seems that these specific ways of writing strongly revolve around Wikipedia’s specific markups or syntactical constructions. However, our present investigations are
not sufficient to clearly determine their linguistic and functional interpretation. Through these inquiries, we sought to understand whether Wikipedia indeed comprises articles with similar genres and whether various types of texts exist within it. We believe we have demonstrated that there are indeed preferred ways of composing text, associated with genres, but that multiple coexisting styles are present. Furthermore, we have highlighted that these writing preferences can emerge or fade over time, with some remaining stable. Lastly, we have gained a deeper understanding of the preferred writing composition of authors, enabling a more thorough exploration of writer preferences. While we believe these results provide valuable insights into the writing process of Wikipedia contributors, we also acknowledge that our dataset's limitation to featured and good articles might lead to the identification of specific standard Wikipedia articles. It is possible that other preferred writing styles exist but are not represented in these more standardized articles. Moreover, we recognize that the linguistic composition of the text does not entirely uncover the complexities that determine textual genre. Additional structural features should be explored to gain a deeper understanding of writers' preference dynamics regarding textual structure. Finally, we annotated our data using available algorithms from spaCy, which are known to make mistakes, especially on potentially noisy datasets like ours. Older versions of Wikipedia texts are often in formats that introduce some noise into the data. These errors could have impacted the annotation process and, subsequently, the text vectorization, potentially leading to analysis errors. Considering these aspects, we believe future directions for this work could involve removing noisy features through qualitative analysis, expanding the corpus to include more articles, and incorporating more structural features, either from the discourse structure or the page layout. Such enhancements could enrich textual linguistic and psychological studies. We contend that these results directly unveil writers' preferences.
References

1. Biber, D., Conrad, S.: Register, Genre, and Style. Cambridge University Press (2019)
2. Chen, N., Blostein, D.: A survey of document image classification: problem statement, classifier architecture and performance evaluation. Int. J. Doc. Anal. Recogn. 10, 1–16 (2007). https://doi.org/10.1007/s10032-006-0020-2
3. Emigh, W., Herring, S.C.: Collaborative authoring on the web: a genre analysis of online encyclopedias. In: Proceedings of the Annual Hawaii International Conference on System Sciences, vol. 5, p. 99 (2005). https://doi.org/10.1109/hicss.2005.149
4. Honnibal, M., Montani, I.: spaCy 2: natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear 7(1), 411–420 (2017)
5. Kira, K., Rendell, L.A.: A practical approach to feature selection. In: Machine Learning Proceedings 1992, pp. 249–256. Morgan Kaufmann (1992). https://doi.org/10.1016/B978-1-55860-247-2.50037-1
6. Lagutina, K.V., Lagutina, N.S., Boychuk, E.I.: Text classification by genres based on rhythmic characteristics. Autom. Contr. Comput. Sci. 56, 735–743 (2022). https://doi.org/10.3103/S0146411622070136
7. Lee, Y.B., Myaeng, S.H.: Text genre classification with genre-revealing and subject-revealing features. In: Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 145–150 (2002). https://doi.org/10.1145/564376.564403
8. Lieungnapar, A., Todd, R.W., Trakulkasemsuk, W.: Genre induction from a linguistic approach. Indonesian J. Appl. Linguist. 6, 319–329 (2017). https://doi.org/10.17509/ijal.v6i2.4917
9. Martinetz, T., Schulten, K.: A "neural-gas" network learns topologies (1991)
10. Mirończuk, M.M., Protasiewicz, J.: A recent overview of the state-of-the-art elements of text classification. Expert Syst. Appl. 106, 36–54 (2018). https://doi.org/10.1016/j.eswa.2018.03.058
11. Santini, M.: A shallow approach to syntactic feature extraction for genre classification. In: Proceedings of the 7th Annual Colloquium for the UK Special Interest Group for Computational Linguistics, pp. 6–7. Birmingham, UK (2004)
12. Shin, C., Doermann, D., Rosenfeld, A.: Classification of document pages using structure-based features. Int. J. Doc. Anal. Recogn. 3, 232–247 (2001). https://doi.org/10.1007/PL00013566
13. Skevik, K.A.: Language homogeneity in the Japanese Wikipedia. In: Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation, pp. 527–534 (2010)
14. Quemener, E., Corvellec, M.: SIDUS–the solution for extreme deduplication of an operating system. Linux J. 2013(235), Article no. 3 (2013)
15. Vicente, M., Maestre, M.M., Lloret, E., Cueto, A.S.: Leveraging machine learning to explain the nature of written genre. IEEE Access 9, 24705–24726 (2021). https://doi.org/10.1109/ACCESS.2021.3056927
16. Wan, M., Fang, A.C., Huang, C.R.: The discriminativeness of internal syntactic representations in automatic genre classification. J. Quant. Linguist. 28, 138–171 (2021). https://doi.org/10.1080/09296174.2019.1663655
17. Wikipedia: Five pillars. https://en.wikipedia.org/wiki/Wikipedia:Five_pillars
18. Wolowski, W.: La sémantique du prototype et les genres (littéraires). Studia Romanica Posnaniensia 33, 65–83 (2006). https://doi.org/10.14746/strop.2006.33.005
Influence of Virtual Tipping and Collection Rate in Social Live Streaming Services

Shintaro Ueki1(B), Fujio Toriumi2, and Toshiharu Sugawara1

1 Department of Computer Science and Communications Engineering, Waseda University, Tokyo 169-8555, Japan
[email protected], [email protected]
2 Department of Systems Innovation, The University of Tokyo, Tokyo 113-8654, Japan
[email protected]
Abstract. Social live streaming services (SLSS) and other consumer-generated media (CGM) offer gamification to attract people. Virtual gifts/tips such as Twitch's "bits" are an example and construct interactive relationships between live streamers and viewers. However, their impact on user behavior, and how the rates at which platforms collect a portion of the tips affect user behavior and the platform's rewards, have not been sufficiently analyzed. This study focuses on the fact that CGM, including SLSS, is a type of public goods game, and we propose a model that considers tipping systems with collection by platforms. Using our agent-based simulated environment, we demonstrate that the effect of tipping on user behavior depends considerably on the preference for psychological or monetary rewards and on network positions, such as the degree. We also show that appropriate collection rates maximize the platform's rewards. We believe that our results contribute to the design and operation of SLSS platforms.

Keywords: Social live streaming services · Agent-based simulation · Virtual tip · Monetary reward · Game-theoretic model

1 Introduction
Social live streaming services (SLSS) such as Twitch, YouTube Live, Taobao, and YouNow have established another type of media that is different from text- and photo/image-based social media and is now used by many people worldwide. They have become more closely related to people's daily and real-time lives. Unlike conventional passive media such as television and newspapers, they are interactive and provide virtual third places by automatically forming several communities. We enjoy participating in SLSS because we seek (virtual) social communication to receive psychological rewards, i.e., to satisfy our need for
community belonging, affirmation from agreements with people sharing the same values, entertainment, and information exploration [4]. An SLSS is a type of consumer-generated media (CGM), such as social networking services (SNS), video-sharing services, and review sites, in which users voluntarily submit their own content. CGM are supported by the psychological rewards users receive for their posts and offer some types of gamification that allow users to receive psychological rewards through engagement behaviors such as (meta-)comments, clicking "Like!" buttons, and retweets. SLSS also have similar mechanisms, such as posting chats and stamps, providing psychological rewards through perceived shared experiences [2]. Moreover, some CGM, such as YouTube, encourage voluntary participation through advertising revenue or partnerships that are usually provided by platforms. Such monetary incentives can be effective if offered at the right time and under appropriate conditions [14]. Some SLSS and SNS offer another distinctive form of gamification, virtual tipping, which allows users to exchange monetary rewards [9]. Virtual tipping is, for example, "Bits" on Twitch, "SuperChat/SuperStickers/SuperThanks" on YouTube Live, and "Stars" on Facebook. This allows users to directly support creative posters/streamers and highlight their comments to connect with them. Streamers can also earn monetary rewards. The tipping system also allows the platform to collect tips from users at a certain rate to generate income. However, despite the widespread use of virtual tipping, there has been insufficient analysis of the impact of the tipping system and the platform's collection rate on streamers' and viewers' behaviors. Thus, we discuss these from the viewpoint of a game-theoretical model. A theoretical study of psychological rewards on CGM was proposed by Toriumi et al. [11] as a dual structure of Axelrod's public goods game [1]; it identified the balance of costs and (psychological) rewards needed to maintain users' voluntary behavior. Hirahara et al. [5] extended this model based on the characteristics of actual SNS and showed the importance of gamification elements, such as the "Like!" button and read marks, which provide small psychological rewards. Furthermore, Usui et al. [13,14] added monetary reward schemes to an SNS model and discussed the effectiveness of monetary rewards on users' behavioral strategies. However, the monetary rewards they considered were advertising revenues or contracts that the platform unilaterally offered to users, and they did not consider monetary exchanges between users, such as virtual tips. Therefore, we propose a model with a tipping system and analyze its effects on users' behaviors. We also investigate how the collection rates of tips affect their behaviors. We believe that our results will be helpful in designing SLSS and CGM that operate while gaining income.
2 Related Work
Various studies have targeted SLSS and CGM. Lim et al. [7] demonstrated, using structural equation modeling, that the motivations for repeated viewings in SLSS include the desire to assimilate into a community and psychological
immersion into chats. Furthermore, Sjöblom and Hamari [10] analyzed data on streaming in Twitch and found that the motivation to watch live games played by others was social integrative: users can achieve deep involvement in the community as well as information gathering, emotional experience, and distraction. Some studies focused on the effects of virtual gifts and tips. For example, Xu et al. [16] showed that the reason for giving a virtual tip is appreciation for enjoying a streamer's performance from an individual perspective. Similarly, Lee et al. [6] argued that virtual tipping is intended to attract the attention of crowds and to let streamers and their communities know the desired topic of the tip provider. However, these studies involve empirical and statistical surveys of specific platforms; they are not theoretical arguments common to all platforms, such as ours. Several model-based studies on SLSS and CGM have also been conducted. For example, Chalakudi et al. [3] used an evolutionary game to explain the effects of individual preferences on streaming changes and environments. Some studies have investigated the impact of psychological and monetary rewards on CGM using evolutionary games. Toriumi, Yamamoto, and Okada [11] discovered that CGM has the features of a public goods game [1] and proposed the meta-reward game, which has a dual structure of Axelrod's meta-norms game. They demonstrated that meta-comments played a crucial role in motivating voluntary participation. Subsequently, the meta-reward game was modified into the SNS-norms game [5] to align more closely with the nature of CGM. Usui et al. [14] extended this model to investigate the impact of monetary incentives offered by CGM platforms and identified common dominant behaviors. However, this model does not consider virtual tips between users, even though they are monetary rewards.
3 Proposed Model and Methodology

3.1 Overview
The SNS-norms game with monetary rewards and quality (SNS-NG/MQ) [14] is a game-theoretical model based on the SNS-norms game (SNS-NG) [5]. SNS-NG represents only the psychological rewards on an SNS, obtained when reading an item, receiving a comment on an item, or receiving a comment-return, i.e., a meta-comment. The actions that induce these rewards are determined by the users' willingness to post and comment, whose appropriate rates are learned by a genetic algorithm (GA). SNS-NG/MQ further considers the quality of an item and the monetary reward schemes offered unilaterally by the platform. Note that an item refers to an object on an SNS, including articles, photos, and videos. User behavior that provides psychological rewards to others naturally incurs some costs; however, users can gain psychological rewards through interactions, and monetary rewards if the platform offers them. A user is motivated by utility, which is the weighted sum of the psychological and monetary rewards minus the associated cost.
Our proposed method, the SNS-norms game with tip and quality (SNS-NG/TQ), is another extension of SNS-NG for SLSS to represent virtual tipping and understand its impact on user behavior, including aggressiveness in live streaming and attitudes toward quality. In SLSS, an item post corresponds to a live stream of a few seconds to a few minutes, while, as in an SNS, a comment is a text message posted by a viewer during streaming, and a meta-comment refers to a response to the comment from the streamer. We then modify the monetary rewards so that they are offered not by the platform unilaterally but by other users interactively, after the platform collects them at a certain collection rate λ (0 ≤ λ ≤ 1.0). We added the parameter tipping interest to indicate the degree of each agent's interest in tipping (i.e., providing tips). Here, we consider virtual tips of the kind already implemented, such as Twitch's "bits" and YouTube Live's "SuperChat," wherein a viewer can offer only a tip or attach a tip to a text message (comment); in the latter case, the comment accompanies the tip. Therefore, three types of messages combining comments and tips are displayed in the comments section:
- tip-comment: a virtual tip with a message.
- tip-only: a virtual tip with no message.
- comment-only: a normal comment/chat, as in a general SNS.
For simplicity, we introduce a tip unit ρ ≥ 0, meaning that every tip has the same constant value.
3.2 SNS-Norms Game with Tip and Quality
An SNS-NG/TQ game is performed on an undirected graph G = (A, E), where A = {a1, . . . , aN} is the set of N agents corresponding to users and E is the set of edges representing social connections, such as friend relationships in SLSS/CGM. Therefore, live streams or items streamed by ai can only be viewed by aj ∈ Nai, the neighbor agents of ai, where Nai = {aj | (ai, aj) ∈ E}. Like SNS-NG/MQ, ai ∈ A has the behavioral parameters: the streaming rate Bi (which corresponds to the posting rate in SNS-NG), the comment (and meta-comment) rate Li (0 ≤ Bi, Li ≤ 1), and the quality of items (i.e., lives) Qi (0 < Qmin ≤ Qi ≤ 1), where Qmin (> 0) is the lower bound of the quality parameter. These parameters indirectly determine user utility through behavioral interactions with neighbor agents. To express tipping behavior, we introduce a parameter, the tipping rate Ti = (1 − λ) · Tint,i (where 0 < Ti, Tint,i < 1), for ∀ai ∈ A in SNS-NG/TQ, based on ai's tipping interest Tint,i. We assume that Tint,i indicates an individual's intrinsic interest in tipping behavior and is not affected by the collection rate. By contrast, Ti reflects the collection rate; thus, agents will hesitate to tip if it is high. As Ti and/or Qj increase, ai is more likely to tip aj, where aj ∈ Nai. Note that in the extreme case of λ = 1, all tips are collected, and we assume that tipping behaviors become meaningless for users. Similar to SNS-NG/MQ, we also use the parameter Mi (0 ≤ Mi ≤ 1), which indicates the degree of preference for monetary rewards relative to psychological rewards; it is likewise intrinsic and unchanged. We classify the agents into two
Fig. 1. Flow of SNS-Norms Game with Tip and Quality.
subsets:

Aα = {ai ∈ A | Mi < 0.5},  Aβ = {ai ∈ A | Mi ≥ 0.5}.   (1)
The main difference from SNS-NG/MQ related to Mi is that monetary rewards are offered by the platform in SNS-NG/MQ; however, in SNS-NG/TQ, they are offered by other agents that also have different preferences for monetary rewards. Thus, tipping behavior incurs monetary costs for individual agents, and the behaviors of agents in Aα and Aβ are affected differently by Mi. For example, the agents in Aα may offer more tips if they gain more psychological reward.
3.3 Game Process
Agent ai on G = (A, E) performs SNS-NG/TQ with its neighboring agents. Figure 1 shows the process of one game round by agent ai. After viewing a live stream by ai, agent aj ∈ Nai first determines whether it offers a tip ρ to ai and then whether it posts a comment with the tip; we believe this order of decisions matches actual SLSS. If a tip is not offered, the remaining process is identical to that of SNS-NG/MQ. Note that if aj offers a tip, ai gains the remaining tip ρ · (1 − λ) after collection by the platform. The behaviors of agent ∀ai ∈ A in a round are determined by the behavioral parameters Bi, Li, Qi, and Ti. In Stage 1, ai streams an item (live) with probability

P_i^0 = Bi × Qmin / Qi.   (2)

Eq. (2) indicates that agents stream less frequently when the quality specified by Qi is higher, reflecting the additional elaboration that high-quality content requires. Note that the agents' actions in a game round proceed concurrently. When ai streams an item, it pays c_i^0 = C^0 · Qi. If ai does not stream an item, this round of ai ends.
Subsequently, agent aj ∈ Nai views the item streamed by ai with probability P_{j,i}^1 = Qi/sj, where sj is the number of items streamed by the agents in Naj in this round. When sj = 0, we define P_{j,i}^1 = 0. Thus, P_{j,i}^1 indicates that higher-quality items are more likely to be viewed, although their quantities are smaller than those of lower-quality items. Then, aj, who viewed the item, gains a psychological reward r_i^0 = R^0 · Qi as the benefit. In Stages 2A and 2B of Fig. 1, agent aj, who views the item streamed by ai, determines whether it offers a tip ρ to ai with probability P_{j,i}^2 = Qi · Tj (recall that Tj = (1 − λ) · Tint,j); therefore, P_{j,i}^2 is affected by the collection rate λ and the quality of the item. When aj decides to offer ρ, ai and aj enter Stage 2A; otherwise, they enter Stage 2B. In Stage 2A, aj offers ρ and ai gains a monetary reward ρ · (1 − λ) after collection by the platform. Subsequently, aj determines whether it will provide a comment (tip-comment) with probability P_{j,i}^3 = Lj · Qi; otherwise, aj offers only the tip (tip-only). Subsequently, they enter Stage 3A. Meanwhile, in Stage 2B, aj provides a comment (comment-only) with probability P_{j,i}^3, and they enter Stage 3B; otherwise, aj does nothing and this game round ends. If aj provides a comment in Stage 2A or 2B, it pays cost C^1 and ai obtains a psychological reward R^1 for every received tip-comment and comment-only. In Stages 3A and 3B, whatever the case (tip-comment, tip-only, or comment-only), ai, who streamed the item, will reply with a meta-comment to aj with probability P_i^4 = Li · Qi. Subsequently, ai pays the cost c^{2,tc}, c^{2,t}, or c^{2,c} corresponding to a tip-comment, tip-only, or comment-only, respectively, and aj receives a psychological reward r^{2,tc}, r^{2,t}, or r^{2,c}. In our model, we assume that the collection rate inversely affects these costs and rewards:

c^{2,tc} = (1 − λ) · C^t + C^c,   r^{2,tc} = (1 − λ) · R^t + R^c,
c^{2,t} = (1 − λ) · C^t,          r^{2,t} = (1 − λ) · R^t,
c^{2,c} = C^c,                    r^{2,c} = R^c,                    (3)
where C^t and R^t are the base cost and psychological reward, respectively, for tip-only, and C^c and R^c are those for comment-only. Note that in Eq. (3), the psychological rewards and costs related to tipping vary with λ, indicating that collection by the platform diminishes the effect of providing a tip. The sequence of actions outlined above is referred to as a game round for a single agent ai ∈ A. One round for all the agents is called a game. If λ = 1, all tips are collected and there is no incentive to offer a tip. This means that agents never enter Stage 2A, and SNS-NG/TQ is equivalent to SNS-NG/MQ with no monetary reward; this is called the SNS-norms game with quality (SNS-NG/Q). After each game, ai calculates its utility by

u_i = (1 − Mi) × Ri + Mi × (K_i^+ − K_i^−) − Ci,   (4)
where Ri is the total psychological reward, K_i^+ is the total monetary reward gained as tips, K_i^− is the monetary cost of offering tips, and Ci is the cost incurred by ai for other activities, such as live streams, comments, and
meta-comments during a game. The agent's utility is used as the fitness value to evolve the parameters of the behavioral strategy.
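To make the round concrete, a compact sketch of the probabilities and payoffs of Eqs. (2)-(4) for one streamer-viewer pair. The agent fields, dictionary keys, and function names are our own assumptions; the stage logic follows the description above.

import random

def round_for_pair(a_i, a_j, s_j, lam, rho, C, R, Q_MIN=1/8):
    """One game round between streamer a_i and viewer a_j (Eqs. 2-3).
    Agents are simple objects with fields B, L, Q, T_int, psy (total
    psychological reward), money_in/money_out (tips), and cost."""
    # Stage 1: a_i streams with probability P^0 = B_i * Q_min / Q_i (Eq. 2)
    if random.random() > a_i.B * Q_MIN / a_i.Q:
        return
    a_i.cost += C["stream"] * a_i.Q                 # c_i^0 = C^0 * Q_i
    # Viewing: P^1 = Q_i / s_j (0 if s_j == 0)
    if s_j == 0 or random.random() > a_i.Q / s_j:
        return
    a_j.psy += R["view"] * a_i.Q                    # r_i^0 = R^0 * Q_i
    # Stage 2: tip with P^2 = Q_i * T_j, where T_j = (1 - lam) * T_int,j
    tipped = random.random() < a_i.Q * (1 - lam) * a_j.T_int
    if tipped:
        a_j.money_out += rho
        a_i.money_in += rho * (1 - lam)             # platform keeps lam * rho
    # Stage 2A/2B: comment with P^3 = L_j * Q_i
    commented = random.random() < a_j.L * a_i.Q
    if commented:
        a_j.cost += C["comment"]                    # C^1
        a_i.psy += R["comment"]                     # R^1
    # Stage 3: meta-comment with P^4 = L_i * Q_i, costs/rewards per Eq. (3)
    if (tipped or commented) and random.random() < a_i.L * a_i.Q:
        if tipped and commented:                    # tip-comment
            cost, rew = (1 - lam) * C["t"] + C["c"], (1 - lam) * R["t"] + R["c"]
        elif tipped:                                # tip-only
            cost, rew = (1 - lam) * C["t"], (1 - lam) * R["t"]
        else:                                       # comment-only
            cost, rew = C["c"], R["c"]
        a_i.cost += cost
        a_j.psy += rew

def utility(a):
    """Eq. (4): u_i = (1 - M_i) R_i + M_i (K_i^+ - K_i^-) - C_i."""
    return (1 - a.M) * a.psy + a.M * (a.money_in - a.money_out) - a.cost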
3.4 Evolution of Behavioral Strategies for Individual Agents
An agent's behavioral strategy, which is specified by Bi, Li, Qi, and Tint,i, must be learned individually because its appropriate behavior depends on its social position in the CGM/SLSS network (e.g., how many followers/friends the agent has) and the strategies of its neighbor agents. Therefore, the agents' strategies are diverse and not common to all agents. To evolve diverse strategies, we adopted the multiple-world genetic algorithm (MWGA) proposed by Miura et al. [8], which is a co-evolutionary algorithm that extends the conventional GA. The MWGA has already been used for SNS-NG [8] and SNS-NG/MQ [12]. The basic feature of the MWGA is that, by having each agent try a variety of strategies, an appropriate strategy co-evolves with those of neighboring agents through diverse interaction experiences. First, the environmental network G = (A, E) is copied W times to generate W worlds (W is a positive integer). The l-th world is denoted by G^l = (A^l, E^l). Each sibling a_i^l ∈ A^l in G^l corresponding to agent ai ∈ A has a different strategy, meaning that a_i^l has its own values of Bi, Li, Qi, and Tint,i. Then, a_i^l performs an SNS-NG/TQ game with its neighbors in G^l, whose strategies differ in each world to which they belong. Therefore, the W siblings of ai have different experiences and evolve their strategies by comparing their utilities across worlds. The process of the MWGA after generating multiple worlds consists of parent selection, crossover, and mutation, which are similar to a GA except for parent selection; please refer to Miura et al. [8] for the detailed process. Each behavioral parameter Bi, Li, Qi, and Tint,i is encoded by a 3-bit gene. Note that Bi, Li, and Tint,i are interpreted as eight discrete values 0/7, 1/7, . . . , 7/7, whereas Qi takes the values 1/8 (= Qmin), 2/8, . . . , 8/8. The siblings of ai are likely to have different values for these parameters. One generation comprises Ngen (≥ 1) games. An episode consists of g (≥ 1) generations. The fitness value of each sibling a_i^l is the sum of its utilities in a generation.
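A small sketch of the gene decoding just described; the gene order within the genome is our assumption, not specified in the text.

def decode_genome(bits):
    """Decode four 3-bit genes (order assumed: B, L, T_int, Q) into
    parameter values; B, L, T_int take k/7 and Q takes (k+1)/8 so that
    Q >= Q_min = 1/8. `bits` is a length-12 sequence of 0/1."""
    genes = [4 * bits[k] + 2 * bits[k + 1] + bits[k + 2]
             for k in range(0, 12, 3)]
    B, L, T_int = genes[0] / 7, genes[1] / 7, genes[2] / 7
    Q = (genes[3] + 1) / 8
    return B, L, Q, T_int

# e.g. decode_genome([1,1,1, 0,0,0, 1,0,0, 0,1,1]) -> (1.0, 0.0, 0.5, 4/7)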
4 Experiments and Discussion

4.1 Experimental Setting
We investigated the effects of virtual tipping on agents' individual behaviors in SNS-NG/TQ, particularly in relation to T and their degrees, by comparing them with those in SNS-NG/Q, in which the tipping system is voided by setting λ = 1.0. We gradually increased the collection rate λ from 0 to 1 in steps of 0.05 to examine its impact on media conditions and users' individual behaviors, such as whether users commit to the quality of lives and how much the platform gains through collection, depending on the preference for monetary rewards and the number of followers/followees. The structure of the connections between agents is assumed to
Table 1. Parameter Values in Experiments.

Description                                      | Parameter | Value
Number of agents                                 | N = |A|   | 1000
Number of agents preferring psychological reward | |Aα|      | 500
Number of agents preferring monetary reward      | |Aβ|      | 500
Number of worlds in MWGA                         | W         | 10
Number of games in a generation                  | Ngen      | 10
Number of generations (episode length)           | g         | 1000
Probability of mutation                          | m         | 0.01
Table 2. Reference Values of Costs and Rewards in Experiments.

Reference                                   | Parameter | Value
Cost of streaming an item                   | C^0       | 1.0
Cost of a comment                           | C^1       | 0.5
Cost of a meta-comment only for a tip       | C^{2,t}   | 0.5
Cost of a meta-comment only for a comment   | C^{2,c}   | 0.5
Reward of viewing an item                   | R^0       | 3.0
Reward of a comment                         | R^1       | 5.0
Reward of a meta-comment only for a tip     | R^{2,t}   | 5.0
Reward of a meta-comment only for a comment | R^{2,c}   | 5.0
be a network generated by the connecting nearest neighbors (CoNN) model [15], which is based on the fact that friends of friends (potential edges) are more likely to become direct friends. CoNN networks satisfy the properties of scale-freeness, small-worldness, and high clustering coefficients, which are usually observed in social networks. We set the transition probability from a potential edge to a real edge to u = 0.9 to generate the CoNN networks. The remaining values used in the experiments are listed in Tables 1 and 2. All the experimental results are the average values of the 1000-th generation over 100 experimental runs. The parameters are denoted as B, L, Q, and T.
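For reference, a minimal sketch of a CoNN-style generator under our reading of [15]; the function name is ours, and the bookkeeping is simplified (see the comment).

import random
import networkx as nx

def conn_network(n, u=0.9, seed=None):
    """Sketch of the connecting-nearest-neighbors model [15]: with
    probability 1 - u, attach a new node to a random node and record
    potential edges to that node's neighbors; with probability u,
    convert one random potential edge into a real edge. (The original
    model also creates new potential edges after conversions; this is
    omitted here for brevity.)"""
    rng = random.Random(seed)
    g = nx.Graph([(0, 1)])
    potential = []
    while g.number_of_nodes() < n:
        if potential and rng.random() < u:
            v, w = potential.pop(rng.randrange(len(potential)))
            if not g.has_edge(v, w):
                g.add_edge(v, w)
        else:
            v = g.number_of_nodes()
            w = rng.choice(list(g.nodes))
            potential.extend((v, x) for x in g.neighbors(w))
            g.add_edge(v, w)
    return g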
4.2 Strategies Under Various Collection Rates
We investigate how agents' behaviors change with an increase in the collection rate λ. We also define the following four subsets of Aα and Aβ to classify the agents according to their degrees: Aα,h = {a ∈ Aα | deg(a) ≥ 50}, Aα,l = Aα \ Aα,h, Aβ,h = {a ∈ Aβ | deg(a) ≥ 50}, and Aβ,l = Aβ \ Aβ,h, where deg(a) is the degree of agent a ∈ A. Figure 2 shows the transitions of the average values of the four parameters B, L, Q, and T of the agents in Aα,h,
Fig. 2. Behavioral Strategy of Agents
Aα,l, Aβ,h, and Aβ,l when λ was increased from 0.0 to 1.0. We recall that T = (1 − λ) · Tint. First, Fig. 2a indicates that the streaming rate B varies differently for various values of λ depending on the agent's monetary preference. Although B was almost unchanged in Aα,h and Aα,l, it decreased significantly in Aβ, particularly in Aβ,l, with an increase in λ. This is because all agents received fewer tips (monetary rewards) as λ increased; thus, agents in Aβ,h and Aβ,l, who prefer monetary rewards to psychological rewards, decreased B because of the lower incentive. In particular, agents in Aβ,l have fewer chances of receiving monetary rewards through tipping owing to their low degrees, and the tips are collected by the platform. Thus, their incentives to stream items decrease considerably, resulting in a significant decrease in their streaming rates. In contrast, agents in Aα could gain the psychological rewards they prefer, and the collection rate λ did not seriously affect their streaming behaviors. Another observation is that, regardless of λ, agents with higher degrees have a higher value of B than those with lower degrees. This is because, when the agents in Aα,h and Aβ,h behave as live/video streamers, they have a higher chance of receiving psychological rewards through comments with or without tips. Therefore, they can maintain their streaming rates to some extent.
Second, Fig. 2b indicates that the collection rate λ has little effect on Q except in Aβ,l. As with the streaming rate B, agents in Aα are less interested in tipping and thus less affected by the quality of items. By contrast, the agents in Aβ,l want more monetary and psychological rewards; however, they have fewer chances of streaming, which forces them to lower their quality to reduce the cost of streaming as the collection rate increases. From the results for B and Q, agents who prefer monetary rewards are more willing to stream items, and the quality of items is largely maintained, when a tipping system with a reduced collection rate is used in the SLSS; this does not affect the behavior of agents who prefer psychological rewards. Similarly, the comment rate L was also less sensitive to the collection rate; however, agents in Aβ, who prefer monetary rewards, slightly decreased L, and those in Aα, who prefer psychological rewards, slightly increased it as λ increased (Fig. 2c). Because the value of B of the agents in Aβ gradually decreased with increasing λ, whereas that of the agents in Aα remained unchanged, agents in Aα received fewer items. To gain more psychological rewards, the agents in Aα attempted to increase their comment rates, but the increases were small. Conversely, the agents in Aβ decreased their L. In particular, the agents in Aβ,h motivated neighbors to offer as many tips as possible, because a meta-comment rewards the offer; however, an increase in the collection rate λ inhibits this motivation. We also observed that agents with lower degrees tend to have higher values of L. This is because agents with high degrees incur more costs when L is higher, owing to their many friends' comments; thus, they tend to have a lower L than agents with lower degrees. Lastly, Fig. 2d shows that the average tipping rate T of agents in Aα is higher than that of Aβ, regardless of their degrees, because agents in Aα prefer psychological rewards and are less concerned with the monetary loss due to tipping than agents in Aβ. For B, L, and Q, agents with higher degrees were more likely to receive psychological and monetary rewards, so it is better for them to stream higher-quality items. By contrast, because the act of tipping is a one-on-one interaction between the offerer and the streamer of the item, an agent offers a tip ρ to one
Fig. 3. Average Utility of Agents and Rewards Gained by Platform.
streamer and receives one meta-comment as a psychological reward. Thus, for agents with high degrees, offering tips is relatively ineffective. Of course, this is independent of the monetary preference.
4.3 Agents' Utility and Platform's Gain
We investigate the change in the average utility U with an increase in λ. Figure 3 shows that the agents in Aβ,h experience a larger decrease in U than other agents. Generally, agents with higher degrees gain larger monetary rewards from tips; therefore, the utilities of agents in Aβ,h are significantly affected by the collection rate. Finally, we examined how the total monetary gain collected by the platform varied with the collection rate λ; the results are plotted in Fig. 3b. Figs. 2a and 2b show that as λ increased, the agents in Aβ,l decreased their B and Q first, whereas the agents in Aβ,h decreased their B slowly. Because the agents in Aβ,h had more chances to receive tips owing to their higher degrees, the increase in collection rates did not significantly reduce their rewards. Thus, the platform's rewards increase steadily with λ until approximately λ = 0.4. Then, the decrease in B of agents in Aβ,h causes the platform's rewards to decrease, but a further increase in λ increases its gain, resulting in double peaks. Lastly, tipping gradually becomes inactive owing to the increase in λ for 0.6 ≤ λ ≤ 1.0. The interaction between the decrease in B and the increase in λ makes the platform's gained rewards unstable near the peaks; thus, there may be appropriate collection rates from the platform's perspective.
5 Conclusion
We analyzed the influence of virtual tipping and its collection by the platform on user behavior in CGM such as SLSS using a game-theoretic approach. We first proposed the SNS-NG/TQ model to represent the quality of live streams and a tipping system, including collection by a platform. Then, we investigated how users changed their behaviors in a simulated environment, in which agents (users) are connected under the structure of CoNN networks, by classifying agents based on their preferences and their degrees. We compared the results with those of SNS-NG/Q, which has no tipping system. We found that the tipping system improved the activities of specific agents, resulting in better utility, and had a positive impact on agents who preferred monetary rewards and had high degrees. Our results also suggest that there are appropriate collection rates for tips that maximize the platform's rewards. This finding is derived from our theoretical model and is commonly applicable to any platform that employs a tipping system. Therefore, the results based on our model suggest how real platforms should set the collection rate by appropriately setting the cost and reward parameter values, although only one type of virtual tip was examined in our experiments. However, the correctness of the model needs to be fully discussed because we did not compare the results of our experiments with empirical data from actual media.
In this study, tipping assumes an immediate exchange of monetary rewards and does not consider long-term effects. For example, subscription schemes return the money paid in the form of information or added value after a certain period of time. These are widely used in SLSS and will be our next research target.
References

1. Axelrod, R.: An evolutionary approach to norms. American Political Sci. Rev. 80(4), 1095–1111 (1986)
2. Bründl, S., Matt, C., Hess, T.: Consumer use of social live streaming services: the influence of co-experience and effectance on enjoyment. In: Proceedings of the 25th European Conference on Information Systems, pp. 1775–1791 (2017)
3. Chalakudi, S.N., Hussain, D., Bharathy, G., Kolluru, M.: Measuring social influence in online social networks - focus on human behavior analytics. In: Association of Marketing Theory and Practice Proceedings (2023)
4. Hilvert-Bruce, Z., Neill, J.T., Sjöblom, M., Hamari, J.: Social motivations of live-streaming viewer engagement on Twitch. Comput. Human Behav. 84, 58–67 (2018)
5. Hirahara, Y., Toriumi, F., Sugawara, T.: Evolution of cooperation in SNS-norms game on complex networks and real social networks. In: Aiello, L.M., McFarland, D. (eds.) SocInfo 2014. LNCS, vol. 8851, pp. 112–120. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-13734-6_8
6. Lee, Y.C., Yen, C.H., Wang, D., Fu, W.T.: Understanding how digital gifting influences social interaction on live streams. In: Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 1–10 (2019)
7. Lim, J.S., Choe, M.J., Zhang, J., Noh, G.Y.: The role of wishful identification, emotional engagement, and parasocial relationships in repeated viewing of live-streaming games: a social cognitive theory perspective. Comput. Human Behav. 108, 106327 (2020)
8. Miura, Y., Toriumi, F., Sugawara, T.: Modeling and analyzing users' behavioral strategies with co-evolutionary process. Comput. Social Netw. 8(1), 1–20 (2021)
9. Scheibe, K., Zimmer, F.: Game mechanics on social live streaming service websites. In: Proceedings of the 52nd Hawaii International Conference on System Sciences, pp. 1486–1495 (2019)
10. Sjöblom, M., Hamari, J.: Why do people watch others play video games? An empirical study on the motivations of Twitch users. Comput. Hum. Behav. 75, 985–996 (2017)
11. Toriumi, F., Yamamoto, H., Okada, I.: Why do people use social media? Agent-based simulation and population dynamics analysis of the evolution of cooperation in social media. In: 2012 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, vol. 2, pp. 43–50. IEEE (2012)
12. Ueki, S., Toriumi, F., Sugawara, T.: Effect of monetary reward on users' individual strategies using co-evolutionary learning. arXiv:2306.00492 (2023)
13. Usui, Y., Toriumi, F., Sugawara, T.: Impact of monetary rewards on users' behavior in social media. In: The International Conference on Complex Networks and Their Applications, pp. 632–643. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-93409-5_52
14. Usui, Y., Toriumi, F., Sugawara, T.: User behaviors in consumer-generated media under monetary reward schemes. J. Comput. Social Sci. 6, 1–21 (2022). https://doi.org/10.1007/s42001-022-00187-3
15. Vázquez, A.: Growing network with local rules: preferential attachment, clustering hierarchy, and degree correlations. Phys. Rev. E 67(5), 056104 (2003)
16. Xu, Y., Ye, Y., Liu, Y., et al.: Understanding virtual gifting in live streaming by the theory of planned behavior. Human Behav. Emerg. Technol. (2022)
Information Spreading in Social Media
Algorithmic Amplification of Politics and Engagement Maximization on Social Media

Paul Bouchaud1,2(B)

1 École des Hautes Études en Sciences Sociales, Paris, France
2 Institut des Systèmes Complexes Paris Île-de-France, Paris, France
[email protected]
Abstract. This study examines how engagement-maximizing recommender systems influence the visibility of Members of Parliament's tweets in timelines. Leveraging engagement predictive models and Twitter data, we evaluate various recommender systems. Our analysis reveals that prioritizing engagement decreases the ideological diversity of the audiences reached by Members of Parliament and increases the reach disparities between political groups. When evaluating the algorithmic amplification within the general population, engagement-based timelines confer greater advantages to mainstream right-wing parties compared to their left-wing counterparts. However, when considering users' individual political leanings, engagement-based timelines amplify ideologically aligned content. We stress the need for audits accounting for user characteristics when assessing the distortions introduced by personalization algorithms and advocate addressing online platform regulations by directly evaluating the metrics platforms aim to optimize, beyond the mere algorithmic implementation.

Keywords: recommender systems · algorithmic audits · social media regulation · Ideology

1 Introduction
In recent years, social media platforms have become critical arenas for political discourse, providing a platform for politicians, political organizations, and news outlets to engage with vast audiences. To optimize user engagement and advertising revenue, these platforms employ sophisticated deep-learning algorithms to curate and prioritize content [5]. The implications of such engagement-maximizing recommender systems have sparked intense scholarly debate and public scrutiny. Previous research has pointed out the potential influence of algorithmic ranking on political discourse by amplifying emotionally-charged content [12,21] and by contributing to polarization [8,13,14]. Nevertheless, limited access to large-scale data has posed challenges in fully investigating the effects of these algorithms. Independent audits of social networks have predominantly relied
on enlisting volunteers to provide their data [12,13,20] and on "sock-puppet audits" [22,23], in which one creates artificial users. While providing valuable insights, "sock-puppet" audits are hindered by the limitations in replicating human digital behavior [9] and by the number of fake accounts researchers can create. On the contrary, enlisting volunteers may be susceptible to potential selection bias [2] and may offer less experimental control, yet it does showcase real-life effects of social media algorithms. Alternatively, studies carried out by, or in collaboration with, corporations offer valuable insights, benefiting from access to non-public data and the means to conduct large-scale experiments. For instance, by leveraging proprietary user information and conducting a multi-year controlled experiment involving almost two million users, Huszár et al. [1] unveiled how Twitter's recommender unevenly amplified tweets from politicians based on their ideological leaning. Recently, through a collaboration with Meta, Guess et al. [24] found that switching users from algorithmic feeds to reverse-chronological feeds significantly reduced their platform usage, increased their exposure to political content, and increased their exposure to content from moderate friends and ideologically mixed sources on Facebook. Amid growing regulatory scrutiny, such as the Digital Services Act in the EU or the proposed Online Safety Bill in the UK, there emerges a pressing need to enhance methodologies for auditing social media platforms. This article aims to showcase the potential of empirically grounded social media simulations to both strengthen audit methodologies and assess prospective policies. Specifically, by leveraging engagement predictive models trained on behavioral data, we investigate the effect of different recommender systems on timelines, focusing on the visibility of tweets authored by Members of Parliament (MPs). Our study seeks to i) illustrate how the optimization of social media timelines for engagement leads to a reduction in ideological diversity among the audience of MPs, and ii) underscore the significance of accounting for individual user characteristics when auditing social media platforms.
2 Methods
In this section, we outline the development of engagement predictive models and their use to simulate timelines. Additionally, we introduce the amplification metrics employed to analyze the recommender systems.

2.1 Engagement Predictive Models
In order to develop predictive models for user engagement, the first step was to obtain a training dataset. Unfortunately, the extensive behavioral data needed for such training are proprietary assets of the companies and not publicly available. Although Twitter had collaborated with ACM RecSys to release extensive datasets for engagement prediction challenges [3,4], the sole focus was to enhance prediction accuracy, neglecting to address wider ethical implications; moreover, the datasets were no longer accessible for our research. Consequently, we
compiled a training dataset using the Horus data-donation browser add-on [12], installed by 1.6k volunteers. During the first quarter of 2023, we collected over 2.5 million tweet impressions on volunteers' timelines, recording 43k likes and 8k retweets. In addition to the captured tweets, we enriched our training dataset with historical and account-specific features, such as the previously liked and retweeted tweets or an account's number of followers. To assess the similarity between Twitter accounts, we leveraged a follow network of 260 million nodes and 700 million edges, built via snowballing starting from half a million seed accounts. After pruning poorly connected nodes (degree less than 50), we used the node2vec algorithm [15] to embed the follow network into low-dimensional representations while preserving neighborhood relationships. The follow network captures meaningful proximity between accounts and is notably leveraged by Twitter within its recommendation pipelines [11,29]. Building upon insights from the ACM RecSys Challenges [30], we trained gradient-boosting machines, specifically LightGBM [16], over a set of handcrafted features. During the training process, several features emerged as particularly significant for predicting likes and retweets: the time delay between tweet publication and its impression, the average word length in the tweet, and the distance in the follow space between the tweet's author and the last authors liked and retweeted by the user. For further information on the training process and the complete list of features, refer to [6]. Overall, our models achieved an average precision of 71.3% for predicting likes and 88.4% for predicting retweets, which, compared to a naive baseline, corresponds to a cross-entropy gain of 30.5% for likes and 63.5% for retweets. One should note that our aim is not to claim to beat the current state-of-the-art with new models, but to train models that adequately predict engagement for the purpose of the present demonstration. To evaluate the models' generalization ability, we leveraged the findings of Barbiero et al. [17] and computed the convex hull of the training dataset in the feature space. Remarkably, 88.9% of the testing dataset samples lay outside the training set's convex hull, requiring the models to extrapolate when making predictions. The models achieved high accuracy for testing points both inside and outside the training convex hull, demonstrating successful generalization. Additionally, we found that the samples from the inference dataset (introduced later) closely resembled the training dataset, with an average cosine similarity to the closest training sample of 0.95. In summary, our models effectively generalized from the training samples, and the samples from the simulated timelines exhibited high similarity with the training data.
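A minimal sketch of the training step described above; the feature set shown in the comments is an illustrative subset, not the exact one of [6], and the function name is ours.

import lightgbm as lgb
from sklearn.metrics import average_precision_score

def train_engagement_model(X_train, y_train, X_val, y_val):
    """Train one binary engagement predictor (one model for likes, one
    for retweets). Rows of X describe impressions with handcrafted
    features such as the publication-to-impression delay, the average
    word length, and follow-space distances to recently liked/retweeted
    authors; y marks whether the user engaged."""
    model = lgb.LGBMClassifier(objective="binary", n_estimators=500,
                               learning_rate=0.05)
    model.fit(X_train, y_train, eval_set=[(X_val, y_val)])
    p = model.predict_proba(X_val)[:, 1]
    print("average precision:", average_precision_score(y_val, p))
    return model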
2.2 Timelines Simulation
Relying solely on publicly available data to craft the model features, together with the models' ability to generalize, enables us to estimate the probability that a given user will like or retweet a given tweet, for a wide range of Twitter accounts. We opted to simulate timelines for a set U of 2690 French public Twitter accounts. These accounts were sampled so as to have the same distribution of political orientations as
the general population (see below). We collected the like and retweet history of each chosen account, along with the list of accounts it followed. To build the corpus of tweets that may appear in the simulated timelines, we collected the tweets published or retweeted by 96k accounts, among those followed by the users (covering on average 71% of users' friends), over a 30-day period from February 18, 2023, to March 19, 2023. For each user, we simulated timelines by having them log in at random hours on seven randomly chosen days and read a set of L tweets recommended by the system. Both the hour of connection and the session length L are drawn from the empirical distributions determined by our data-donation initiative (with an average session length of L = 35 tweets). To gain a deeper understanding of the mechanisms at play, we implemented a range of recommender systems producing timelines with a fraction φ of tweets that maximize engagement and a fraction 1 − φ of the most recent tweets. To build the timelines, one first aggregates all tweets published or retweeted by the user's friends (in our database) within the 18-hour period prior to the user's session. The aggregation window was chosen to balance the size of the tweet pool and the associated computational costs (see a sensitivity analysis in [6]). Using the engagement predictive models, one estimates the probability of each tweet being liked or retweeted by the user if impressed in the current timeline. Following Twitter's design [27], one then sums the two probabilities to obtain an engagement score. The user is then served a timeline made of the φL tweets with the highest engagement score and of the (1 − φ)L most recent tweets. Furthermore, we incorporated a heuristic to guarantee that each account would only appear once within the timeline. This heuristic is stronger than the one employed by Twitter [29] and aims to bolster the robustness of our conclusions by counteracting possible trivial distorting effects. One should note that this article does not address feedback loops (successive timelines are treated independently) nor second-order effects (arising, for example, from messages retweeted by users' friends, messages that are themselves the result of algorithmic curation). Moreover, while we explore a range of recommenders spanning from reverse-chronological feeds to those maximizing engagement, we do not view the former as "neutral": chronological feeds are recency-based rankings that primarily reward individuals who post frequently. We chose such rankings as a baseline due to their simplicity and widespread adoption.
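For concreteness, a sketch of the timeline assembly just described; the tweet attributes (author, created_at, id) and function names are our assumptions.

def build_timeline(pool, p_like, p_retweet, L, phi):
    """Assemble one session of L tweets from the candidate pool (tweets
    published/retweeted by friends in the 18 h before login): phi*L
    tweets ranked by predicted engagement (sum of the like and retweet
    probabilities, following Twitter's design [27]) plus the most
    recent ones, with each author appearing at most once."""
    def take(ordered, k, seen):
        picked = []
        for t in ordered:
            if t.author not in seen:
                seen.add(t.author)
                picked.append(t)
                if len(picked) == k:
                    break
        return picked

    seen = set()
    by_engagement = sorted(pool, key=lambda t: p_like[t.id] + p_retweet[t.id],
                           reverse=True)
    feed = take(by_engagement, round(phi * L), seen)
    by_recency = sorted(pool, key=lambda t: t.created_at, reverse=True)
    feed += take(by_recency, L - len(feed), seen)
    return feed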
2.3 Metrics
Relative Amplification. In this study, we focus on the algorithmic amplification of tweets published by elected officials who are members of the French National Assembly. Adopting the approach of Huszár et al. [1], we define the algorithmic reach amplification a_R(T, U) of a set T of tweets within an audience U as the ratio between the number of unique users in U who saw at least one tweet from T in their engagement-maximizing timelines and the number of unique users in U who saw at least one tweet from T in their reverse-chronological timelines, normalized such that an amplification ratio of 0% corresponds to an equal reach in engagement-based and reverse-chronological timelines:

a_R(T, U, d) = [Σ_{u∈U} I(feed_eng(u, d) ∩ T ≠ ∅)] / [Σ_{u∈U} I(feed_chrono(u, d) ∩ T ≠ ∅)] − 1,

with feed_eng(u, d) and feed_chrono(u, d) the sets of tweets figuring in the timelines, either engagement-based or reverse-chronological, of user u on day d. By focusing solely on tweets' reach, the measure a_R(T, U) used by [1] may overlook distortions in terms of impressions, i.e., how many times a tweet has been displayed on users' timelines, particularly given the significant differences in political group sizes (see Table 1). To address this limitation, we introduce the impression amplification a_I(T, U) as the ratio between the number of impressions of tweets from the set T in the engagement-based timelines of users in U and the number of impressions of tweets from the set T in the reverse-chronological timelines of users in U:

a_I(T, U, d) = [Σ_{u∈U} |feed_eng(u, d) ∩ T|] / [Σ_{u∈U} |feed_chrono(u, d) ∩ T|] − 1.

As in [1], we refer to the amplification of the set T of all tweets published by members of a given political party as the group amplification. Higher amplifications indicate that the engagement predictive models assign higher relevance scores to the set of tweets, leading them to appear more frequently than they would in reverse-chronological timelines. We emphasize that both measures of amplification are relative to our choice of the reverse-chronological timelines as a baseline. To ensure the robustness of our analysis, we performed bootstrapping over the users and days of impressions for all the amplification measures, subsequently reported with 95% bootstrap confidence intervals. To explore more thoroughly the consequences of engagement maximization, we form subgroups Ũ ⊂ U made of users with a particular political leaning.

Table 1. Political group membership at the French National Assembly (XVIth legislature), from left to right. The abbreviation corresponds to the main political party composing the group at the National Assembly. "LFI" stands for "La France Insoumise" (far-left) and "LIOT" for "Libertés, indépendants, outre-mer & territoires" (catch-all group). We excluded from our analysis 5 non-attached MPs.
Group        | Communist | LFI | Ecologist | Socialist | LIOT | Democrat | Renaissance | Horizons | Republican | National Rally
Abbreviation | PCF       | LFI | EELV      | PS        | LIOT | MODEM    | RE          | HOR      | LR         | RN
Members      | 22        | 75  | 23        | 27        | 21   | 51       | 162         | 26       | 59         | 88
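A small sketch of the two amplification measures defined above, for one day d; the data layout (a per-user mapping of displayed tweets) and function name are our assumptions, and the bootstrap step is omitted.

def amplifications(feeds_eng, feeds_chrono, T):
    """Reach and impression amplification of a tweet set T for one day;
    feeds_* map each user to the list of tweets shown in that timeline.
    0.0 means equal visibility under both rankings."""
    reach = lambda feeds: sum(1 for shown in feeds.values() if set(shown) & T)
    impressions = lambda feeds: sum(len(set(shown) & T)
                                    for shown in feeds.values())
    a_R = reach(feeds_eng) / reach(feeds_chrono) - 1
    a_I = impressions(feeds_eng) / impressions(feeds_chrono) - 1
    return a_R, a_I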
Political Leaning. To determine the political leanings of Twitter accounts, we leveraged the retweet graph associated with all tweets from French political figures or containing French political keywords published in 2022. Such graphs, collected within the Politoscope project, have been shown to be reliable indicators of ideological alignment [19]. While a clustering analysis of the retweet graph does provide a categorical political leaning for each user, we sought a more granular analysis and adopted the methodology used in [6,12] to obtain a continuous opinion scale. Using the node2vec algorithm [15], we generated embeddings for each Twitter account within the retweet graph (filtering out users with less than 5 interactions
Fig. 1. A) The graph of retweets associated with political tweets published in 2022, spatialized using ForceAtlas2 [25]. Nodes are colored based on the assigned numerical opinion. B) The distribution of assigned political leanings for Members of Parliament belonging to the far-left (LFI), left (PS), centrist (RE), right (LR), and far-right (RN) groups are presented. The distributions of opinion in the general population, i.e. all Twitter accounts in the political retweet graph, is displayed in black. C) The average position on the opinion scale of MPs in each political group is compared to the estimation made by political experts within the 2019 Chapel Hill Expert Survey [7]; error bars represent bootstrap standard errors on the average position.
We anchored the positions, on our opinion scale, of the far-left leader, Jean-Luc Mélenchon, and the far-right leader, Marine Le Pen, to −0.75 and +0.75, respectively. To determine the opinion of the centrist leader, Emmanuel Macron, we averaged the opinions of the two opposite leaders, weighted by the angular similarities between their embeddings and Emmanuel Macron's, resulting in an opinion of −0.02. For each Twitter account, we then computed the angular similarity between its embedding and the embeddings of the three leaders. The account's political leaning was subsequently determined as the average opinion of the two closest leaders, weighted by their angular similarities. To account for the circular nature of the French political landscape, where anti-system activists bridge far-left and far-right militants [10] (as shown in Fig. 1.A), we implemented a periodic boundary condition at ±1 in our numerical opinion estimates. This approach ensures interpretability, with negative values indicating left-leaning accounts, positive values indicating right-leaning accounts, and supporters of the current French President Emmanuel Macron being around zero. The numerical scale aligns with the political group of Members of Parliament (see Fig. 1.B), with the assessment of political experts (2019 Chapel Hill Expert Survey [7], see Fig. 1.C), and with a clustering analysis [19]. A visual representation of the political landscape is shown in
Fig. 1, where nodes are color-coded based on their assigned numerical opinion, providing a clear interpretation of the political landscape.

Audience Diversity. Having estimated the political leaning of each user, we examine the ideological composition of Members of Parliament's audiences across various recommender systems, i.e. different values of φ. Specifically, we compute the average absolute difference of opinion, taking into account the periodicity at ±1, between a Member of Parliament and the users who have encountered their tweets, weighted by the number of impressions.
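The opinion assignment and the periodic distance can be sketched as follows, assuming node2vec embeddings as above; the leader keys, helper names, and the use of the standard angular similarity are illustrative assumptions.

```python
import numpy as np

# Anchored opinions of the three leaders, as stated in the text.
LEADER_OPINION = {"melenchon": -0.75, "lepen": 0.75, "macron": -0.02}

def angular_similarity(u, v):
    """Standard angular similarity: 1 - angle(u, v) / pi."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return 1.0 - np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

def account_opinion(emb, leader_embs):
    """Opinion = similarity-weighted mean of the two closest leaders."""
    sims = {name: angular_similarity(emb, e) for name, e in leader_embs.items()}
    top2 = sorted(sims, key=sims.get, reverse=True)[:2]
    weights = np.array([sims[n] for n in top2])
    opinions = np.array([LEADER_OPINION[n] for n in top2])
    return float(np.average(opinions, weights=weights))

def opinion_distance(a, b):
    """Absolute difference of opinion with periodic boundary at +/-1."""
    d = abs(a - b)
    return min(d, 2.0 - d)
```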
3 Results

3.1 Relative Amplification
We display in Fig. 2 the reach and impression amplifications, within the general population, of tweets published by members of each political group. Tweets from the governing centrist party Renaissance appear $a_I(T_{RE}, U) = 128.4\%_{[123.3,134.6]}$ more often in the engagement-based timelines than in the recency-based ones; tweets from the Republican party are amplified by $a_I(T_{LR}, U) = 29.8\%_{[27.1,33.1]}$, while those of Socialist MPs by $a_I(T_{PS}, U) = 3.3\%_{[0.1,6.4]}$. By performing a permutation test over the MPs of the Socialist and Republican groups, one concludes, with statistical significance p < .05, that the mainstream right-wing party benefits more from engagement-based timelines than the mainstream left-wing party, under both the reach and impression amplification metrics. To gain a deeper understanding of the underlying mechanisms, we segment users by their political leaning and investigate the reach and impression amplification of MPs' tweets in the timelines of left-leaning users (Fig. 2.B), center-leaning users (Fig. 2.C), and right-leaning users (Fig. 2.D). Across all ideological subgroups, we observed that tweets from MPs who align with the users' political views were amplified. This led to their tweets reaching a larger set of users (reach amplification) and appearing more frequently (impression amplification) in the engagement-based timelines compared to the recency-based timelines. Conversely, tweets from MPs with opposing political leanings were algorithmically reduced in visibility. For instance, tweets from ecologist MPs appeared, at least once, in the timelines of $a_R(T_{ecologist}, U^{right}) = -84.2\%_{[-88.5,-79.4]}$ fewer right-leaning users in engagement-based timelines than in reverse-chronological ones. By conducting permutation tests over MPs either ideologically aligned or unaligned with the users, we conclude with statistical significance that ideologically aligned MPs benefit more from engagement-based ranking than dissonant MPs. In all examined instances, the reach amplification was found to be smaller than the impression amplification, and even overly conservative when considering MPs of the same leaning as the users. For example, when considering left-leaning users, the number of users encountering at least one tweet published by LFI MPs was equivalent for both engagement-based and recency-based timelines, $a_R(T_{LFI}, U^{left}) = -1.98\%_{[-3.24,-0.58]}$.
Fig. 2. Reach (black circles) and Impression (orange squares) group amplifications, computed by considering the general population (A), left-leaning users (B), center-leaning users (C), and right-leaning users (D). Error bars correspond to 95% bootstrap confidence intervals. The political group that aligns with the users' ideology is shaded in grey for clarity.
However, the impression amplification showed a substantial increase, reaching $a_I(T_{LFI}, U^{left}) = 58.6\%_{[55.0,61.9]}$. This means that left-leaning users saw over half again as many tweets authored by LFI MPs in their engagement-based timelines compared to what they would have seen in recency-based timelines. Similar patterns were observed for center-leaning users with the major centrist party Renaissance (as shown in Fig. 2.C) and for right-leaning users with the right-leaning Republican group (as shown in Fig. 2.D).
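The permutation tests over MPs reported above admit a direct implementation; the following is a minimal sketch assuming one amplification value per MP, with illustrative names (the exact test statistic used in the paper is not specified).

```python
import numpy as np

def permutation_test(amp_a, amp_b, n_perm=10_000, rng=None):
    """One-sided permutation test for mean(amp_b) > mean(amp_a), where
    amp_a and amp_b are arrays of MP-level amplification values for the
    two groups; group labels are shuffled across the pooled MPs."""
    rng = rng or np.random.default_rng(0)
    pooled = np.concatenate([amp_a, amp_b])
    observed = amp_b.mean() - amp_a.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[len(amp_a):].mean() - perm[:len(amp_a)].mean()
        count += diff >= observed
    # p-value with the usual +1 continuity correction
    return (count + 1) / (n_perm + 1)
```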
3.2 Audience Diversity
To gain insights into the audience reached by MPs' tweets, we present in Fig. 3.A the average absolute difference of opinion between MPs and the users who encountered their tweets, as a function of the fraction of engagement-maximizing tweets in the timelines, φ. Firstly, substantial differences in the ideological diversity of MPs' audiences already exist in reverse-chronological timelines, φ = 0. Tweets from MPs belonging to the centrist party (opinion around 0) and the far-right party (+0.75) are primarily seen by ideologically aligned users (see Fig. 3.C), with average absolute differences of opinion of 0.183 and 0.182. Conversely, tweets from the mainstream left and right parties (±0.45) are displayed on the timelines of a more diverse set of users (see Fig. 3.B, D), with average absolute differences of 0.300 and 0.355. Secondly, as the timelines transition from recent
to engagement-maximizing tweets, there is a systematic decrease in the opinion diversity of the audiences reached by MPs, and an increase in the disparities between political groups. Compared to reverse-chronological timelines, at φ = 1 the ideological diversity of the audiences reached by centrist and far-right MPs' tweets decreased by 42.9% and 41.7%, while that of Socialist and Republican MPs decreased by 30.5% and 34.3%.
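One plausible reading of the φ parameter, used here only as an illustration since the paper does not spell out the mixing rule, is that each simulated timeline combines engagement-ranked and recency-ranked candidates; the tweet objects and their attributes below are hypothetical.

```python
def mixed_timeline(candidates, engagement_score, length, phi):
    """Compose a timeline of `length` slots where a fraction `phi` of
    slots is filled by engagement-ranked tweets and the rest by recency.
    `candidates` are objects assumed to expose .id and .created_at."""
    n_eng = round(phi * length)
    by_eng = sorted(candidates, key=engagement_score, reverse=True)
    by_time = sorted(candidates, key=lambda t: t.created_at, reverse=True)
    feed, seen = [], set()
    for tweet in by_eng[:n_eng] + by_time:  # engagement slots first, then recency
        if tweet.id not in seen:
            feed.append(tweet)
            seen.add(tweet.id)
        if len(feed) == length:
            break
    return feed
```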
Fig. 3. A) Average absolute difference of opinion between Members of Parliament (MPs) and their respective audiences. The distributions of opinion among the audience of socialist (B), centrist (C), and republican (D) MPs are shown as a function of φ, representing the fraction of tweets maximizing engagement in their timelines.
4 Discussion
In this study, we employed engagement predictive models trained on behavioral data to construct the timelines that users would have been exposed to under different content recommender policies. Following the work of Huszár et al. [1], we focused on how tweets from Members of Parliament are presented on users' timelines. Like [1], we find that, when aggregating over the whole population, engagement-based timelines benefit mainstream right-wing parties more than left-wing parties. However, upon segmenting users based on their political leaning, we observed that engagement-based timelines merely favor tweets from
authors ideologically aligned with the users, as discussed in [6,12]. These findings underscore the significance of taking individual user characteristics, here the political leaning, into account during the audit process. The research and regulatory implications of an audit revealing, through aggregation over the whole population, a platform-wide preference towards right-wing political content diverge from those of a study which, by considering users' political leaning, uncovers an algorithmic confirmation bias, where ideologically aligned content is amplified over discordant content. We also showcase how crucial the definition of algorithmic amplification is: focusing on the reach of tweets may overlook an algorithmic confirmation bias that is revealed by inspecting the number of impressions. Finally, exploring different recommender systems, ranging from reverse-chronological to engagement-maximizing, we uncovered that as recommender systems prioritize engagement, the ideological diversity of the audience reached by different MPs decreases, while the disparities between political groups' reach increase.

It is important to note that our models are not trained to replicate Twitter's recommender systems but rather to study the consequences of maximizing engagement. As a result, our simulations present noticeable differences with the findings of [1]. In our simulations, the reach amplification within the general population ranges from −22.0% to 88.2%. In contrast, [1] reported amplification consistently above 100%, reaching 153.1% for the Republican group. This disparity may be attributed to the convoluted interplay of factors within Twitter's recommender systems, which blend deep-learning models with hand-crafted heuristics. For example, Twitter assigns a reputation score to accounts [28] through a variant of the PageRank algorithm considering the number of followers and the age of the account [26]; owing to such heuristics, one may hypothesize that Members of Parliament experience distinct treatment. Additionally, our timelines were exclusively composed of tweets published or retweeted by users' friends, without any injected content, unlike on Twitter. Furthermore, we acknowledge changes in the French political landscape between [1] and our study, such as the entry of a large far-right group into the assembly and shifts in alliances among other groups. Despite these differences, the primary goals of our work remain twofold: i) to illustrate how optimizing for engagement unevenly amplifies MPs' tweets based on their ideology and leads to a reduction in ideological diversity among MPs' audiences; and ii) to underscore the importance of considering individual user characteristics when analyzing socio-technical systems like social media, as it significantly impacts the conclusions of audits. We therefore advocate for fine-grained audits to thoroughly assess the distortions introduced by personalization algorithms. Furthermore, in conjunction with comprehensive audits of the algorithms implemented by the platforms, we advocate addressing the regulation of online services through an assessment of the metrics that platforms optimize, such as engagement.

Acknowledgments. The author deeply thanks Pedro Ramaciotti Morales for his precious insights and Mazyiar Panahi for enabling the collection of the large-scale retweet
network. Finally, the author acknowledges the Jean-Pierre Aguilar fellowship from the CFM Foundation for Research, and the support and resources provided by the Complex Systems Institute of Paris Île-de-France.
References

1. Huszár, F., Ktena, S.I., O'Brien, C., Belli, L., Schlaikjer, A., Hardt, M.: Algorithmic amplification of politics on Twitter. Proc. Natl. Acad. Sci. U.S.A. 119(1), e2025334119 (2022). https://doi.org/10.1073/pnas.2025334119
2. Kmetty, Z., et al.: Determinants of willingness to donate data from social media platforms. Center for Open Science (2023). https://doi.org/10.31219/osf.io/ncwkt
3. Belli, L., et al.: Privacy-aware recommender systems challenge on Twitter's home timeline (2020)
4. Belli, L., et al.: The 2021 RecSys Challenge dataset: fairness is not optional. In: RecSysChallenge '21: Proceedings of the Recommender Systems Challenge 2021 (2021). https://doi.org/10.1145/3487572.3487573
5. Satuluri, V., et al.: SimClusters: community-based representations for heterogeneous recommendations at Twitter. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2020). https://doi.org/10.1145/3394486.3403370
6. Bouchaud, P.: Skewed perspectives: examining the influence of engagement maximization on content diversity in social media feeds. Preprint (2023). https://hal.science/hal-04139494
7. Jolly, S., et al.: Chapel Hill expert survey trend file, 1999–2019. Electoral Stud. 75, 102420 (2022). https://doi.org/10.1016/j.electstud.2021.102420
8. Rathje, S., Bavel, J., Linden, S.: Out-group animosity drives engagement on social media. Proc. Natl. Acad. Sci. U.S.A. 118 (2021). https://doi.org/10.1073/pnas.2024292118
9. Ribeiro, M., Veselovsky, V., West, R.: The amplification paradox in recommender systems (2023)
10. Chavalarias, D., Bouchaud, P., Panahi, M.: Can a single line of code change society? The systemic risks of optimizing engagement in recommender systems on global information flow, opinion dynamics and social structures. J. Artif. Soc. Soc. Simul. 27(1), 9 (2024). https://doi.org/10.18564/jasss.5203
11. Rossi, W., Polderman, J., Frasca, P.: The closed loop between opinion formation and personalized recommendations. IEEE Trans. Control Netw. Syst. 9, 1092–1103 (2022). https://doi.org/10.1109/tcns.2021.3105616
12. Bouchaud, P., Chavalarias, D., Panahi, M.: Crowdsourced audit of Twitter's recommender systems. Sci. Rep. 13, 16815 (2023). https://doi.org/10.1038/s41598-023-43980-4
13. Milli, S., Carroll, M., Pandey, S., Wang, Y., Dragan, A.: Twitter's algorithm: amplifying anger, animosity, and affective polarization (2023)
14. Bavel, J., Rathje, S., Harris, E., Robertson, C., Sternisko, A.: How social media shapes polarization. Trends Cogn. Sci. 25, 913–916 (2021). https://doi.org/10.1016/j.tics.2021.07.013
15. Grover, A., Leskovec, J.: node2vec: scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016). https://doi.org/10.1145/2939672.2939754
16. Ke, G., et al.: LightGBM: a highly efficient gradient boosting decision tree. In: Advances in Neural Information Processing Systems 30 (NIPS 2017) (2017)
17. Barbiero, P., Squillero, G., Tonda, A.: Modeling generalization in machine learning: a methodological and computational study (2020)
18. Milli, S., Pierson, E., Garg, N.: Choosing the right weights: balancing value, strategy, and noise in recommender systems (2023)
19. Gaumont, N., Panahi, M., Chavalarias, D.: Reconstruction of the socio-semantic dynamics of political activist Twitter networks - method and application to the 2017 French presidential election. PLoS ONE 13, e0201879 (2018). https://doi.org/10.1371/journal.pone.0201879
20. Hargreaves, E., Agosti, C., Menasche, D., Neglia, G., Reiffers-Masson, A., Altman, E.: Biases in the Facebook news feed: a case study on the Italian elections. In: 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) (2018). https://doi.org/10.1109/asonam.2018.8508659
21. Brady, W., Wills, J., Jost, J., Tucker, J., Bavel, J.: Emotion shapes the diffusion of moralized content in social networks. Proc. Natl. Acad. Sci. U.S.A. 114, 7313–7318 (2017). https://doi.org/10.1073/pnas.1618923114
22. Bartley, N., Abeliuk, A., Ferrara, E., Lerman, K.: Auditing algorithmic bias on Twitter. In: 13th ACM Web Science Conference 2021 (2021). https://doi.org/10.1145/3447535.3462491
23. Bandy, J., Diakopoulos, N.: More accounts, fewer links. Proc. ACM Hum.-Comput. Interact. 5, 1–28 (2021). https://doi.org/10.1145/3449152
24. Guess, A., et al.: How do social media feed algorithms affect attitudes and behavior in an election campaign? Science 381, 398–404 (2023). https://doi.org/10.1126/science.abp9364
25. Jacomy, M., Venturini, T., Heymann, S., Bastian, M.: ForceAtlas2, a continuous graph layout algorithm for handy network visualization designed for the Gephi software. PLoS ONE 9, e98679 (2014). https://doi.org/10.1371/journal.pone.0098679
26. Twitter: TweepCred. GitHub. https://github.com/twitter/the-algorithm/blob/main/src/scala/com/twitter/graph/batch/job/tweepcred
27. Twitter: Source code for Twitter's recommendation algorithm: Heavy Ranker. GitHub. https://github.com/twitter/the-algorithm-ml/blob/main/projects/home/recap
28. Twitter: twitter/the-algorithm: source code for Twitter's recommendation algorithm. GitHub. https://github.com/twitter/the-algorithm
29. Twitter: Twitter's recommendation algorithm. https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm
30. Twitter: What Twitter learned from the RecSys 2020 challenge. https://blog.twitter.com/engineering/en_us/topics/insights/2020/what_twitter_learned_from_recsys2020
Interpretable Cross-Platform Coordination Detection on Social Networks

Auriant Emeric1(B) and Chomel Victor2

1 École Polytechnique, Palaiseau, France
[email protected]
2 Institut des Systèmes Complexes Paris Île-de-France (ISCPIF) - CNRS, Paris, France
[email protected]
Abstract. Numerous disinformation campaigns operate on social networks to influence public opinion. Detecting these campaigns primarily involves identifying coordinated communities. As disinformation campaigns can take place on several social networks at the same time, the detection must be cross-platform to get a proper picture of them. To encode the different types of coordination, a multi-layer network is built. We propose a scalable coordination detection algorithm, adapted from the Louvain algorithm and the Iterative Probabilistic Voting Consensus algorithm. This algorithm is applied to the previously built multi-layer network. Users from different social networks are then embedded in a common space to link communities with similar interests. This paper introduces an interpretable and modular framework, used on three datasets to prove its effectiveness for coordination detection and to illustrate it with real examples.

Keywords: Coordination detection · User Alignment · Social Networks
1 Introduction
Although disinformation campaigns have always existed, they are now ubiquitous on online social networks (OSN). Disinformation is understood here as purposefully spreading misleading or inaccurate information in order to influence public opinion. These campaigns use various techniques such as creating deepfakes, fake news, or astroturfing. Some examples include the 2016 US presidential election [8], the COVID-19 pandemic [25], or the recent Spamouflage Operation [3]. Among disinformation tactics, the focus here is on Coordinated Inauthentic Behaviours (CIB) as defined by Meta [1]. CIB is based on using multiple social media accounts to mislead the online debate [25]. These campaigns are seen as a threat to freedom of speech, and the fight against them is intensifying. OSNs, such as Facebook, have been studying how to prevent them since at least 2018 and are regularly
deleting troublesome accounts. Between 2017 and 2021, Facebook identified 17 billion fake accounts [23]. Similarly, independent and state-run organisations, such as the EU Disinfo Lab, are fighting disinformation by detecting these campaigns and alerting the relevant authorities [2]. However, CIB detection is not an easy task. Users taking part in such campaigns may be spread across different OSNs [17,36], which complicates the detection of the whole operation. In addition, coordination and inauthenticity are very different concepts. Coordination can be defined as the organisation, intentional or not, of actors to achieve common goals. On the other hand, a user is considered inauthentic if their behaviour falls outside the norms observed on the network: for instance, troll farms or bots. Consequently, these notions can be unrelated: activists' accounts on OSNs can spread disinformation on the same subject without being coordinated, while a group of users can have a coordinated but authentic behaviour, as coordination arises naturally from shared interests or opinions. The issue of authenticity is not directly discussed here, as the focus is on coordination detection, which is a mandatory first step in the process of CIB detection. When needed, inauthenticity is assessed manually, but this question has already been addressed in similar cases [29,34].

Contributions. In this article, a technical framework is proposed to detect coordinated communities spanning multiple OSNs. First, a multi-layer network adapted to coordination detection is built from the collected interactions between users. This network encodes the various behaviours and types of coordination through graphs of interactions with various time thresholds. Then, a scalable community detection algorithm, adapted from a consensus clustering algorithm, is applied to this multi-layer network to get communities. Finally, similar users from different OSNs are found by embedding them in a shared latent space without using prior knowledge of identical users on several OSNs. These users can then be used to link communities with shared interests.
2 Related Work
The detection of CIB is an understudied topic from an academic point of view, but it may overlap with bot detection [18] or fake-news detection [37], including multimodal and cross-platform methods [22]. Generative models can be used to produce embeddings for coordination detection [33], but these are not easily interpretable. A common way of performing such detection while retaining interpretability is to look at content propagation and to study graphs of interactions. In this context, an OSN can be represented as a multi-layered network [28], a widely used object [11]. Each action, such as a mention or a quote, is considered separately to build interaction or co-interaction graphs [28,34]. In interaction graphs, the edges represent the actions and are directed from the user to the object of the action. In co-interaction graphs, the nodes represent the users and two nodes are linked if their users performed the same action. The temporal dimension of the co-interaction can be encoded in the edges of the graph. To
this end, time thresholds are introduced. Two nodes are neighbours if the users did the same action within a limited time period [34]. Various methods have been studied to detect communities on a multi-layer graph. The simplest would be using a single layer, such as the co-retweet layer [14], collapsing the multi-layer graph to a simple graph by summing the weights of the edges [35], or creating a similarity graph using all the layers [4,15]. Classical community detection algorithms such as the Louvain algorithm [7] can then be used to detect communities [24]. These methods are straightforward but involve a loss of information. To avoid this, community detection algorithms designed for multi-layer graphs can also be used, but they are often suited only to small graphs [16]. Features, such as the mean degree or the main K-Core number for each layer, can be extracted for each community. Embeddings from deep learning models could also be used to replace or enrich feature vectors. These vectors are finally used to train a classifier to detect coordination [29,34]. Coordinated campaigns are not limited to a single platform [17,36]. To the best of our knowledge, cross-platform coordination detection methods exist, but their results cannot be easily interpreted [38]. In order to go further and perform interpretable multi-platform community detection, the various multi-layer networks of each platform need to be aligned. To do so, a cross-platform network user identification is performed, which means finding users with the same identity on different social networks. Digital footprints such as username, description, profile picture, or even location can be used to find some correspondences [21]. Another method is to perform graph alignment by using pre-aligned users across the networks, called anchors, and deep learning methods [13,19,20]. These aligned users can then be used to find similar communities from different social networks.
3 Dataset
Community Detection Benchmark. A first dataset, called politic-ie [15], was used to evaluate the multi-layer community detection algorithm. This dataset is a 9-layer network created from 267K tweets by 348 Irish politicians and political organisations, split into 7 communities according to their political affiliation.

Cross-platform Dataset. Two datasets from different OSNs are used to study community alignment. The first is the Pushshift Telegram dataset [5]. This dataset consists of 317M messages sent on 27,801 channels by 2.2M users between 2015 and 2019. The second is a Twitter dataset [12], composed of 88M tweets from 150K users. This dataset was used as it is one of the largest directly available, given the recent restrictions on Twitter dataset availability. Both datasets contain messages on a wide range of topics. In order to have comparable data volumes, only messages sent in January 2019 are used. In addition, on Telegram, messages from chats are ignored; only messages from public channels are kept. Messages without text and links are removed; these messages often contained photos or videos that were not included in the original dataset. After this filtering step, the Twitter dataset contains 421K messages from 32K users and the Telegram dataset contains 624K messages from 9K users.
Fig. 1. Overview of the proposed framework in the case of two OSNs
Ukraine War Dataset. To present an application of the method, the algorithms were tested on another dataset related to an ongoing event. This dataset is composed of about 1M messages from three different OSNs: Twitter (440K Tweets), Telegram (424K messages), and VK (191K). This dataset is built from messages containing a list of keywords related to the Ukraine War posted in May and June 2023 and is composed mainly of messages in English and Russian.
4 Method
The method consists of two independent algorithms applied successively and explained in the following sections. The results of both algorithms are finally used to link communities with shared interests.
4.1 Multi-layer Network Community Detection
Multi-layer Network. Through OSNs, users can interact with each other by performing a variety of actions such as posting messages, hashtags, or even links. Here, an OSN is represented as a multi-layer network. Each layer is an undirected graph corresponding to an interaction and a time threshold δ. Two nodes are linked if they did the same action within δ seconds of each other. The studied interactions are the following: co-message, co-share (for instance, co-retweet on Twitter), co-hashtag, co-URL, and co-domain. Two layers, the co-message and co-domain layers, are special. The co-message layer is based on an embedding of the message obtained with the Sentence-BERT model [30]. In this layer, two nodes are linked if their embeddings have a very high cosine similarity value. This layer is used to detect extremely similar content, such as copy-and-paste with few modifications. The co-domain layer uses the domain name of the URLs instead of the full URLs and can therefore detect users following similar media. The weight of the edges is the natural logarithm of the number of co-interactions within the time threshold between the two accounts. This reweighting avoids extremely heavy edges that could interfere with the general clustering. The layers are filtered to remove natural interactions between users. First, co-interactions performed too often are ignored: retweeting a mainstream article does not provide useful information about coordination, as it is widely shared.
Then, the lowest weighted edges are removed from each layer to suppress weak or spurious connections. To avoid using an arbitrary weight threshold, a fixed percentage of edges is removed from each layer. Finally, nodes of degree zero are removed from each layer. As a result, many nodes are not present on all layers.

Community Detection. The final clustering is performed on all the nodes of the multi-layer network, i.e. the union of the nodes of all layers. Given the large number of nodes, the community detection algorithm needs to be fast and memory efficient, so each layer is first processed independently using the Louvain algorithm [7]. For each layer l, it returns a partition function πl such that, for a node x, πl(x) = k means that, in layer l, the node x is in cluster k. Then, an Iterative Probabilistic Voting Consensus (IPVC) algorithm is used [26]. This algorithm aims at finding a clustering, called the consensus clustering, that minimises the average probability that two nodes do not belong to the same community in a layer if they are from the same community in the consensus clustering (see Eq. (1)):

$$\pi^*(x) = \operatorname*{arg\,min}_{i \in [\![1:m]\!]} \sum_{l=1}^{L} P\left(\pi_l(x) \neq \pi_l(x') \mid x' \in X,\ \pi^*(x') = i\right) \quad (1)$$
with π* the consensus clustering, m the number of clusters, L the number of layers, and X the set of nodes. This method aims at retaining as much information from each layer as possible while being adapted to a large amount of data. As layers do not have the same importance for detecting coordination, the arithmetic mean of the probabilities over the layers can be weighted. The weights are selected as the values of the first singular vector of the distance matrix between the clusterings of each layer. A constant based on the minimum value is finally added to ensure that all weights are strictly positive. The distance used is the Network Partition Distance [9], which corresponds to the fraction of nodes that best-matching communities in two layers do not have in common. This matrix shows which layers provide different coordination information. Its first singular vector sums up the relations between the layers. A layer containing information different from the others will have a higher weight and thus a greater impact.
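A simplified sketch of the weighted IPVC step could look as follows, assuming layer clusterings are stored as an (L, N) label matrix with −1 marking nodes absent from a layer; the seeding and stopping choices are ours, not the paper's.

```python
import numpy as np

def ipvc(layer_labels, weights, n_iter=20):
    """Iterative Probabilistic Voting Consensus over L layer clusterings.

    layer_labels: (L, N) int array; layer_labels[l, x] is the cluster of
    node x in layer l (-1 if x is absent from layer l).
    weights: per-layer importance (first singular vector of the
    partition-distance matrix in the paper)."""
    L, N = layer_labels.shape
    # Seed with the heaviest layer; assumes that layer covers every node.
    consensus = layer_labels[np.argmax(weights)].copy()
    for _ in range(n_iter):
        changed = False
        for x in range(N):
            costs = {}
            for i in np.unique(consensus):
                members = np.flatnonzero((consensus == i) & (np.arange(N) != x))
                if len(members) == 0:
                    continue
                # Weighted mean probability of disagreeing with cluster i,
                # estimated over the cluster's members present in each layer.
                cost = 0.0
                for l in range(L):
                    valid = members[layer_labels[l, members] >= 0]
                    if layer_labels[l, x] < 0 or len(valid) == 0:
                        continue
                    cost += weights[l] * np.mean(
                        layer_labels[l, valid] != layer_labels[l, x]
                    )
                costs[i] = cost
            best = min(costs, key=costs.get)
            if best != consensus[x]:
                consensus[x] = best
                changed = True
        if not changed:  # converged: no reassignment in a full pass
            break
    return consensus
```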
4.2 Cross-Platform Community Alignment
The second step is to match up similar communities. Here, communities are considered similar if they have similar users. Similarity Graph. The method used is inspired by the Sim-Clusters algorithm [31]. For each network, a bipartite graph composed of nodes representing accounts and domain names is created. The weight of an edge between an account and a domain name corresponds to the number of URLs containing the domain name posted by the account. Nodes representing domains that are too common or that do not carry information, for example, URL shorteners or OSNs (such as twitter.com), are removed from the graph. Finally, the edges are normalised so that each node representing a user is of degree one.
Then, the domain similarity graph is computed. This graph is a weighted undirected graph whose nodes are domain names. The edges are weighted according to the cosine similarity between the columns of the adjacency matrix of the bipartite graph. Edges with weights below a threshold are discarded to select meaningful links and remove noise and spurious connections.

Common Domain Latent Space. Users on various OSNs often post links from identical domains, which yields common nodes across the similarity graphs. The set of pairs of nodes representing the same domain name on different OSNs is noted N. These nodes are used as anchors to compute a common latent space for all similarity graphs. To do so, the similarity graphs of each OSN are linked by adding edges between pairs of nodes, leading to a cross-platform similarity graph. An embedding of the nodes of this cross-platform similarity graph is then computed using Spectral Embedding [6]. The embedding of a node n is noted e_n. This relatively simple embedding allows discrimination of nodes present in different connected components and does not involve any prior training. As the pairs of nodes of N represent the same domain name on different OSNs, their embeddings must be highly similar. To maximise this similarity without altering the similarity between embeddings within a given OSN, the embeddings of each OSN are multiplied by an orthogonal matrix O*, computed by solving the Orthogonal Procrustes Problem for the embeddings of the anchor nodes [32]:

$$O^* = \operatorname*{arg\,min}_{O O^T = I} \sum_{(n,m) \in N} \|O e_n - e_m\|^2 \quad (2)$$
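Equation (2) admits a closed-form solution via the singular value decomposition; a minimal sketch using SciPy's solver, with illustrative variable names, follows.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def align_embeddings(emb_a, emb_b, anchors_a, anchors_b):
    """Rotate platform A's node embeddings onto platform B's space using
    anchor domains (same domain name observed on both OSNs).

    emb_a, emb_b: (n_nodes, d) arrays of spectral embeddings;
    anchors_a, anchors_b: index arrays of the matching anchor rows."""
    A = emb_a[anchors_a]
    B = emb_b[anchors_b]
    # Solves min ||A @ O - B||_F over orthogonal O (Eq. (2)).
    O, _ = orthogonal_procrustes(A, B)
    return emb_a @ O
```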
User Embedding. The user embeddings are defined as the frequency-weighted average of the embeddings of the domain names they posted. These embeddings are only computed for users who posted more than a minimal number of messages with links. The embeddings of users with too few messages might be of poor quality if these messages do not reflect their usual behaviour.

Community Linking. Pairs of users with high similarity, across OSNs or from the same OSN, can be created. These pairs are then used to quantify the proximity between the communities previously detected. The similarity between two communities is measured with the proportion of users paired in each community and with the number of pairs over the number of paired users. These metrics illustrate the density of connections between the communities.
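A minimal sketch of the frequency-weighted user embedding; the minimum-activity threshold below is an arbitrary illustrative value, since the paper does not report the one it uses.

```python
import numpy as np

def user_embedding(domain_counts, domain_emb, min_links=5):
    """Frequency-weighted average of the embeddings of the domains a
    user posted; users with too few link posts are skipped.

    domain_counts: dict {domain_name: times posted by this user};
    domain_emb: dict {domain_name: aligned embedding vector}."""
    if sum(domain_counts.values()) < min_links:
        return None  # too little activity to trust the embedding
    vecs = np.array([domain_emb[d] for d in domain_counts])
    freqs = np.array(list(domain_counts.values()), dtype=float)
    return (freqs / freqs.sum()) @ vecs
```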
4.3 Overview of the Framework
An overview of the framework for two OSNs is shown in Fig. 1. First, for each OSN, a multi-layer network is created from the various co-interactions. Community detection is then performed on each layer of each network, and the results are combined using the IPVC algorithm. The similarity graphs, defined in Sect. 4.2, are also created for each OSN. These graphs are linked through anchor nodes to create a
cross-platform similarity graph, which is then embedded using Spectral Embedding. The similarity between the embeddings of anchor nodes is maximised by using an orthogonal transformation. An embedding of the users is then obtained by computing the average of the embeddings of the domain names. Finally, communities sharing users with highly similar embeddings are linked.

Table 1. Clustering results with 1000 simulations on the politic-ie dataset

Clustering method      AMIS            ARS
Single (best layer)    0.835 ± 0.016   0.891 ± 0.012
Single (worst layer)   0.035 ± 0.025   0.034 ± 0.024
Collapsed              0.761 ± 0.053   0.770 ± 0.092
Similarity             0.859 ± 0.008   0.893 ± 0.003
Consensus              0.852 ± 0.025   0.908 ± 0.038

5 Results
Community Detection Benchmark. The method presented in Sect. 4.1 was used on the politic-ie dataset. To assess the performance of the method, two measures were computed on the detected communities: the Adjusted Mutual Info Score (AMIS) and the Adjusted Rand Score (ARS) [10]. These two scores are commonly used to measure agreement between partitions and are adjusted so that the expected value is zero when the partitions are made at random. Other community detection methods were tested, such as using the Louvain algorithm on a single layer, collapsing the network by summing the edge weights of each layer before using the Louvain algorithm, or computing a similarity graph [4]. Each method was applied 1000 times to get means and standard deviations (Table 1). The quality of the community detection using a single layer is highly variable, proving the benefits of a multi-layered approach. The method using the collapsed graph performs worse than the method with the best layer but is easier to use, as there is no need to choose a layer. The similarity and consensus methods provide the best results; their performances are close and depend on the metric considered. Our consensus method can thus be used on multi-layer networks.

Multi-layer Network. In this paragraph and the next two, the dataset used is the Cross-platform Dataset. Various layers are presented in Fig. 2: two graphs corresponding to co-domain, with time thresholds of an hour and a day respectively, and a third graph representing co-mention with a time threshold of a day. As expected, the co-interaction graph with a time threshold of an hour has a lower density than the others. Fewer interactions occur within an hour than within a day. The information in this graph is therefore important, as it indicates more tightly coordinated users. However, this graph
also contains fewer nodes; the other graphs are needed to identify the complete communities. The graphs of co-mention and co-domain also help in this way. By combining these graphs, more finely tuned communities can be identified.
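For reference, the two chance-adjusted scores of Table 1 correspond to standard scikit-learn metrics; a minimal sketch of the evaluation, with illustrative names, follows.

```python
from sklearn.metrics import adjusted_mutual_info_score, adjusted_rand_score

def evaluate(true_labels, pred_labels):
    """Agreement between detected communities and ground-truth political
    affiliations; both scores equal 0 in expectation for random partitions."""
    return {
        "AMIS": adjusted_mutual_info_score(true_labels, pred_labels),
        "ARS": adjusted_rand_score(true_labels, pred_labels),
    }
```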
Fig. 2. Various graphs of co-interaction computed on the Twitter dataset of the Crossplatform dataset. The node colours correspond to communities obtained with the Louvain algorithm. Left. Co-domain in an hour Middle. Co-domain in a day Right. Co-mention in a day
Cross-platform Domain Name Embedding. Computing the embeddings is the next step. To obtain significant similarity measures between domain names, only names posted more than 10 times in our datasets are used to create the similarity graphs. This results in a graph with 1048 nodes for Twitter and 584 nodes for Telegram. These two graphs are then linked using 105 pairs of anchor nodes before performing a spectral embedding. After the orthogonal rotation, the mean similarity between the two embeddings of an anchor is 0.999 and the minimal similarity is 0.992. A t-SNE of the embeddings is presented in Fig. 3. In addition to the anchors' embeddings, other embeddings from different OSNs appear to be highly similar; some examples can be seen at the top of Fig. 3. A post-hoc study shows that these nodes represent users with shared interests, which validates the approach. In some places, for example on the left of Fig. 3, only nodes from one OSN are present¹. This can be explained by the fact that the topics covered by the datasets do not totally overlap. To ensure that the embeddings are meaningful, the domain name extensions are studied. The embeddings corresponding to the 10 most frequent domain name extensions (except .com, .org, and .net) are shown in Fig. 3. The embeddings appear to be gathered by extension, for example at the bottom with the .ir extension or at the top with the .de extension. They are therefore relevant, as they enable identification of the geographical origin of a website. This gives us good hope that both graphs are embedded in a common meaningful space.

Community Linking. Finally, communities from the two OSNs can be matched using the metrics detailed in Sect. 4.2. The community on Telegram speaks
¹ It should be noted that once communities have been matched, OSN community structures can be studied in detail for other purposes.
Russian, while the Twitter one speaks English. Both communities discuss streaming video games and PlayStations. These communities share common interests, which are detected despite language differences. Two metrics are used to assess the quality of the matching. Messages related to co-interactions in each community are extracted and embedded using the Sentence-BERT model [30]. This embedding is then used to compute cosine similarities between communities. The average similarity between this gamer Telegram community and the other communities on Twitter is 0.02 (with s.d. of 0.08); meanwhile, the similarity between the two matched communities is 0.44.
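A sketch of this semantic check, assuming mean-pooled Sentence-BERT embeddings per community; the model checkpoint and aggregation choice are illustrative assumptions, not the paper's stated configuration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

def community_similarity(msgs_a, msgs_b):
    """Cosine similarity between the mean Sentence-BERT embeddings of
    the messages tied to co-interactions in each community."""
    ea = model.encode(msgs_a).mean(axis=0)
    eb = model.encode(msgs_b).mean(axis=0)
    return float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))
```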
Fig. 3. t-SNE of the domain names embeddings obtained with two OSNs on the Crossplatform Dataset. The metric used is the cosine similarity. Only domain name embeddings whose extension is among the 10 most frequent are kept. Left. OSNs of origin of the domain name. Right. Extension of the domain name
A similarity between two users can also be defined as the maximal cosine similarity between the messages of these users. This user similarity can then be averaged to obtain a similarity between communities. The community similarity between the two linked communities is 0.215, while the average similarity between one of the linked communities and the communities of the other OSN is 0.072 (with s.d. of 0.047). These metrics confirm that the heuristic used to match communities is meaningful from a semantic point of view, but the alignment performed goes further than this: users and communities that are brought together are those sharing information sources and therefore narratives, hence the semantic similarity.

Detected Coordination Example. The framework was also applied to the Ukraine War Dataset. All the layers presented in Subsect. 4.1, or their equivalents on the corresponding OSN, coupled with three time thresholds (a minute, an hour, and a day), were used to create the multi-layer network. On each OSN, several types of coordinated communities were detected, such as:
– Bot-like users sharing a similar narrative at the same time. This community creates a clique in the co-message layer with a threshold of a minute. The
coordination is confirmed by the near-identical user descriptions. These facts suggest a communication agency or a troll farm.
– A community of Russian embassy accounts and unknown users sharing pro-Russian content. In this case, the coordination appears through the heavy edges in the various graphs with thresholds of an hour or even a day.
– Users whose only activity is to relay articles published by a given newspaper. These users are neighbours in the co-URL layers.
The embedding of domains and users also helps identify useful connections. For example, in the case of domain embeddings, American conspiracy websites have highly similar embeddings. Hence, at the user level, people who regularly promote these media have similar embeddings. Linking the coordinated communities to which these users belong brings together communities with common narratives. When these narratives are part of disinformation campaigns, it is legitimate to assume that these linked communities are coordinated. Another interesting example is the fact that the embeddings of journalists are very similar to the embeddings of the newspaper they work for. These embeddings therefore connect the different accounts of the newspapers and their journalists on the different OSNs. In this case, the coordination is obvious.
6 Discussion
In this article, the process of creating a multi-layer network encoding the various types of coordination was presented. Communities detected on various OSNs were then matched using domain similarity graphs built from the links posted by the users. Our clustering method was proven to be reliable on a labelled dataset. Then, two cross-platform datasets were used to illustrate the coordination detection and the community matching with examples. Finally, various metrics were introduced to demonstrate the effectiveness of the method. Different types of coordination have been detected between journalists, official entities, or users suspected of being bots. In addition, coordinated communities on different networks have been brought together as they post links from similar websites. The examples shown were gamers from different countries who speak different languages, conspiracy theorists, or journalists and their newspapers. Once communities have been linked, the various co-interactions between the users and their temporality can easily be observed on the multi-layer graph. This observation enables an analyst to identify the type of coordination and thus assess its authenticity. However, this method does not detect the source of coordination: for example, if a user asks others to retweet him, the coordination related to the retweets can be detected, but the user who posted the initial tweet will not be included in the community. Moreover, this method does not detect recurrent behaviours or co-interactions with a time span greater than the time threshold. To solve these problems, layers could be added to the multi-layer network, at the expense of the temporal and spatial complexity of the method.
As stated in Sect. 2, once communities have been detected, features can be extracted to get an embedding per community. Such embeddings have been used, in other articles, to discriminate the communities into two categories: coordinated or uncoordinated [34]. This classification would complete the CIB detection process. Furthermore, training an explainable classifier, such as an Explainable Boosting Machine [27], would allow a better understanding of the characteristics of these campaigns and of how they work. Finally, the proposed framework is extremely modular: the network layers, the clustering and embedding algorithms, and the community matching methods can be freely modified to adapt the method to the available resources.
References

1. Coordinated inauthentic behavior explained. https://about.fb.com/news/2018/12/inside-feed-coordinated-inauthentic-behavior/
2. Doppelganger - media clones serving Russian propaganda. https://www.disinfo.eu/doppelganger/
3. Raising online defenses through transparency and collaboration. https://about.fb.com/news/2023/08/raising-online-defenses/
4. Alimadadi, F., Khadangi, E., Bagheri, A.: Community detection in Facebook activity networks and presenting a new multilayer label propagation algorithm for community detection. Int. J. Mod. Phys. B 33(10), 1950089 (2019). https://doi.org/10.1142/S0217979219500899
5. Baumgartner, J., Zannettou, S., Squire, M., Blackburn, J.: The Pushshift Telegram dataset
6. Belkin, M., Niyogi, P.: Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 15(6), 1373–1396 (2003). https://doi.org/10.1162/089976603321780317
7. Blondel, V.D., Guillaume, J.L., Lambiotte, R., Lefebvre, E.: Fast unfolding of communities in large networks. J. Stat. Mech. 2008(10), P10008 (2008). https://doi.org/10.1088/1742-5468/2008/10/P10008
8. Bovet, A., Makse, H.A.: Influence of fake news in Twitter during the 2016 US presidential election. Nat. Commun. 10(1), 7 (2019). https://doi.org/10.1038/s41467-018-07761-2
9. Calatayud, J., Bernardo-Madrid, R., Neuman, M., Rojas, A., Rosvall, M.: Exploring the solution landscape enables more reliable network community detection. Phys. Rev. E 100(5), 052308 (2019). https://doi.org/10.1103/PhysRevE.100.052308
10. Chacón, J.E., Rastrojo, A.I.: Minimum adjusted Rand index for two clusterings of a given size. Adv. Data Anal. Classif. 17(1), 125–133 (2023). https://doi.org/10.1007/s11634-022-00491-w
11. De Domenico, M.: More is different in real-world multilayer networks. Nat. Phys. (2023). https://doi.org/10.1038/s41567-023-02132-1
12. Enryu: Fun with large-scale tweet analysis. https://medium.com/@enryu9000/fun-with-large-scale-tweet-analysis-783c96b45df4
13. Gao, S., Zhang, Z., Su, S., Yu, P.S.: REBORN: transfer learning based social network alignment. Inf. Sci. 589, 265–282 (2022). https://doi.org/10.1016/j.ins.2021.12.081
14. Graham, T., Bruns, A., Zhu, G., Campbell, R.: Like a virus: the coordinated spread of coronavirus disinformation
15. Greene, D., Cunningham, P.: Producing a unified graph representation from multiple social network views
16. Huang, X., Chen, D., Ren, T., Wang, D.: A survey of community detection methods in multilayer networks. Data Min. Knowl. Disc. 35(1), 1–45 (2021). https://doi.org/10.1007/s10618-020-00716-6
17. Jakesch, M., Garimella, K., Eckles, D., Naaman, M.: Trend alert: how a cross-platform organization manipulated Twitter trends in the Indian general election. Proc. ACM Hum.-Comput. Interact. 5, 1–19 (2021). https://doi.org/10.1145/3479523
18. Kudugunta, S., Ferrara, E.: Deep neural networks for bot detection. Inf. Sci. 467, 312–322 (2018). https://doi.org/10.1016/j.ins.2018.08.019
19. Lei, T., Ji, L., Wang, G., Liu, S., Wu, L., Pan, F.: Transformer-based user alignment model across social networks. Electronics 12(7), 1686 (2023). https://doi.org/10.3390/electronics12071686
20. Liu, L., Zhang, Y., Fu, S., Zhong, F., Hu, J., Zhang, P.: ABNE: an attention-based network embedding for user alignment across social networks. IEEE Access 7, 23595–23605 (2019). https://doi.org/10.1109/ACCESS.2019.2900095
21. Malhotra, A., Totti, L., Meira Jr., W., Kumaraguru, P., Almeida, V.: Studying user footprints in different online social networks. https://doi.org/10.48550/arXiv.1301.6870
22. Micallef, N., Sandoval-Castañeda, M., Cohen, A., Ahamad, M., Kumar, S., Memon, N.: Cross-platform multimodal misinformation: taxonomy, characteristics and detection for textual posts and videos. Proc. Int. AAAI Conf. Web Soc. Media 16, 651–662 (2022). https://doi.org/10.1609/icwsm.v16i1.19323
23. Moore, M.: Fake accounts on social media, epistemic uncertainty and the need for an independent auditing of accounts. Internet Policy Rev. (2023). https://doi.org/10.14763/2023.1.1680
24. Morstatter, F., Shao, Y., Galstyan, A., Karunasekera, S.: From Alt-Right to Alt-Rechts: Twitter analysis of the 2017 German federal election. In: Companion of The Web Conference 2018 (WWW '18), pp. 621–628. ACM Press (2018). https://doi.org/10.1145/3184558.3188733
25. Murero, M.: Coordinated inauthentic behavior: an innovative manipulation tactic to amplify COVID-19 anti-vaccine communication outreach via social media. Front. Sociol. 8, 1141416 (2023). https://doi.org/10.3389/fsoc.2023.1141416
26. Nguyen, N., Caruana, R.: Consensus clusterings. In: Seventh IEEE International Conference on Data Mining (ICDM 2007), pp. 607–612 (2007). https://doi.org/10.1109/ICDM.2007.73
27. Nori, H., Jenkins, S., Koch, P., Caruana, R.: InterpretML: a unified framework for machine learning interpretability
28. Pierri, F., Artoni, A., Ceri, S.: HoaxItaly: a collection of Italian disinformation and fact-checking stories shared on Twitter in 2019. https://doi.org/10.48550/arXiv.2001.10926
29. Pierri, F., Piccardi, C., Ceri, S.: A multi-layer approach to disinformation detection in US and Italian news spreading on Twitter. EPJ Data Sci. 9(1), 1–17 (2020). https://doi.org/10.1140/epjds/s13688-020-00253-8
30. Reimers, N., Gurevych, I.: Sentence-BERT: sentence embeddings using Siamese BERT-networks. https://doi.org/10.48550/arXiv.1908.10084
31. Satuluri, V., et al.: SimClusters: community-based representations for heterogeneous recommendations at Twitter. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '20), pp. 3183–3193. Association for Computing Machinery (2020). https://doi.org/10.1145/3394486.3403370
32. Schönemann, P.H.: A generalized solution of the orthogonal Procrustes problem. Psychometrika 31(1), 1–10 (1966). https://doi.org/10.1007/BF02289451
33. Sharma, K., Zhang, Y., Ferrara, E., Liu, Y.: Identifying coordinated accounts on social media through hidden influence and group behaviours. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1441–1451. ACM (2021). https://doi.org/10.1145/3447548.3467391
34. Vargas, L., Emami, P., Traynor, P.: On the detection of disinformation campaign activity with network analysis. https://doi.org/10.48550/arXiv.2005.13466
35. Weber, D., Neumann, F.: Amplifying influence through coordinated behaviour in social networks. Soc. Netw. Anal. Min. 11(1), 111 (2021). https://doi.org/10.1007/s13278-021-00815-2
36. Wilson, T., Starbird, K.: Cross-platform disinformation campaigns: lessons learned and next steps. Harv. Kennedy Sch. Misinformation Rev. 1(1) (2020). https://doi.org/10.37016/mr-2020-002
37. Zhang, C., Gupta, A., Kauten, C., Deokar, A.V., Qin, X.: Detecting fake news for reducing misinformation risks using analytics approaches. Eur. J. Oper. Res. 279(3), 1036–1052 (2019). https://doi.org/10.1016/j.ejor.2019.06.022
38. Zhang, Y., Sharma, K., Liu, Y.: Capturing cross-platform interaction for identifying coordinated accounts of misinformation campaigns. In: Kamps, J., et al. (eds.) Advances in Information Retrieval. Lecture Notes in Computer Science, pp. 694–702. Springer, Heidelberg (2023). https://doi.org/10.1007/978-3-031-28238-6_61
Time-Dynamics of (Mis)Information Spread on Social Networks: A COVID-19 Case Study

Zafer Duzen1(B), Mirela Riveni2, and Mehmet S. Aktas1

1 Department of Computer Engineering, Yildiz Technical University, Istanbul, Turkey
[email protected], [email protected]
2 Information Systems Group, University of Groningen, Groningen, The Netherlands
[email protected]
Abstract. In our study, we investigate the persistence of misinformation in social networks, focusing on the longevity of discussions related to misinformation. We employ the CoVaxxy dataset, which encompasses COVID-19 vaccine-related tweets, and classify tweets as reliable or unreliable based on a list of non-credible sources/accounts. We construct separate networks for retweets, replies, and mentions, applying centrality metrics (degree, betweenness, closeness) to assess tweet significance. Our objective is to determine how long tweets associated with non-credible sources remain active. Our findings reveal a noteworthy correlation: tweets with longer lifespans tend to be influential nodes within the network, while shorter-lived tweets have less impact. By shedding light on the longevity of misinformation within social networks, our research contributes to a better understanding of misinformation propagation dynamics. These insights can inform strategies to combat misinformation during public health crises like the COVID-19 pandemic.

Keywords: misinformation · social networks · large scale networks · network science · centrality metrics
1 Introduction
The pervasive spread of misinformation in recent years has become a pressing concern, particularly during public health crises. As the world grappled with the COVID-19 pandemic, social media platforms, as powerful communication channels, also emerged as one of the primary sources of dissemination of inaccurate and misleading information [12]. This study delves into the analysis of misinformation using centrality metrics on a dataset obtained from Twitter. Specifically, our focus lies in understanding the duration of tweets from non-credible sources within the network, providing insights into their impact and significance.
To conduct this study, we leveraged the CoVaxxy dataset [20], a comprehensive collection of COVID-19 vaccine-related tweets. This dataset is publicly available only as tweet IDs, so we hydrated the IDs to recreate the dataset using the Twitter API. In order to differentiate between reliable and unreliable sources, we meticulously labeled tweets based on a curated list of non-credible sources/accounts.

Our research seeks to uncover patterns and dynamics of misinformation propagation by constructing separate networks for source tweets, retweets, replies, and quotes. By employing centrality metrics, including degree, betweenness, and closeness, we evaluate the importance of individual tweets within the network. We also ran the Louvain community-detection method [2]. One of the fundamental aspects of our analysis involves assessing the longevity of tweets related to non-credible sources, and analysing whether the number of reactions to a tweet also affects the longevity of reactions. We explore how long tweets remain prevalent, as a crucial indicator of the influence of the accounts that post them. Our findings reveal a correlation between the duration of a tweet's survival and the posting account's significance in the network. Tweets that persist for extended periods, and the accounts from which they are posted, emerge as key players in the propagation of misinformation, while those with shorter lifespans demonstrate diminished importance.

In addition to examining the lifespan of tweets, we also investigate temporal patterns in tweet activity throughout the day. By analyzing the times at which tweets are posted, we aim to identify potential patterns and variations in the dissemination of misinformation. Specifically, we delve into whether tweets from non-credible sources exhibit distinctive temporal patterns, providing valuable insights into the behavior and strategies employed by these sources in spreading misinformation. Thus, our main research question is: what insight can we get from a longitudinal analysis regarding network-related importance indicators in misinformation spread? By shedding light on the longevity of misinformation within social networks, our research contributes to a better understanding of the dynamics of misinformation propagation. These insights have the potential to inform the development of targeted strategies aimed at mitigating the spread and impact of misinformation in the context of public health crises, such as the COVID-19 pandemic.
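A minimal sketch of the centrality computations and community detection described above, using NetworkX; the graph construction and names are illustrative, not the authors' pipeline.

```python
import networkx as nx

def rank_nodes(G):
    """Degree, betweenness, and closeness centrality for every node of a
    retweet/reply/mention network, plus Louvain communities."""
    centralities = {
        "degree": nx.degree_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
        "closeness": nx.closeness_centrality(G),
    }
    # Louvain community detection (available in NetworkX >= 3.0)
    communities = nx.community.louvain_communities(G, seed=42)
    return centralities, communities
```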
2 Related Work
The issue of misinformation within social networks has attracted significant attention in recent years, especially in the context of the COVID-19 pandemic. Researchers have explored various aspects of misinformation spread, detection, and the effectiveness of interventions [7,18]. This section provides a comprehensive overview of the related work in this field. Numerous studies have examined the impact of fact-checking interventions on the propagation of viral hoaxes and false information. Authors in [1] discuss challenges in this area. Investigations have demonstrated that early detection
and prompt fact-checking can play a crucial role in mitigating the spread and influence of misinformation [9,16,17]. By effectively curbing the reach and virality of hoaxes, fact-checking interventions have shown promise in countering false narratives within social networks [15]. Research has delved into the complex dynamics of online misinformation, examining not only the spread of false information but also the psychological and sociological aspects surrounding its creation and consumption [19,22]. In the research presented in [14], the authors investigate the multifaceted nature of misinformation, exploring the motivations and strategies of deceivers as well as the vulnerabilities and cognitive biases that make individuals susceptible to deception. By examining the interplay between deceivers and victims, the study sheds light on the mechanisms through which false information proliferates within online environments. This research contributes to a deeper understanding of the factors that perpetuate the spread of misinformation and emphasizes the importance of addressing both the spreaders of misinformation and those susceptible to it. Furthermore, researchers have examined the characteristics and behavior of influential users within social networks, often referred to as information leaders [8]. These individuals play a pivotal role in shaping public discourse and influencing the spread of information [11]. By understanding the dynamics of information leaders’ influence, researchers aim to devise strategies that can leverage these influential figures to counteract the spread of misinformation effectively [10]. The authors in [3] studied the efficacy of metrics for identifying super-spreaders and introduced the FIB Index, a metric based on the h-index, which they report to be the most efficient. The longitudinal analysis of misinformation during the COVID-19 pandemic has provided valuable insights into the changing patterns of information dissemination. Studies have examined the evolution of misinformation over time, tracking the emergence, spread, and persistence of false narratives. These investigations shed light on the lifespan of misinformation within social networks and provide a deeper understanding of the dynamics of information propagation during public health crises [13].
3 Methodology

3.1 Data Collection and Dataset Description
The data utilized in this study was derived from the publicly available CoVaxxy dataset, a collection of English tweets pertaining to COVID-19 vaccination. The Observatory on Social Media at Indiana University monitors and analyzes the CoVaxxy dataset with the aim of comprehending the influence of online information on COVID-19 vaccine acceptance and its associated health implications [4]. This continuously updated and real-time database grants public access to an extensive repository of English-language tweets centered around vaccines. Spanning from January 2021 to January 2022, the dataset was procured by conducting daily queries using specific keywords. For the purposes of this study,
tweets sent between March 1 and March 31, 2021 were selected for retrieval attempts. To retrieve the tweets, the tweet IDs provided by the authors in [4] were employed.
Fig. 1. The figure provides an overview of the process that has been designed to analyze the spread of misinformation.
Access to the tweets was facilitated through the Twitter API, specifically version 2, by hydrating the already available IDs. Privacy-related fields, such as name, full name, username, and location information, were excluded as they were deemed unnecessary for the analysis. Following each request, the data was stored in JavaScript Object Notation (JSON) files. The JSON file structure consisted of the ‘data’ property, housing the successfully downloaded tweets, the ‘errors’ property, covering requests that could not be fulfilled, and the ‘includes’ field, containing supplementary information for successful response tweets. The data was stored in a daily folder with the respective date, using the JSON format, facilitating our streamlined processing with Python libraries. A Python script was executed for the selected day, processing all the JSON files within the folder to generate a comprehensive data report.
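To make the storage layout concrete, the following minimal sketch reads the daily JSON response files described above and flattens the hydrated tweets into a CSV report. The folder naming, file globbing, and retained fields are illustrative assumptions rather than the exact pipeline code.

```python
import json
from pathlib import Path

import pandas as pd

def load_daily_tweets(day_folder):
    """Collect successfully hydrated tweets from every JSON response
    file stored in one daily folder."""
    tweets = []
    for json_file in sorted(Path(day_folder).glob("*.json")):
        with open(json_file, encoding="utf-8") as f:
            response = json.load(f)
        # 'data' holds the hydrated tweets; 'errors' lists IDs that
        # could not be retrieved and is not needed for the report.
        tweets.extend(response.get("data", []))
    return tweets

# Flatten the fields needed downstream into a daily CSV report
# (field names are assumptions based on the Twitter API v2 schema).
tweets = load_daily_tweets("2021-03-01")
report = pd.DataFrame(
    {"tweet_id": t["id"], "text": t["text"], "created_at": t.get("created_at")}
    for t in tweets
)
report.to_csv("2021-03-01_tweets.csv", index=False)
```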
3.2 Longitudinal Analysis Methods
In this study, we utilize the modules of a software architecture that we have developed and continue to extend, which processes real-time social network data, including replies, mentions, and retweets, in order to detect the involvement of fake accounts in the spread of misinformation [6]. Our process consists of several modules: Data Collection, Data Preprocessing, Data Annotation, Network Creation, Centrality Calculator, Community Detection, and Misinformation Detection. Each module serves a specific objective and performs various functionalities within the process. In this study, Twitter data acquisition utilized the Twitter REST API, offering programmatic access to diverse Twitter data types, including tweets and user accounts. Academic research access, granting the capacity to query up to
10 million tweets per month, was crucial for our research purposes. The collected data underwent preprocessing, with each tweet individually scrutinized to extract and store pertinent information in comma-separated values (CSV) format. Data annotation involved the application of automatic labeling, generating a distinct CSV file containing tweet ID, tweet text, and reliability flags. To detect misinformation, the dataset is compared to a list of non-credible sources obtained from the Iffy.news website (https://iffy.news/), which rates news sources based on their track record of publishing accurate information. In alignment with the utilization of this website for the automated labeling of unreliable tweets in the study from which the CoVaxxy Twitter data was originally acquired [4], the identical data source was employed in the present investigation.

The Network Creation module models the dataset as a social network graph using retweets, mentions, and replies. Nodes represent individual usernames, and interactions are represented as edges. The module generates a data frame of edges, indicating the source and target nodes. Network centrality metrics are calculated using the Centrality Calculator module. Degree centrality, closeness centrality, and betweenness centrality are utilized to measure the importance of nodes in the network and identify influential members. Degree centrality is the simplest and most common measure of centrality: it counts the number of direct connections or edges a node has with other nodes in the network. Betweenness centrality measures how often a node lies on the shortest path between two other nodes. Closeness centrality measures how close a node is to all other nodes in the network [21].

Community Detection analyzes the graph outputs from the Network Creation module to identify communities within the Twitter dataset. By examining neighborhood information and extracting communities from the retweet, mention, and reply networks, this module provides insights into how misinformation spreads within different groups on social media.

The Misinformation Survival Detection module serves as a pivotal component that assesses the temporal dynamics of tweets marked as non-credible within the analyzed dataset. The persistence of a tweet within the network indicates sustained user engagement and response. Noteworthy interactions include retweets, likes, replies, and mentions, which signify continued interest and dissemination of the tweet. This module investigates the occurrence of such reactions during the 20 days following the selected day, thereby ascertaining the duration of a tweet’s presence within the network. In order to classify tweets based on their survival duration, the module scrutinizes the longevity of each tweet. Tweets that remain observable within the network for a period ranging from 1 to 3 days are categorized as short-survived tweets, denoting relatively transient engagement. Similarly, tweets that endure for 4 to 6 days are classified as mid-survived tweets, indicating a moderate level of persistence. Tweets surpassing this time frame are classified as long-survived tweets, signifying sustained user responses that extend beyond the specified duration.
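The sketch below illustrates, under assumed file and column names, how the Centrality Calculator and the survival classification can be realized with networkx and pandas; it is a simplified stand-in for the actual modules.

```python
import networkx as nx
import pandas as pd

# Edge list produced by the Network Creation module (file and column
# names are illustrative assumptions).
edges = pd.read_csv("retweet_edges_2021-03-01.csv")  # columns: source, target
G = nx.from_pandas_edgelist(edges, "source", "target", create_using=nx.DiGraph)

# The three centrality metrics used by the Centrality Calculator module.
# For large graphs, betweenness can be approximated via its `k` parameter.
centrality = pd.DataFrame({
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
})

def survival_class(days_with_reactions):
    """Bucket a tweet by how many days it kept receiving reactions."""
    if days_with_reactions <= 3:
        return "short-survived"
    if days_with_reactions <= 6:
        return "mid-survived"
    return "long-survived"
```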
The Reporting module represents the conclusive component of the research framework, responsible for producing a comprehensive report on the survival dynamics of non-credible tweets within the retweet, mention, and reply networks for the chosen day of analysis. The generated report serves as a valuable resource for understanding the dissemination and impact of misinformation in these networks. A representative sample of the report is depicted in Table 1, showcasing pertinent information pertaining to each non-credible tweet.
4 Longitudinal Analysis

4.1 Tweet Intensity
This study includes an analysis that examines the temporal patterns of noncredible tweets by investigating the specific time intervals in which the tweets are posted throughout the day. The objective is to determine whether the timing of non-credible tweets follows a distinct pattern compared to other tweets. To achieve this, the dataset “CoVaxxy” [20] was utilized, encompassing all tweets from the first day of each month between 2021 and 2022, as well as all tweets from March 2021. The timestamps of these tweets were collected and used to generate graphical representations, as illustrated in Fig. 2a, Fig. 2b, Fig. 3a and Fig. 3b respectively.
Fig. 2. Tweet sent time analysis for the first day of each month
The analysis of tweets as a collective entity, without filtering for specific days, reveals patterns in their temporal distribution. As observed in Fig. 2a and Fig. 3a, a consistent trend emerges, showing an increase in tweet activity during the evening hours, particularly between 5 pm and 8 pm. This peak in tweeting activity may be attributed to various factors, such as users being more active and engaged on social media platforms during leisure hours or the prevalence of certain topics or events that drive higher participation; however, this remains an assumption on our part. On the other hand, the lowest volume of tweets is observed between 4 am and 6 am, which may be attributed to reduced user activity during early morning hours when individuals are typically asleep or engaged in other offline activities.
However, when examining tweets labeled as misinformation, a distinct pattern is not evident. Figure 2b and Fig. 3b illustrate that tweets originating from non-credible sources are shared throughout all hours of the day, without any noticeable variations in their temporal distribution. This lack of a discernible pattern suggests that misinformation, unlike overall tweet activity, does not exhibit a specific time-dependent trend. Instead, it highlights the continuous and pervasive dissemination of false information from non-credible sources at all hours, emphasizing the persistent nature of the misinformation problem, and adding to the complexity of dealing with misinformation spread.
Fig. 3. Tweet sent time analysis for the 10 consecutive days in March 2021
4.2 (Mis)information Longevity in Networks
In the pursuit of conducting a misinformation survival analysis, we carefully curated a subset of tweets, spanning the temporal interval from March 1, 2021, to March 31, 2021, for thorough examination. This endeavor was executed in a sequential manner, commencing with a focused examination of the dataset pertaining to the specific date of March 1. To ensure the data’s suitability for rigorous analysis, an initial preprocessing stage was undertaken, followed by the storage of the processed data in the CSV format. Subsequently, the data annotation module was employed to discern and categorize tweets that emanated from sources lacking credibility, thereby designating them as instances of misinformation. Furthermore, retweet, mention, and reply networks were constructed to represent the interactions and reactions on the designated day. Figure 4 shows the graphs for the selected day from the network creation and community detection modules, which we visualize with matplotlib. For each network, essential network metrics such as degree, betweenness, and closeness centrality were computed for individual nodes, offering valuable insights into their prominence and influence within the network. In addition, the Louvain algorithm was implemented to identify distinct communities within each network, unveiling cohesive groups of nodes that exhibit higher internal connectivity.
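As a minimal sketch, the community-detection step can be reproduced with networkx's Louvain implementation, reusing the interaction graph G built in the earlier centrality sketch; the undirected projection and fixed seed are our assumptions.

```python
from networkx.algorithms.community import louvain_communities

# Louvain operates on an undirected graph, so we project the directed
# interaction network first; the seed fixes the (randomized) partition.
communities = louvain_communities(G.to_undirected(), seed=42)

# Map each user to a community index for cross-referencing with
# centrality scores and tweet-survival classes.
membership = {node: idx for idx, comm in enumerate(communities) for node in comm}
```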
Fig. 4. Network graphics depicting user interactions within the downloaded tweets for March 1, 2021, organized based on communities

Table 1. A sample report for a misinformation labeled tweet on 2021-03-01.
Continuing the analysis, the cumulative reactions received by the tweets marked as misinformation were calculated over a span of 20 days following the selected date, providing an indication of the duration these non-credible tweets remained present within the network. Based on their respective lengths of persistence, we categorized the tweets into three survival types: a) short-survived (1 to 3 days), b) mid-survived (4 to 6 days), and c) long-survived (exceeding 6 days). By comparing this longevity information with the centrality metrics obtained for each node, it was possible to discern the significance of nodes associated with misinformation within their respective communities. For illustrative purposes, a representative report is presented in Table 1, focusing on data for the date 2021-03-01. Of particular note, our analysis revealed that nodes that were identified as prolific propagators of misinformation and were classified as “long-survived” within the network exhibited a proclivity to occupy influential positions within their respective communities. This observation strongly suggests the pivotal role played by such nodes in the dissemination of information, underscoring their consequential impact on the network dynamics. Conversely, we postulated the hypothesis that tweets categorized as “short-survived” misinformation would likely correspond to nodes of diminished importance within their respective communities. The veracity of this hypothesis
was scrutinized within community structures harboring a substantial number of nodes, thereby enriching our understanding of the intricate interplay between misinformation and network centrality. To ensure the robustness of our findings, these analyses were repeated for each day until March 20, allowing for the validation and verification of the inferences. The accuracy of the hypothesis was assessed specifically within communities characterized by a significant number of nodes, highlighting the consistency of the findings in such contexts. However, it should be noted that in communities with a smaller number of nodes, the observations revealed a mixed pattern, with the correctness of the inferences varying depending on the specific case. In addition, we found that even long-survived tweets usually do not receive a reaction on the network after 20 days. Thus, one of our conclusions is that misinformation longevity is around 20 days. Additional results for the remaining days can be accessed through the application’s GitHub repository, ensuring comprehensive access to the complete set of outcomes [5].
Fig. 5. User reactions to non-credible tweets in the next 20 days
Our analyses and the corresponding visual representations bring us to a noteworthy observation. Tweets originating from non-credible sources and labeled as misinformation tend to diminish in their impact and presence after the 5th day. This suggests a declining influence of such tweets within the network as time progresses, as one might also conclude intuitively. In addition, it is worth mentioning that tweets receiving prolonged reactions and engagement exhibit characteristics of influential nodes within the network. This finding confirms the hypothesis posited earlier: tweets from non-credible sources posted by accounts with significant centrality values survive longer, effectively contributing to the spread of misinformation. Conversely, tweets from nodes deemed less significant disappear swiftly from the network, indicating their limited role in perpetuating misinformation. Moreover, as can be seen from the total reaction counts in Table 1, we cannot conclude from our analysis whether the number of reactions affects tweet longevity in terms of reactions over time.
4.3 Short Discussion on Tweet Labels
We want to note that the non-credible sources whose tweets we label as misinformation are taken from a publicly available list, the Iffy index of Unreliable Sources. These are mainly media sources that are known to spread misinformation. We labeled the tweets according to that list; we did not conduct misinformation detection, and we do not claim that the tweets we labeled as misinformation are indeed such. Rather, we used an already available list that is widely used and appears appropriate for the type of analysis we have conducted.
5 Conclusion and Future Work
In conclusion, this research presents a longitudinal study on the spread of misinformation within social networks, focusing on the duration that misinformation-related discussions remain prevalent. By analyzing a comprehensive Twitter dataset named CoVaxxy [4], which consists of COVID-19 vaccine-related tweets, the study labeled tweets as reliable or unreliable based on a curated list of non-credible sources/accounts. Separate networks were constructed for source tweets, retweets, replies, and mentions, and centrality metrics were applied to evaluate the significance of individual tweets within the network. Thus, we focused on the centrality metrics, the longevity of reactions, and the number of reactions as key indicators of an account’s network influence in spreading misinformation. The findings of this study revealed a correlation between the longevity of reactions to a tweet and its importance within the network. Tweets that persisted for longer periods were identified as influential nodes in the network, while tweets with shorter lifespans had less impact. By shedding light on the longevity of misinformation within social networks, this research contributes to a better understanding of the dynamics of misinformation propagation. These insights have the potential to inform strategies aimed at mitigating the spread and impact of misinformation in the context of public health crises such as the COVID-19 pandemic. Future work in this area can build upon the findings of this study and explore additional research directions. Firstly, investigating the temporal patterns of tweet activity throughout the day can provide further insights into the behavior and strategies employed by sources of misinformation. By analyzing the timing of non-credible tweets, it may be possible to identify distinct patterns and variations in the dissemination of misinformation. This knowledge can inform the development of targeted countermeasures. Furthermore, one of the goals of this study was to examine the characteristics and behavior of influential users within social networks, known as information leaders. We found that accounts with high scores on some centrality metrics also post tweets that receive reaction activity for a longer time. Understanding the dynamics of information leaders’ influence can help identify effective approaches for leveraging these influential figures to counteract the spread
of misinformation. In our future research we will explore more metrics and algorithms to identify super-spreaders. A comprehensive list of characteristics can help in developing efficient strategies for engaging with and neutralizing their influence. Our future work also includes applying the misinformation persistence time-based study to different datasets, so that we can investigate whether similar patterns exist and generalize results across datasets. Moreover, we are working on designing a user interface, so that we can release a tool which would accept social network data, compute centralities, and run clustering algorithms on these datasets. Additionally, investigating the effectiveness of fact-checking interventions, e.g., replies with corrected information, and their impact on the propagation of misinformation remains a critical area of study.
References

1. Ahvanooey, M.T., Zhu, M.X., Mazurczyk, W., Choo, K.K.R., Conti, M., Zhang, J.: Misinformation detection on social media: challenges and the road ahead. IT Professional 24(1), 34–40 (2022). https://doi.org/10.1109/MITP.2021.3120876
2. De Meo, P., Ferrara, E., Fiumara, G., Provetti, A.: Generalized Louvain method for community detection in large networks. In: 2011 11th International Conference on Intelligent Systems Design and Applications, pp. 88–93 (2011). https://doi.org/10.1109/ISDA.2011.6121636
3. DeVerna, M.R., Aiyappa, R., Pacheco, D., Bryden, J., Menczer, F.: Identification and characterization of misinformation superspreaders on social media. CoRR abs/2207.09524 (2022). https://doi.org/10.48550/arXiv.2207.09524
4. DeVerna, M.R., et al.: CoVaxxy: a collection of English-language Twitter posts about COVID-19 vaccines. In: Proceedings of the International AAAI Conference on Web and Social Media 15(1), 992–999 (2021). https://ojs.aaai.org/index.php/ICWSM/article/view/18122
5. Duzen, Z.: covaxxy-data-mining. https://github.com/duzenz/covaxxy-data-mining (2023). Accessed 9 June 2023
6. Duzen, Z., Riveni, M., Aktas, M.: Processes for misinformation spread-analysis on social networks: a COVID-19 case study. IEEE CPS Xplore (2023). Accepted for publication
7. Lazer, D.M.J., et al.: The science of fake news. Science 359(6380), 1094–1096 (2018). https://doi.org/10.1126/science.aao2998
8. Massey, P.M., Kearney, M.D., Hauer, M.K., Selvan, P., Koku, E., Leader, A.E.: Dimensions of misinformation about the HPV vaccine on Instagram: content and network analysis of social media characteristics. J. Med. Internet Res. 22(12), e21451 (2020). https://doi.org/10.2196/21451
9. Nakov, P., et al.: The CLEF-2021 CheckThat! lab on detecting check-worthy claims, previously fact-checked claims, and fake news. In: Hiemstra, D., Moens, M.F., Mothe, J., Perego, R., Potthast, M., Sebastiani, F. (eds.) Advances in Information Retrieval, pp. 639–649. Springer International Publishing, Cham (2021)
10. Pastor-Escuredo, D., Tarazona, C.: Characterizing information leaders in Twitter during COVID-19 crisis. CoRR abs/2005.07266 (2020)
11. Petratos, P.N.: Misinformation, disinformation, and fake news: cyber risks to business. Business Horizons 64(6), 763–774 (2021). https://doi.org/10.1016/j.bushor.2021.07.012
12. Ratkiewicz, J., Conover, M., Meiss, M., Goncalves, B., Flammini, A., Menczer, F.: Detecting and tracking political abuse in social media. Proc. Int. AAAI Conf. Web Social Media 5(1), 297–304 (2021). https://doi.org/10.1609/icwsm.v5i1.14127
13. Roitero, K., et al.: Can the crowd judge truthfulness? A longitudinal study on recent misinformation about COVID-19. Pers. Ubiquitous Comput. 27(1), 59–89 (2023). https://doi.org/10.1007/s00779-021-01604-6
14. Shrestha, A., Spezzano, F.: Online misinformation: from the deceiver to the victim. In: Spezzano, F., Chen, W., Xiao, X. (eds.) ASONAM ’19: International Conference on Advances in Social Networks Analysis and Mining, Vancouver, British Columbia, Canada, 27–30 August 2019, pp. 847–850. ACM (2019). https://doi.org/10.1145/3341161.3343536
15. Tambuscio, M., Ruffo, G., Flammini, A., Menczer, F.: Fact-checking effect on viral hoaxes: a model of misinformation spread in social networks. In: Gangemi, A., Leonardi, S., Panconesi, A. (eds.) Proceedings of the 24th International Conference on World Wide Web Companion, WWW 2015, Florence, Italy, May 18–22, 2015, Companion Volume, pp. 977–982. ACM (2015). https://doi.org/10.1145/2740908.2742572
16. Vo, N., Lee, K.: Learning from fact-checkers: analysis and generation of fact-checking language. In: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’19, pp. 335–344. Association for Computing Machinery, New York, NY, USA (2019). https://doi.org/10.1145/3331184.3331248
17. Vogel, I., Meghana, M.: Detecting fake news spreaders on Twitter from a multilingual perspective. In: 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA), pp. 599–606. IEEE (2020)
18. Vosoughi, S., Roy, D., Aral, S.: The spread of true and false news online. Science 359(6380), 1146–1151 (2018). https://doi.org/10.1126/science.aap9559
19. Zafarani, R., Zhou, X., Shu, K., Liu, H.: Fake news research: theories, detection strategies, and open problems. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, pp. 3207–3208. Association for Computing Machinery, New York, NY, USA (2019). https://doi.org/10.1145/3292500.3332287
20. Zenodo: CoVaxxy tweet IDs data set. https://zenodo.org/record/5885700. Accessed 10 Jan 2022
21. Zhang, J., Luo, Y.: Degree centrality, betweenness centrality, and closeness centrality in social network. In: Proceedings of the 2017 2nd International Conference on Modelling, Simulation and Applied Mathematics (MSAM2017), pp. 300–303. Atlantis Press (2017). https://doi.org/10.2991/msam-17.2017.68
22. Zhou, X., Jain, A., Phoha, V.V., Zafarani, R.: Fake news early detection: a theory-driven model. Digital Threats 1(2) (2020). https://doi.org/10.1145/3377478
A Comparative Analysis of Information Cascade Prediction Using Dynamic Heterogeneous and Homogeneous Graphs

Yiwen Wu(B), Kevin McAreavey, Weiru Liu, and Ryan McConville

University of Bristol, Bristol, UK
{du20412,kevin.mcareavey,weiru.liu,ryan.mcconville}@bristol.ac.uk

Abstract. Understanding information cascades in social networks is a critical research area with implications in various domains, such as viral marketing, opinion formation, and misinformation propagation. In the information cascade prediction problem, one of the most important factors is the cascade structure of the social network, which can be described as a cascade graph, a global graph, or an r-reachable graph. However, the majority of existing studies focus on a single type of relationship within the social network, relying on homogeneous graph neural networks. We introduce two novel approaches for heterogeneous social network cascading and analyze whether heterogeneous social networks have higher predictive accuracy than homogeneous networks, taking into account the potential differential effects of temporal sequences on the models. Further, our work highlights that the selection of edge types plays an important role in the accuracy of predicting information cascades within social networks.

Keywords: Information Cascades · Heterogeneous social network · Homogeneous social network · Temporal Dynamics · Cascade Analysis
1 Introduction
The emergence of social media platforms, such as Twitter, Weibo and Instagram, has altered the way people connect with others and obtain information. These platforms facilitate the generation and rapid spread of messages, leading to the phenomenon of the information cascade [1] – a process where individuals follow the behaviour of the preceding individuals without consideration of their own information. The cascade effect is ubiquitous and can be observed in various domains, including academic paper citations. Predicting information cascades has become a crucial area of study to enable better control and acceleration of information dissemination, which can also be applied in many practical domains such as viral marketing [2], information safeguarding [3] and recommendations [4]. In the information cascade prediction problem, the potential information diffusion path in the future can be forecasted based on the current or historical cascade status. The diffusion path involves macro-level predictions [5] of overall cascade changes, such as popularity prediction, and micro-level [6,7] forecasts
about the next ‘infected’ user’s behavior or opinion adoption. Existing methods for information cascade prediction have devised various models, including generative models [5,8], feature-based methods [9,10], and deep learning methods [3,11]. Most of these methods emphasize the impact of temporal sequences and graph structures on prediction outcomes. Taking these important features forward into the domain of deep learning, most use Graph Neural Networks (GNNs) and Recurrent Neural Networks (RNNs). However, most current research studies homogeneous graphs, emphasizing the influence of specific cascade graphs or global graphs on final predictive outcomes. Yuan et al. [12] have extended this work to the heterogeneous level, effectively integrating multiple graph architectures and capturing temporal subtleties. In order to evaluate the implications of temporal features and to discern between the roles of homogeneous and heterogeneous graph structures, our research involves four distinct models for the popularity prediction problem. Firstly, we deploy a foundational homogeneous graph neural network as our initial point of reference for the study. Building upon this, we use the CasSeq- model, in which cascades are segmented into multiple snapshots processed through homogeneous graph neural networks, with Long Short-Term Memory (LSTM) networks capturing the temporal dynamics. Furthermore, we use the Heterogeneous Sample and Aggregation (HetSAGE) technique, designed for heterogeneous graph neural network strategies, to predict popularity. Finally, the HetSeq- method is designed to capture sequential patterns within the cascade snapshots in diverse and complex heterogeneous information networks. Moreover, we utilize MuMiN, a rich multilingual and multimodal dataset of Twitter data [13], particularly focusing on misinformation posts. This dataset provides a heterogeneous information network with multiple edge and node types in a social network. Through investigating these methods and integrating them into our framework, we aim to gain valuable insights into the influence of these factors on the accuracy and effectiveness of information cascade prediction on heterogeneous graphs. The contributions of this paper are as follows. Firstly, we demonstrate that a static heterogeneous graph outperforms a static homogeneous graph, providing evidence for the superior performance of heterogeneous graph representations in the information cascade prediction problem without consideration of temporal sequences. Secondly, while temporal features significantly improve the performance of homogeneous graph models such as the CasSeq- model, they do not have a similar impact when integrated into heterogeneous frameworks. Lastly, we identify the edge relationships that matter most for the information cascade prediction problem in this task; choosing the best heterogeneous graph structure significantly influences performance.
2 Related Work
Research in the domain of prediction of information items or cascades in social networks has witnessed a diverse array of models and methods. This section provides a short, focused overview of relevant approaches.
Feature-Based Models. Cascade prediction fundamentally relies on the interplay of various features that define an information cascade, including its content, the underlying diffusion topology, user attributes, and temporal patterns [14]. Certain attributes, especially those linked to users and temporal activity, have been observed to significantly influence prediction performance. Several studies have validated the effectiveness of features such as content topics or hashtags [15], as well as temporal and topological characteristics like tweet arrival rates and retweet relationships [16]. While these models provide explanatory insights into the prediction process, they often face scalability issues due to the handcrafted nature of their features.

Generative Models. A parallel strand of research employs generative models to emulate the temporal dynamics of information diffusion [17]. These models are particularly useful for their ability to capture and simulate the natural process of information spread. Models based on epidemic theories and stochastic point processes have been developed to explain and predict information cascades. The Poisson process and its variants, such as the self-exciting point process, are utilized in the study by [18]. Moreover, improvements to these models have been proposed, such as the integration of user influence metrics and decaying attention mechanisms in the Hawkes process to better predict tweet popularity [19]. Despite their interpretability, generative models struggle to capture intricate relationships among influential factors.

Deep Learning Models. Popularity prediction has been significantly advanced by deep learning techniques [14]. Tools such as Graph Neural Networks (GNNs), Recurrent Neural Networks (RNNs), and Reinforcement Learning (RL) have played key roles, with graph representation learning standing out for its ability to learn from nearby data points. The DeepCas model [20], which took inspiration from DeepWalk [21], was a notable early effort in this area. CasCN [22] was developed to understand both the structure and the time-based patterns of cascade graphs. InfGNN [23] offers insights by considering both nearby and distant influences. DyHGCN [12] goes a step further by using layers to study node interactions based on both repost timelines and existing social connections. MUCas [24] particularly emphasizes the direction, order, and position in cascades, offering a detailed understanding. Other models like CoupledGNN [25] and Cascade2vec [26] have also contributed valuable insights to the field.
3 Methods

3.1 Problem Definition
Throughout this study, we concentrate on the popularity prediction problem, which is the macro-level information cascade prediction problem.

Definition 1 (Popularity prediction). Given an information item $i$, the popularity prediction problem is to predict the popularity $P_i$ of item $i$ at a future prediction time $T_p$ based on the partial cascade observed up to time $T_o$, where $P_i$ is interpreted as the total number of users participating in this cascade.
$P_i$ can be considered a criterion for evaluating the popularity of a given information item, e.g., the number of retweets, replies, or citations. Our framework integrates four main models for the information cascade problem: 1) the homogeneous graph neural network model; 2) the CasSeq- model; 3) the heterogeneous graph neural network (HetSAGE) model; and 4) the HetSeq- model.
3.2 Homogeneous Graph Neural Network Model
This model specifically focuses on the cascade graph, which mirrors the information dissemination flow. Each node in the graph represents an individual user, characterized by their features.

Definition 2 (Cascade graph). A cascade graph is a static social network denoted as $G = (V, E)$, where $V$ contains all participants of the cascade and $E$ represents the relationships, such as retweeting or citing, between participants.

By incorporating graph neural network (GNN) models, including GCN [27], GAT [28], and GraphSAGE [29], into our framework, we can effectively capture the structural characteristics of the cascade graph, enabling better understanding and prediction of information cascade behavior.
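As an illustration, a minimal PyTorch Geometric sketch of such a graph-level regressor is given below. The use of PyG, mean pooling, and ReLU activations are our assumptions; the layer dimensions follow the hyperparameters reported in Sect. 4.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class CascadeGCN(torch.nn.Module):
    """Three-layer GCN that pools the node embeddings of a cascade graph
    into a single popularity estimate (layer sizes follow Sect. 4)."""

    def __init__(self, in_dim, dims=(512, 100, 32)):
        super().__init__()
        self.conv1 = GCNConv(in_dim, dims[0])
        self.conv2 = GCNConv(dims[0], dims[1])
        self.conv3 = GCNConv(dims[1], dims[2])
        self.out = torch.nn.Linear(dims[2], 1)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = F.relu(self.conv3(x, edge_index))
        # One pooled embedding per cascade graph in the mini-batch.
        return self.out(global_mean_pool(x, batch)).squeeze(-1)
```

The GCN layers can be swapped for GAT or GraphSAGE layers to obtain the other two variants compared in Sect. 5.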
3.3 CasSeq-Model
The CasSeq-model integrates the temporal feature by leveraging the sequence of cascade graphs. In this model, the concept of a cascade snapshot is introduced, which captures both the topological information of the cascade and the node features at a particular time.

Definition 3 (Cascade Snapshot). A cascade snapshot refers to a specific instance of a cascade graph at a particular time, capturing the topology of the cascade and the state of its nodes. It is denoted as $G^t = (V^t, E^t, X^t)$, where $V^t$ represents the set of participants who have taken part, $E^t$ denotes the relations between participants, and $X^t$ represents the node features at time $t$ during the observation.

The model is comprised of three primary components: a graph layer, a long short-term memory (LSTM) network, and a multi-layer perceptron (MLP). The graph layer is utilized to generate node embeddings that capture the structural characteristics of each cascade snapshot. In order to consider the temporal feature in the prediction of an information cascade, an LSTM kernel operates on the cascade sequences; the LSTM layer leverages its gating mechanism to selectively learn and remember previous information. After the sequence of snapshots has been processed by the LSTM layer to capture the temporal features, the resulting feature vector is fed into an MLP for final prediction. The MLP takes the temporal feature vector generated by the LSTM as input and outputs the predicted number of replies at the end of the observation window. A sketch of this pipeline follows.
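The sketch below assumes a graph encoder that returns one emb_dim-dimensional embedding per snapshot; layer sizes and dropout mirror the values reported in Sect. 4, while the exact wiring is our simplification.

```python
import torch

class CasSeqModel(torch.nn.Module):
    """Sketch of the CasSeq- pipeline: a shared graph encoder embeds each
    cascade snapshot, an LSTM consumes the embedding sequence, and an MLP
    maps the final hidden state to a popularity estimate."""

    def __init__(self, graph_encoder, emb_dim=32, hidden_dim=32):
        super().__init__()
        self.graph_encoder = graph_encoder  # returns (batch, emb_dim) per snapshot
        self.lstm = torch.nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                                  batch_first=True, dropout=0.5)
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(hidden_dim, hidden_dim), torch.nn.ReLU(),
            torch.nn.Dropout(0.5), torch.nn.Linear(hidden_dim, 1),
        )

    def forward(self, snapshots):
        # snapshots: list of (x, edge_index, batch) tuples, one per timestamp.
        embs = torch.stack(
            [self.graph_encoder(x, ei, b) for x, ei, b in snapshots], dim=1
        )  # shape: (batch, n_snapshots, emb_dim)
        _, (h_n, _) = self.lstm(embs)
        return self.mlp(h_n[-1]).squeeze(-1)
```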
3.4 HetSAGE Model
The HetSAGE (Heterogeneous Sample and Aggregation) model is designed to capture the features of heterogeneous information networks (HINs).

Definition 4 (Heterogeneous information network). Let $G = (V, E, X)$ be a static heterogeneous graph consisting of an entity set $V = \bigcup_{i=1}^{m} V_i$ of $m$ types, a relation set $E$ representing the spatial relations among entities, and a feature set $X = \bigcup_{i=1}^{m} X_i$, where $X_i$ is the feature set for entity type $V_i$. Each type of entity can have a set of features that describe its properties or characteristics.
Fig. 1. Framework of HetSAGE model
The HetSAGE model as illustrated by Fig. 1 effectively predicts popularity by sampling and aggregating information from diverse node and edge types. Neighbour sampling is employed for computational efficiency by sampling a fixed number of neighbours for each node. The selection of a fixed number of neighbours enables the model to focus on a subset of relevant nodes and relations. Moreover, in order to combine information from different types of relations, we use neighbour-level aggregation and edge-level aggregation in the HetSAGE model.

– Neighbour-level aggregator: After the neighbour sampling process, we get node $i$'s neighbours $N_r(i)$ for each type of relation $r$. According to the specified edge type $r$, the feature representation of node $i$, denoted as $h^l_i$, is updated:

$$h^{l+1}_{N_r(i)} = \text{N-aggregate}(\{h^l_j \mid j \in N_r(i)\})$$

$$h^{l+1}_i = \sigma\bigl(W \cdot \text{concat}(h^l_i, h^{l+1}_{N_r(i)})\bigr)$$

where $h^{l+1}_{N_r(i)}$ represents the aggregated neighbour feature, and N-aggregate is the neighbour-level aggregator.

– Edge-level aggregator: The neighbour-level aggregation generates $h^l_{r,i}$, representing the aggregated feature of node $i$ based on the specified relation $r$. To combine these features from different types of relations, the E-aggregator is used to update the aggregated feature of node $i$:

$$h^{l+1}_i = \text{E-aggregator}(h^l_{r,i}, \forall r \in E)$$
where $r$ ranges over all edge types connected to node $i$. Similar to the N-aggregator, the E-aggregator can be ‘max’, ‘min’, ‘mean’, or ‘stack’.
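One possible realization of this two-level aggregation, sketched below, uses DGL's HeteroGraphConv: a per-relation SAGE convolution serves as the neighbour-level aggregator, and the `aggregate` argument plays the role of the edge-level aggregator. DGL, equal feature dimensions across node types, and the omission of neighbour sampling are our assumptions; the per-relation module could equally be a GCN or GAT layer.

```python
import dgl.nn as dglnn
import torch

class HetSAGELayer(torch.nn.Module):
    """One HetSAGE-style layer: a per-relation SAGE convolution acts as
    the neighbour-level aggregator, while HeteroGraphConv combines the
    per-relation outputs as the edge-level aggregator."""

    def __init__(self, in_dim, out_dim, rel_names):
        super().__init__()
        self.conv = dglnn.HeteroGraphConv(
            {rel: dglnn.SAGEConv(in_dim, out_dim, "mean") for rel in rel_names},
            aggregate="mean",  # edge-level aggregator: 'mean', 'max', 'min', 'stack'
        )

    def forward(self, graph, feats):
        # feats: dict mapping node type -> feature tensor.
        return {ntype: torch.relu(h) for ntype, h in self.conv(graph, feats).items()}
```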
3.5 HetSeq-Model
The HetSeq-model takes as input an observed cascade sequence within a given observation time window $T$, as well as the corresponding heterogeneous information network. The cascade sequence is generated using the concept of a heterogeneous snapshot, capturing both the temporal dynamics and the structural characteristics of the cascade.

Definition 5 (Heterogeneous snapshot). To create a snapshot of a heterogeneous cascade, we combine the observed heterogeneous cascade graph at a given time $t$ with the observation time window $T$, defined as $G^t = (V^t, E^t, X^t)$, which captures all the information at the given time $t$. This creates a sequence of heterogeneous cascade graphs denoted by $\mathcal{G} = (G^{(1)}, \ldots, G^{(n)})$, where $n$ represents the number of timestamps within the window and $G^{(t)}$ represents the heterogeneous graph at timestamp $t$.

The model is comprised of three primary components: a HetSAGE layer, an LSTM, and an MLP, similar to those used in the CasSeq-model. The HetSAGE layer, a key component of the model, has a slightly different structure compared to the CasSeq-model: it learns the node embeddings of heterogeneous information networks for each timestamp. Because a separate HetSAGE layer must be trained for each cascade’s heterogeneous snapshot, the training process can become computationally demanding. To improve training efficiency, we introduce transfer learning [30] by leveraging a model pre-trained on one heterogeneous cascade graph without considering the temporal feature. Specifically, the HetSAGE layer is pre-trained on one heterogeneous cascade graph without considering timestamps, with the output being popularity. Then, we transfer the hyperparameters of each relationship from the pre-trained model into the HetSAGE layer of the HetSeq-model. This allows us to leverage the learned parameters and settings from the pre-training phase and apply them to the HetSeq-model for further training.
4 Experimental Setup
In this experiment, we utilize the MuMiN dataset [13], a multilingual graph of 21 million tweets related to information spread on Twitter. In our use of the MuMiN dataset, the cascade graph captures the relation between a source tweet and its associated replies, and the node features only reflect textual features. The heterogeneous information network is more detailed, as shown in Table 1, incorporating four primary node types and eight edge types.
Table 1. Edge types and their associated nodes.

Abbr    Edge type   Connects
dis     discusses   Tweet → Claim
pos     posted      User → Reply
rep     reply to    Reply → Tweet
quo     quote of    Reply → Tweet
fol     follow      User → User
U men   mentions    User → User
T men   mentions    Tweet → User
ret     retweeted   User → Tweet
According to Fig. 2, the distribution of the number of replies and retweets in the MuMiN dataset follows a logarithmic pattern: there is a large number of tweets with relatively few replies or retweets, while few tweets receive a higher number of replies or retweets. Therefore, MSLE (mean squared logarithmic error) is an appropriate choice for both the evaluation metric and the loss function. Additionally, we present the hyperparameters used for each of the four models under consideration: GNN, CasSeq-model, HetSAGE model, and HetSeq-model. For the GNN model, we used 4 heads in the GAT method. The model was built with a three-layer GNN structure having dimensions of 512, 100, and 32 for each layer, respectively. For the CasSeq-model, we used a two-layer graph encoder with dimensions 512 and 32, and an output of 32. The LSTM has two layers, each containing 32 units. The dropout of the LSTM and MLP is set to 0.5.
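For concreteness, a minimal MSLE implementation is sketched below; clamping predictions to non-negative values is our assumption to keep the logarithm well-defined.

```python
import torch

def msle_loss(pred, target):
    """Mean squared logarithmic error. log1p keeps zero counts
    well-defined and compresses the heavy tail of the popularity
    distribution."""
    pred = torch.clamp(pred, min=0.0)  # assumption: non-negative predictions
    return torch.mean((torch.log1p(pred) - torch.log1p(target)) ** 2)
```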
Fig. 2. The distribution of popularity of MuMiN dataset
For the HetSAGE model, we apply the ‘mean’ aggregation method at both the node and edge levels and use a two-layer heterogeneous architecture, with the dimension of each layer set to 1024 and the number of GAT heads set to 3. For the HetSeq-model, the output dimension of the heterogeneous layer is set to 32, and the LSTM is structured with two 32-unit layers.
In all models, we use a learning rate of 0.001, train for 150 epochs with a batch size of 100, and optimize all parameters with the Adam optimizer.
5 Results

5.1 Temporal Sequence in Heterogeneous vs. Homogeneous Graph
Our experiments led to several findings regarding the performance of models based on heterogeneous and homogeneous graphs with the consideration of temporal sequences, which are detailed in Table 2.

Table 2. Performance comparison of various models and their GNN variations on the MuMiN-small and MuMiN-medium datasets.

Methods   Model Type   MuMiN-small   MuMiN-medium
GNN       GCN          3.141         3.021
          GAT          3.243         3.159
          GraphSAGE    3.335         3.142
CasSeq-   GCN          1.232         1.117
          GAT          1.462         1.325
          GraphSAGE    1.373         1.2769
HetSAGE   GCN          2.571         1.754
          GAT          2.845         2.759
          GraphSAGE    2.932         2.478
HetSeq-   GCN          4.443         4.120
          GAT          4.817         4.532
          GraphSAGE    4.727         4.298
The results show that, without temporal features, models leveraging heterogeneous graphs achieve superior results compared to homogeneous graph models. Heterogeneous graphs capture more detail than homogeneous graphs by using varied nodes and edges, which provides a clearer understanding of data interactions. This detailed representation allows the node embeddings derived from heterogeneous graph neural networks to be more representative, leading to robust predictions. However, when combined with temporal sequences, the CasSeq model, which is integrated with homogeneous graphs, outperforms the HetSAGE model and the HetSeq model, which is combined with heterogeneous graphs. The introduction of a temporal sequence into homogeneous graph models enhanced their performance. This suggests that using these sequences helps capture how information popularity relates to time in these homogeneous models, which leads to better predictions. However, the introduction of temporal sequences into heterogeneous graph models failed to enhance performance. The complexity of heterogeneous graphs, where each node embedding is computed considering multiple neighboring relationships, might make it challenging to incorporate temporal
sequences. Moreover, the results show that GCN outperforms both GAT and GraphSAGE. This might be because most replies are directly in response to the source tweet in the MuMiN dataset.
5.2 Edge Selection in Heterogeneous Networks
In our extensive experiment, we considered a total of 128 combinations, drawn from eight distinct edge types, to understand the importance of each relationship in heterogeneous networks, as shown in Table 3. Moreover, since the label of ‘misinformation’ or ‘factual’ for tweet nodes is determined by the claims connected to them, the ‘dis’ relationship was not varied across the combinations in order to maintain label balance.

Table 3. The top 10 predicted results for both the MuMiN-small and MuMiN-medium datasets with their specific edge combinations.

MuMiN-small:
Rank  msle    edges  Edge Combo
1     2.6378  7      [dis, pos, rep, fol, T men, ret]
2     2.6493  8      [dis, rep, fol, U men, T men, ret]
3     2.7076  6      [dis, pos, rep, fol, U men, ret]
4     2.7609  5      [dis, pos, rep, fol, ret]
5     2.7660  6      [dis, pos, rep, U men, T men, ret]
6     2.7785  5      [dis, pos, rep, T men, ret]
7     2.7792  6      [dis, pos, rep, fol, ret, quo]
8     2.7871  8      [dis, pos, rep, fol, U men, T men, ret, quo]
9     2.7965  5      [dis, pos, rep, ret, quo]
10    2.8144  5      [dis, pos, rep, U men, ret]

MuMiN-medium:
Rank  msle    edges  Edge Combo
1     2.1975  8      [dis, pos, rep, fol, U men, T men, ret, quo]
2     2.3505  7      [dis, pos, rep, fol, T men, ret, quo]
3     2.3528  7      [dis, pos, rep, U men, T men, ret, quo]
4     2.3973  7      [dis, pos, rep, fol, U men, ret, quo]
5     2.3991  7      [dis, pos, rep, fol, U men, T men, ret]
6     2.4675  5      [dis, pos, rep, ret, quo]
7     2.4727  6      [dis, pos, rep, fol, ret, quo]
8     2.4761  6      [dis, pos, rep, fol, T men, ret]
9     2.4770  6      [dis, pos, rep, T men, ret, quo]
10    2.4802  6      [dis, pos, rep, U men, T men, ret]
From Table 3, it is clear that merely increasing the number of edge relationships does not always lead to better predictive outcomes. Certain edge types—such as {dis, pos, rep, ret}—are consistently prominent, appearing in most top-performing combinations for both the MuMiN-medium and MuMiN-small datasets. This suggests that it is not just the quantity, but the specific combination of edge types that determines performance. Based on the results in Table 3, we found that some edge types might have a greater influence on predicting popularity compared to others. Based on all combination results, we introduced the concept of the Shapley value to assess the importance of various edge types, which is written as:

$$\varphi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}\,\bigl[v(S \cup \{i\}) - v(S)\bigr] \qquad (1)$$
where i represents an edge type; N is the set of all edge types; v is a mapping from the power set of N to MSLEs. Given that a lower MSLE indicates a better prediction result, a smaller Shapley value suggests greater importance for that particular edge type.
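Given that all combinations were evaluated exhaustively, the Shapley values in Eq. (1) can be computed exactly; the sketch below assumes a lookup table `msle` mapping each evaluated edge-type subset (as a frozenset) to its MSLE.

```python
from itertools import combinations
from math import factorial

def shapley_values(edge_types, msle):
    """Exact Shapley values per Eq. (1), where `msle` maps a frozenset
    of edge types to the MSLE obtained with that combination."""
    n = len(edge_types)
    phi = {}
    for i in edge_types:
        others = [e for e in edge_types if e != i]
        total = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = frozenset(S)
                # Coalition weight |S|!(|N|-|S|-1)!/|N|! from Eq. (1).
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (msle[S | {i}] - msle[S])
        phi[i] = total
    return phi
```

Because a lower MSLE is better, a more negative Shapley value indicates a more important edge type, matching the reading of Fig. 3.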
Fig. 3. Shapley values for each edge type
From Fig. 3, we can observe that for the MuMiN dataset, the ‘pos’ edge plays a significant role in predicting the number of replies, as indicated by its lowest Shapley value. However, for the MuMiN-small dataset, the edges ‘ret’ and ‘rep’ are less significant, while they are relatively more important in the MuMiN-medium dataset. Additionally, when we examined the impact of these edges on predicting the number of retweets in MuMiN, the importance of the ‘rep’ relationship decreased, while relationships such as ‘T men’ and ‘quo’ became much more crucial. However, even when all of these edges are considered in the heterogeneous graph structure, the prediction results might still be unsatisfactory. This suggests that these relationships are intricate, and that selecting the optimal heterogeneous graph structure can greatly influence the outcomes.
6 Conclusions
In this work, we demonstrate the advantages of using heterogeneous over homogeneous social networks for analyzing information cascades, and examine their combination with temporal sequences. Through our study of established methods alongside the introduction of the HetSAGE and HetSeq models, we observed that models based on heterogeneous networks were clearly better at making accurate predictions when temporal sequences were not considered. However, while the heterogeneous information network offered enhanced performance, the introduction of sequence-based learning did not further augment outcomes. Moreover, our research emphasizes that prediction quality is closely tied to the choice of edge types in heterogeneous information networks, highlighting the importance of selecting edges for accurate cascade analysis and prediction.
References

1. Bikhchandani, S., Hirshleifer, D., Welch, I.: A theory of fads, fashion, custom, and cultural change as informational cascades. J. Polit. Econ. 100, 992–1026 (1992)
2. Subramani, M.R., Rajagopalan, B.: Knowledge-sharing and influence in online social networks via viral marketing. Commun. ACM 46(12), 300–307 (2003)
3. Wang, Y., Wang, X., Ran, Y., Michalski, R., Jia, T.: CasSeqGCN: combining network structure and temporal sequence to predict information cascades. Exp. Syst. Appl. 206(C) (2022). https://doi.org/10.1016/j.eswa.2022.117693
4. Wu, Q., Gao, Y., Gao, X., Weng, P., Chen, G.: Dual sequential prediction models linking sequential recommendation and information dissemination. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 447–457 (2019)
5. Wang, S., Zhou, L., Kong, B.: Information cascade prediction based on T-DeepHawkes model. IOP Conf. Ser. Mater. Sci. Eng. 715(1), 012042 (2020). https://doi.org/10.1088/1757-899X/715/1/012042
6. Yang, C., Sun, M., Liu, H., Han, S., Liu, Z., Luan, H.: Neural diffusion model for microscopic cascade study. IEEE Trans. Knowl. Data Eng. 33(3), 1128–1139 (2019)
7. Yang, C., Tang, J., Sun, M., Cui, G., Liu, Z.: Multi-scale information diffusion prediction with reinforced recurrent networks. In: IJCAI, pp. 4033–4039 (2019)
8. Cao, Q., Shen, H., Cen, K., Ouyang, W., Cheng, X.: DeepHawkes: bridging the gap between prediction and understanding of information cascades. In: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 1149–1158 (2017)
9. Jenders, M., Kasneci, G., Naumann, F.: Analyzing and predicting viral tweets. In: Proceedings of the 22nd International Conference on World Wide Web, pp. 657–664 (2013)
10. Kong, S., Mei, Q., Feng, L., Ye, F., Zhao, Z.: Predicting bursts and popularity of hashtags in real-time. In: Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2014, pp. 927–930. Association for Computing Machinery, New York (2014). https://doi.org/10.1145/2600428.2609476
11. Bo, H., McConville, R., Hong, J., Liu, W.: Social influence prediction with train and test time augmentation for graph neural networks. In: 2021 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2021)
12. Yuan, C., Li, J., Zhou, W., Lu, Y., Zhang, X., Hu, S.: DyHGCN: a dynamic heterogeneous graph convolutional network to learn users’ dynamic preferences for information diffusion prediction. arXiv preprint arXiv:2006.05169 (2020). https://api.semanticscholar.org/CorpusID:219559005
13. Nielsen, D.S., McConville, R.: MuMiN: a large-scale multilingual multimodal fact-checked misinformation social network dataset. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 3141–3153 (2022)
14. Zhou, F., Xu, X., Trajcevski, G., Zhang, K.: A survey of information cascade analysis: models, predictions, and recent advances. ACM Comput. Surv. 54(2), 1–36 (2021). https://doi.org/10.1145/3433000
15. Petrovic, S., Osborne, M., Lavrenko, V.: RT to win! Predicting message propagation in Twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 5, no. 1, pp. 586–589 (2011)
16. Kamath, K.Y., Caverlee, J.: Spatio-temporal meme prediction: learning what hashtags will be popular where. In: Proceedings of the 22nd ACM International Conference on Information & Knowledge Management (2013). https://api.semanticscholar.org/CorpusID:2062983
17. Wang, D., Song, C., Barabási, A.-L.: Quantifying long-term scientific impact. Science 342(6154), 127–132 (2013)
18. Hassan Zadeh, A., Sharda, R.: Modeling brand post popularity dynamics in online social networks. Decis. Support Syst. 65, 59–68 (2014). https://www.sciencedirect.com/science/article/pii/S0167923614001432
19. Kobayashi, R., Lambiotte, R.: TiDeH: time-dependent Hawkes process for predicting retweet dynamics. In: Tenth International AAAI Conference on Web and Social Media (2016)
20. Li, C., Ma, J., Guo, X., Mei, Q.: DeepCas: an end-to-end predictor of information cascades. In: Proceedings of the 26th International Conference on World Wide Web, WWW 2017, pp. 577–586. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE (2017). https://doi.org/10.1145/3038912.3052643
21. Perozzi, B., Al-Rfou, R., Skiena, S.: DeepWalk: online learning of social representations. In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 701–710 (2014)
22. Chen, X., Zhou, F., Zhang, K., Trajcevski, G., Zhong, T., Zhang, F.: Information diffusion prediction via recurrent cascades convolution. In: 2019 IEEE 35th International Conference on Data Engineering (ICDE), pp. 770–781 (2019)
23. Wu, Y., Huang, H., Jin, H.: Information diffusion prediction with personalized graph neural networks. In: Li, G., Shen, H.T., Yuan, Y., Wang, X., Liu, H., Zhao, X. (eds.) KSEM 2020, Part II. LNCS (LNAI), vol. 12275, pp. 376–387. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-55393-7_34
24. Chen, X., Zhang, F., Zhou, F., Bonsangue, M.: Multi-scale graph capsule with influence attention for information cascades prediction. Int. J. Intell. Syst. 37(3), 2584–2611 (2022)
25. Cao, Q., Shen, H., Gao, J., Wei, B., Cheng, X.: Popularity prediction on social platforms with coupled graph neural networks. In: Proceedings of the 13th International Conference on Web Search and Data Mining (2019). https://api.semanticscholar.org/CorpusID:208309901
26. Huang, Z., Wang, Z., Zhang, R.: Cascade2vec: learning dynamic cascade representation by recurrent graph neural networks. IEEE Access 7, 144800–144812 (2019)
27. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016)
28. Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., Bengio, Y.: Graph attention networks. arXiv preprint arXiv:1710.10903 (2017)
29. Hamilton, W., Ying, Z., Leskovec, J.: Inductive representation learning on large graphs. In: Advances in Neural Information Processing Systems, vol. 30 (2017)
30. Bozinovski, S.: Reminder of the first paper on transfer learning in neural networks, 1976. Informatica 44(3) (2020)
A Tale of Two Cities: Information Diffusion During Environmental Crises in Flint, Michigan and East Palestine, Ohio

Nicholas Rabb(B), Catherine Knox, Nitya Nadgir, and Shafiqul Islam

Tufts University, Medford, MA 02144, USA
[email protected]

Abstract. While information about some environmental crises rapidly spreads following the initial event, other crises, such as the Flint Water Crisis (FWC), take years to garner national attention. Understanding the spread of information from local to national scales is important, as this spread is often necessary to receive recognition and resources. One way to understand these dynamics is to use information diffusion models, which have been used to investigate information spread during crises. Though there have been several studies on why the FWC took so long to reach national attention, such modeling techniques have not been used to gain additional insights into factors that may contribute to this delay. To address this gap, our study uses an independent cascade diffusion model to examine the information spreading dynamics of two environmental crises: the FWC and the train derailment in East Palestine, Ohio. Our results demonstrate how standard independent cascade models, despite adequately capturing a fast-spreading crisis, may not be sufficient to explain the dynamics of a delayed diffusion. Flint's dynamics were only captured by manipulating the contagiousness parameter throughout the simulation, implying that social factors hindered its ability to fit the diffusion paradigm of unimpeded spread.

Keywords: Information diffusion · environmental crisis · agent-based modeling

1 Context and Motivation
Addressing an environmental crisis before it becomes a societal disaster is often highly complex due, in part, to uncertainty and the diversity of stakeholders involved. In such situations, reliable information must be shared in the face of hurdles (suppression, conflicting messages, and denial from officials), requiring an understanding of the barriers to information spread. While quantitative studies of information diffusion often analyze the aftermath of crises, there has yet to be a detailed understanding of the dynamics of information spread during these events. For example, why did it take over two years for the Flint Water Crisis to get national attention while the train derailment in East Palestine grabbed national attention in less than two weeks?
Our paper addresses this disparity by analyzing two cases with different information diffusion dynamics: the Flint Water Crisis (FWC) and the East Palestine (EP), Ohio train derailment. The FWC exposed many residents of the city to elevated levels of lead and other contaminants, leading to a public health crisis and an outbreak of Legionnaires' disease that killed 12 individuals [5]. The crisis took two years to reach national attention and be addressed. By comparison, the train derailment in East Palestine reached peak national attention in two weeks. That crisis resulted in chemicals spilling into local waterways and spreading through the air during an in-situ burn of vinyl chloride [16]. While the FWC has been studied from different perspectives, including legal [10], water quality [15], and social justice [2,9,13], less attention has been paid to the dynamics underlying its delayed information spread. We seek to model the spread of information about Flint and East Palestine using an independent cascade model, to understand the mechanisms behind such environmental crises. While we were able to identify multiple parameterizations that closely reproduced the East Palestine data, we were unable to find a reliable static parameterization of the model whose outputs corresponded to the spread of the FWC. However, we found that one way to accurately mimic the spread of information in the FWC is to increase the probability of message spread dynamically at the time when the city of Flint declared a state of emergency, which is not a common procedure in the static modeling paradigm used in most diffusion modeling research [20,23,24,26]. Our results show that while certain crises can be accurately modeled with a static set of parameters in a diffusion model, crises that undergo long delays or drastic changes in the conditions of spread may require additional mechanisms that correspond to the social processes involved in the crisis, for example, mechanisms of suppression. Although information diffusion events and their models have been inspired by the spread of diseases, our results suggest that underlying assumptions related to disease spread are not directly applicable to models of environmental crises. Instead, these models must adjust to incorporate the complexity caused by human behavior, rather than the simplified behavior of particles or disease.
2 The Informational Nature of Environmental Crises

2.1 Environmental Crises in Flint, MI and East Palestine, OH
The FWC, one of the most well-known environmental justice cases in the U.S., exposed residents to lead-contaminated drinking water for nearly two years. In April 2014, the city of Flint was put under emergency management to cut costs, reducing citizens’ democratic control of their city. The emergency manager switched Flint’s water source without fully examining the consequences of the switch [10], and soon, Flint residents started noticing strong odors and discoloration of the water, as well as illnesses and mysterious rashes. For nearly two years, residents organized to collect data, attend town hall meetings, and provide resources to their struggling neighbors, while local government officials
attempted to suppress public concerns about the emerging water crisis [13,22]. Eventually, local organizers joined forces with Marc Edwards' lab at Virginia Tech and Dr. Mona Hanna-Attisha, a pediatrician [7,19], to bolster their credibility with scientific results: many residents in Flint had experienced elevated lead levels in their blood for nearly two years, risking serious neurological damage [5,19,22]. These findings, in conjunction with the declaration of a state of emergency in Flint, catapulted Flint into the eyes of the mass media, resulting in a rapid uptick of reporting on the crisis (Fig. 1).
Fig. 1. A comparison of media coverage (data collected with Media Cloud) of each of the selected crises overlaid with Google Trends data.
In contrast to the lengthy journey in Flint, the February 2023 train derailment and subsequent environmental crisis in East Palestine, Ohio was characterized by a rapid spread of information. In East Palestine, a Norfolk Southern train carrying chemicals derailed, resulting in an initial fire and a spill of car contents. An evacuation order was in place by the 5th day following the derailment, and on the 6th day, authorities intentionally burned off the vinyl chloride to reduce the chances of an explosion. Images of massive smoke clouds were captured and spread across both traditional and social media. Residents experienced illnesses related to chemical exposure and were dissatisfied with the scale of testing for chemical residue. Public interest peaked on February 14 with the first uptick in media attention, approximately two weeks after the crisis began. Figure 1 illustrates the discrepancy between the two crises using Google Trends search data and the number of news articles published on each event, provided by Media Cloud.

2.2 The Complex Nature of Information Diffusion
The events of both the FWC and the derailment in East Palestine show how environmental crises are complex manifestations of sociotechnical systems; crumbling infrastructure may initiate a crisis, yet sociopolitical forces will govern how information is disseminated. In the case of East Palestine, information spread quickly and steps were taken to address the disaster, while Flint's crisis was suppressed for years [9,21] until residents' actions thrust the story into the national spotlight [3,6,11,17]. Understanding the dynamics of information diffusion is crucial to
improving crisis mitigation interventions and strategies, and in cases like Flint, understanding how to secure outside aid. One method to do so is to use information diffusion models, which have been used to learn about the spread of ideas (for a comprehensive review, see [12] and [25]), innovations or products, and social group behaviors. Some information diffusion studies have also modeled information spread during crises, including train accidents [23], local assaults [20], hurricanes [24], and COVID-19 [26]. Due to the epidemiological roots of information diffusion models [4], many of these crisis diffusion models are simple, using either differential equations [23,26] or discrete state models [8,24] to track spread in a manner akin to an epidemiological Susceptible Infected Recovered (SIR) model. Regardless of implementation, many such models treat information spreading as independent cascades [12], where the spread of information from one individual to others is chiefly governed by a static spreading probability p.
3 Research Methods
To examine the dynamics behind the spread of information during the FWC and the train derailment in East Palestine, we used an independent cascade model to simulate information spread from a small, local community to a larger population through a social network. Using a randomly generated network structure of media and citizens in the U.S., we investigate how simple probabilistic rules for information diffusion can imitate the dynamics of these crises. Our model's outputs were compared with Google Trends data for both crises to find which model parameters best reproduced the real crises. Below, we briefly describe our model, observed data, and analysis techniques; the full model code, search data, and replication instructions are available publicly at https://github.com/ricknabb/flint-media-model. We use weekly Google Trends data as a surrogate for weekly information spread for both the FWC and the train derailment in East Palestine. The observed data provides a scaled number of Google searches for a topic, between 0 and 100, on a weekly basis. We used "flint water" as the search term for Google Trends data corresponding to the FWC, and began our timeline when the water source was changed. For the East Palestine case, we used the term "East Palestine" as the Google search term, as the crisis was not limited to drinking water contamination.
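As a sketch of how such a weekly surrogate series can be pulled programmatically, the snippet below uses pytrends, an unofficial Google Trends client; the search terms match the paper, but the timeframes and the use of this particular library are our assumptions, not part of the authors' pipeline.

```python
# Hedged sketch: pytrends is an unofficial Google Trends client; the timeframes
# are illustrative (the FWC timeline starts at the April 2014 source switch).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")

pytrends.build_payload(["flint water"], timeframe="2014-04-01 2016-06-30")
flint = pytrends.interest_over_time()["flint water"]  # values scaled 0-100

pytrends.build_payload(["East Palestine"], timeframe="2023-01-01 2023-06-30")
east_palestine = pytrends.interest_over_time()["East Palestine"]
```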
3.1 Information Diffusion Model
Base Independent Cascade Model. To simulate the spread of information about environmental crises, we used an independent cascade agent-based model, commonly used for information diffusion modeling [25]. Below we also describe a modified cascade model with extra parameters governing spreading between different types of agents. As is common with these models, they are parameterized by certain variables, which can be changed to alter the way information
spreads. Here we describe each model, its key parameters, and which parameter combinations we tested to yield different spreading patterns.
Fig. 2. Methods of information diffusion in the base independent cascade model (a) and the heterogeneous model (b).
The model features a graph G = (V, E) of nodes (hereafter referred to as agents) connected by edges as in a social network. Graphs G are generated randomly by the Barabási-Albert preferential attachment process [1], which generates a network with properties similar to human social networks and is parameterized by m [14]. We model two types of agents in the graph: citizen agents and media agents. As the information diffusion progresses, agents with state b = 1 (representing that the agent has knowledge of the crisis) have the chance to change their b = 0 (no knowledge of the crisis) neighbors in the network to b = 1, with probability p, which can be considered the "contagiousness" of the story. Parameter combinations tested are shown in Table 1. On each random graph, well-connected groups of citizen nodes are identified using the Louvain community detection algorithm. One small group of citizen agents is chosen to be the affected community, beginning with b = 1. All other agents begin with b = 0. Each identified community is also connected to a media agent. Since community size varied, small communities could be considered to have connected to "local" media, and larger communities to "national" media. A simplified visualization of this network structure and the probability of passing messages in the base model is shown in Fig. 2(a).
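To make the setup concrete, here is a minimal Python sketch of the network construction and one spreading step, using networkx; the authors' actual implementation is the one linked above, so the function names and the choice of the smallest community as the affected one are illustrative assumptions.

```python
import random
import networkx as nx

def build_network(n_citizens=300, m=3, seed=0):
    # citizen graph via Barabasi-Albert preferential attachment
    G = nx.barabasi_albert_graph(n_citizens, m, seed=seed)
    nx.set_node_attributes(G, "citizen", "kind")
    # well-connected citizen groups via Louvain community detection
    communities = nx.community.louvain_communities(G, seed=seed)
    for i, comm in enumerate(communities):
        media = f"media_{i}"                    # one media agent per community
        G.add_node(media, kind="media", b=0)
        G.add_edges_from((media, v) for v in comm)
    affected = min(communities, key=len)        # a small group seeds the story
    for v in G:
        if G.nodes[v].get("b") is None:
            G.nodes[v]["b"] = 1 if v in affected else 0
    return G

def step(G, p, rng=random):
    # each agent with b = 1 tries to inform each b = 0 neighbor with probability p
    informed = {v for v in G if G.nodes[v]["b"] == 1}
    new = {v for u in informed for v in G.neighbors(u)
           if G.nodes[v]["b"] == 0 and rng.random() < p}
    for v in new:
        G.nodes[v]["b"] = 1
    return len(new)                             # new beliefs this time step
```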
provide a closer replication of the FWC. All other model mechanisms are the same as in the base model above. To model different types of influence, we created parameters p_cc (0 ≤ p_cc ≤ 1), p_cm (0 ≤ p_cm ≤ 1), and p_mc (0 ≤ p_mc ≤ 1) to represent probability modifiers of the spreading event occurring between different agent types (citizen-to-citizen, citizen-to-media, and media-to-citizen, respectively), with tested parameter values shown in Table 1. These parameters can roughly represent the believability or trustworthiness between fellow citizens, and between media and citizens.

Table 1. The models were simulated on 5 different random graphs with 30 simulations per graph and parameter combination.

Parameter | Meaning | Base model | Het. model
p | The probability of one agent spreading information to another | 0.01, 0.05, 0.1, 0.25, 0.5, 0.75 | 0.01, 0.05, 0.1, 0.25, 0.5, 0.75
m | The number of connections each agent makes upon being added to the random graph | 1, 2, 3, 5, 10 | 3, 10
p_cc | A multiplicative probability modifier on p when citizens are spreading to citizens | N/A | 0.01, 0.05, 0.1, 0.25, 0.5, 0.75
p_cm | A multiplicative probability modifier on p when citizens are spreading to media | N/A | 0.01, 0.05, 0.1, 0.25, 0.5, 0.75
p_mc | A multiplicative probability modifier on p when media are spreading to citizens | N/A | 1
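A small sketch of how the Table 1 modifiers plug into the spreading rule follows; it builds on the `step` function sketched earlier, and the media-to-media fallback is our assumption, since Table 1 only defines the three pair types.

```python
def spread_probability(G, u, v, p, p_cc, p_cm, p_mc=1.0):
    """Effective probability that agent u informs agent v (heterogeneous model)."""
    ku, kv = G.nodes[u]["kind"], G.nodes[v]["kind"]
    if ku == "citizen" and kv == "citizen":
        return p * p_cc    # citizen-to-citizen
    if ku == "citizen" and kv == "media":
        return p * p_cm    # citizen-to-media
    if ku == "media" and kv == "citizen":
        return p * p_mc    # media-to-citizen (fixed at 1 in Table 1)
    return p               # media-to-media: left unmodified (our assumption)
```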
3.2 Simulating Information Spread to Mimic Observed Crises
While the two models differ in the ways detailed above, both shared several experimental procedures. To test each model, we varied initial parameters and ran the information spread process on a graph of 300 citizen agents for 114 time steps (the length of the FWC data). Thus, in our model, each time step represents a week. For each unique parameter combination, 5 different random graphs were tested. Each graph was generated according to the process described above, including media connections to detected communities, and setting the affected community as one whose size roughly equals 0.5% of the number of citizen agents, so as to capture a crisis emerging from a small community. Each random graph and parameter combination was simulated 30 times. Because the information spread process is stochastic, running several random graphs each for 30 trials allowed our results to tend towards the most prevalent behavior. Our model records the number of new beliefs (agents who went from b = 0 to b = 1) at each time step, giving a comparable measure of information over time; this series was normalized with the same methodology Google Trends uses, allowing for direct overlay of simulation output with Google Trends data. This then revealed which model parameter settings best recreated the known information spreading pattern from the two crises.
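The normalization step can be sketched as follows: like Google Trends, the weekly series of new beliefs is rescaled so that its peak equals 100 (our reading of the methodology; the authors' exact code is in their repository).

```python
import numpy as np

def trends_normalize(new_beliefs_per_week):
    """Rescale a weekly series so its maximum is 100, as Google Trends does."""
    x = np.asarray(new_beliefs_per_week, dtype=float)
    peak = x.max()
    return np.zeros_like(x) if peak == 0 else 100.0 * x / peak
```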
Using these diffusion models to recreate the patterns of information spread from the FWC and the East Palestine train derailment, we can analyze the conditions that may have been underlying each crisis: the social network structure, the contagiousness of the story, and the relative willingness to spread information between citizens and media. The thousands of simulation results were analyzed and compiled into several key takeaways that shed light on the nature of each crisis and hint at why Flint and East Palestine would have such distinct information spread patterns.
4 Results and Findings

4.1 Patterns Simulate East Palestine Better Than Flint
The results from the base model and the heterogeneous model revealed that while we were successful at capturing behavior that aligned with the information spread in East Palestine, we found poor alignment with the Google Trends data for the FWC, with results that rarely captured similar diffusion patterns. Additionally, for the parameter combinations that produced any simulations closely aligned with the Flint data, only a handful matched out of the hundreds of simulations (see Fig. 3). Each simulation for a given parameter combination is stochastic, so results can vary significantly from simulation to simulation, even with the same parameters on a single graph structure.

Table 2. Five sets of parameters were selected for their ability to create results similar to observations. 1000 simulations were run on one graph initialized to these parameters and analyzed for the ability to capture the time of maximum diffusion.

Case | Experiment name | p | m | p_cm | p_cc | Percent match
Flint | MC1 | 0.75 | 10 | 0.01 | 0.75 | 2.6
Flint | MC2 | 0.75 | 10 | 0.01 | 0.5 | 0.4
Flint | MC3 | 0.75 | 10 | 0.01 | 0.25 | 0.4
Flint | MC4 | 0.5 | 10 | 0.1 | 0.25 | 0
Flint | MC5 | 0.5 | 10 | 0.01 | 0.5 | 0.7
East Palestine | MC1 | 0.75 | 10 | 0.75 | 0.75 | 100
East Palestine | MC2 | 0.75 | 10 | 0.5 | 0.75 | 99.5
East Palestine | MC3 | 0.75 | 10 | 0.75 | 0.5 | 99.9
East Palestine | MC4 | 0.75 | 10 | 0.75 | 0.25 | 98.6
East Palestine | MC5 | 0.5 | 10 | 0.75 | 0.75 | 97.7
While our information diffusion model is most often able to capture the dynamics of the East Palestine crisis, some rare parameter combinations did yield simulations that matched the Flint data. To explore whether this was a signal or a noisy consequence of the stochastic spread process, we conducted
follow-up analysis for both the Flint and East Palestine cases. For each crisis, the five heterogeneous model parameter combinations that resulted in the closest alignment to the timing of the peak of the Google Trends data were analyzed further via Monte Carlo simulation. Each Monte Carlo analysis included 1000 simulations on one randomly generated graph structure with the same initial affected community, with no variations in parameters, so that results could be used to compare variation caused only by the stochastic nature of the information spread. For each analysis, in Table 2, we report the number of times the simulated results match the time of peak spread from the empirical data. A match is counted if the particular Monte Carlo trial's peak is within five time steps of the empirical peak time. Each Monte Carlo trial listed in Table 2 is visualized in Fig. 3 below. These results indicate that we reached a nearly 100% success rate for the parameter combinations selected for their ability to capture the dynamics of the East Palestine crisis, while we were unsuccessful at capturing the FWC, matching the Flint data in at most 2.6% of trials.
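The match criterion can be expressed compactly; a sketch under the stated definition (peak within five time steps of the empirical peak) is below, with names of our own choosing.

```python
import numpy as np

def percent_match(simulated_runs, empirical_series, tolerance=5):
    """Share of Monte Carlo runs whose peak week falls within `tolerance`
    time steps of the empirical (Google Trends) peak week."""
    t_emp = int(np.argmax(empirical_series))
    hits = sum(abs(int(np.argmax(run)) - t_emp) <= tolerance
               for run in simulated_runs)
    return 100.0 * hits / len(simulated_runs)
```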
Fig. 3. Visualizations of the five Monte Carlo experiments per crisis. Parameters for each experiment can be found by matching the experiment name to Table 2.
4.2 Reproduction of Dynamics in the Case of Flint
The results from simulations thus far reveal that while our independent cascade model is sufficient to mimic the behavior of information spread of a crisis where the majority of the information spread occurs rapidly after the crisis occurs, like East Palestine, the model is unsuccessful at replicating a crisis where there is a significant time delay before the information is spread. However, these experiments do not consider the possibility of changing spreading parameters over time, rather assuming that information “contagiousness” is static. To test a variation where information contagiousness changes, we ran simulations with
manual manipulation of the contagion variables (p, p_cc, and p_cm) at a time step that matched real events in the FWC timeline. We ran two additional Monte Carlo simulations with manipulations: the contagion variables started at p = 0.05, p_cc = 0.01, p_cm = 0.01, the simulation ran for 87 time steps, and then at t = 88 (corresponding to the declaration of a state of emergency in Flint), we augmented the contagion variables to p = 0.75, p_cc = 0.75, p_cm = 0.75 and ran the simulation for the remaining 26 time steps. The state of emergency contributed to national mainstream media outlets picking up the story and lending it more legitimacy. With this manual adjustment of the information spread parameters, we found that simulation results matched the Flint peak time with little variance, with 99.9% of the simulations reaching the maximum number of new agents within 5 time steps of the Google Trends data. Unlike the previous Monte Carlo simulations, we ran 1000 simulations on a single graph structure and then ran one simulation on each of 1000 different randomly generated graph structures, allowing us to explore whether the behavior depends on the initial graph structure and the selection of the agents representing the Flint community, or whether it is stable regardless. Results from this analysis are displayed in Fig. 4, illustrating that both methods were highly successful at imitating the spreading dynamics in Flint. Results from this simple manipulation indicate that an independent cascade model can, in fact, capture the dynamics of a late peak, but requires dynamic changes in contagion parameters. While it may seem an obvious or even facile intervention, this departure from traditional static diffusion modeling is likely to have significant practical implications for our understanding of information spread during environmental crises and for subsequent interventions, as discussed further below.
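The dynamic manipulation amounts to a step function over the contagion parameters; a sketch follows, with the switch point and values taken from the text (how the parameters enter the spreading rule follows the sketches in Sect. 3.1).

```python
def contagion_at(t, t_emergency=88):
    """Contagion parameters before/after Flint's state-of-emergency week
    (values from the text; names are ours)."""
    if t < t_emergency:
        return {"p": 0.05, "p_cc": 0.01, "p_cm": 0.01}
    return {"p": 0.75, "p_cc": 0.75, "p_cm": 0.75}

# Inside the 114-step weekly loop, the parameters are looked up per step:
# for t in range(1, 115):
#     params = contagion_at(t)
#     ...apply spread_probability(...) using params...
```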
Fig. 4. Simulations of runs with the manual adjustment to better imitate information diffusion in the case of Flint on a single graph structure and with random graph generation.
5 Discussion
Our model's difficulty recreating Flint's dynamics of delayed, yet widespread, diffusion of information reveals challenges to the typical information diffusion paradigm. Whereas the models with a static "contagiousness" parameter easily reproduced the information diffusion from the EP train derailment, Flint was only reproducible with a dynamic contagiousness parameter. Typical information diffusion modeling imagines an uninhibited flow of information through a population. In the case of EP, this seems like a plausible assumption. In Flint's case, information did not spread in this manner; conditions that allowed for uninhibited spread were not present. Unlike the diseases that inspired information diffusion modeling, information spread is influenced dynamically by sociopolitical factors and conditions, making static contagiousness a poor assumption. Our failure to capture the dynamics of Flint indicates that we may need to alter our methods or tools when the conditions that allow uninhibited spread are not present. A number of characteristics of the FWC reflect the need for additional mechanisms or dynamic parameterizations. Flint stands as a case where the inherent contagiousness of the event was suppressed until certain conditions changed due to the ongoing actions of local residents to bring attention to the crisis: first the involvement of Dr. Marc Edwards and Dr. Mona Hanna-Attisha [7,18,19], but more significantly the declaration of a state of emergency in Flint [6,9,13]. These events led to national media coverage, whereas previously only local news had covered the crisis. The actions taken by residents, as well as the suppression that they overcame, are part of the dynamics of information spread, even if not directly modeled by independent cascade diffusion models. While our model is limited by the assumptions and simplifications necessary to build a simple model of a complex phenomenon, adding slightly more complicated model mechanics may lead to better alignment with crises like the FWC. Models with antagonistic forces lowering the contagiousness of information could capture the Flint community's suppression by local government officials, who often denied and dismissed the problem [2,9,13]. Varying the contagiousness parameter based on who has spread a message (e.g., if a higher-degree node spreads a message, it is viewed as more legitimate or worth spreading), as was the case when Marc Edwards and Dr. Mona Hanna-Attisha legitimized the claims of Flint residents [7,18,19], could also be a useful advance in these models. Similar models may be more successful at capturing delayed spread phenomena by adding such mechanisms affecting information diffusion, grounded in the known social phenomena or interactions that occurred throughout the lifetime of a crisis. Future studies should direct efforts to identify which of these mechanisms may be successful and whether they depend on the characteristics of the community selected to represent the crisis location. Additional cases should also be considered, as they may reveal other necessary mechanisms to account for differing patterns of information diffusion.
6 Conclusion
As the frequency of water crises is likely to increase across the U.S., we must understand how to garner national attention so communities can receive federal resources to mitigate disaster. We used an independent cascade model to replicate information diffusion across two crises characterized by differences in the time between crisis onset and reaching national attention. Our results show that while the model was able to simulate patterns of information spread similar to the train derailment in East Palestine, we did not have success capturing the dynamics of the Flint Water Crisis (FWC). The FWC model required manual intervention to increase the probability of message spread at a key point during the simulation that matched Flint's declaration of emergency. These findings suggest that independent cascade models with static spreading probabilities are limited and that additional model mechanisms are needed to successfully model certain diffusion events. They also suggest that suppression played a role in the FWC, prohibiting the free flow of information that may be present in crises that resolve quickly, and that residents' active efforts to break that suppression were crucial for garnering national attention. Acknowledgements. The authors thank NSF-NRT 2021874 for support.
References
1. Barabási, A.-L., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509–512 (1999)
2. Butler, L.J., Scammell, M.K., Benson, E.B.: The Flint, Michigan, water crisis: a case study in regulatory failure and environmental injustice. Environ. Justice 9(4), 93–97 (2016)
3. Carey, M.C., Lichtenwalter, J.: "Flint can't get in the hearing": the language of urban pathology in coverage of an American public health crisis. J. Commun. Inq. 44(1), 26–47 (2020)
4. Centola, D., Eguíluz, V.M., Macy, M.W.: Cascade dynamics of complex propagation. Physica A Stat. Mech. Appl. 374(1), 449–456 (2007)
5. Denchak, M.: Flint water crisis: everything you need to know. Technical report, November 2018
6. Matsa, K.E., Mitchell, A., Stocking, G.: Searching for news: the Flint water crisis. Technical report (2017)
7. Hanna-Attisha, M., LaChance, J., Sadler, R.C., Schnepp, A.C.: Elevated blood lead levels in children associated with the Flint drinking water crisis: a spatial analysis of risk and public health response. Am. J. Pub. Health 106(2), 283–290 (2016)
8. Hui, C., Tyshchuk, Y., Wallace, W.A., Magdon-Ismail, M., Goldberg, M.: Information cascades in social media in response to a crisis: a preliminary model and a case study. In: Proceedings of the 21st International Conference on World Wide Web, pp. 653–656 (2012)
9. Jackson, D.Z.: Environmental justice? Unjust coverage of the Flint water crisis. Shorenstein Center on Media, Politics and Public Policy (2017)
10. Jacobson, P.D., Boufides, C.H., Chrysler, D., Bernstein, J., Citrin, T.: The role of the legal system in the Flint water crisis. Milbank Q. 98(2), 554–580 (2020)
11. Jahng, M.R., Lee, N.: When scientists tweet for social changes: dialogic communication and collective mobilization strategies by Flint water study scientists on Twitter. Sci. Commun. 40(1), 89–108 (2018)
12. Kiesling, E., Günther, M., Stummer, C., Wakolbinger, L.M.: Agent-based simulation of innovation diffusion: a review. CEJOR 20(2), 183–230 (2012)
13. Krings, A., Kornberg, D., Lane, E.: Organizing under austerity: how residents' concerns became the Flint water crisis. Crit. Sociol. 45(4–5), 583–597 (2019)
14. Leskovec, J., Faloutsos, C.: Scalable modeling of real graphs using Kronecker multiplication. In: Proceedings of the 24th International Conference on Machine Learning, pp. 497–504 (2007)
15. Martin, R.L., Strom, O.R., Pruden, A., Edwards, M.A.: Interactive effects of copper pipe, stagnation, corrosion control, and disinfectant residual influenced reduction of Legionella pneumophila during simulations of the Flint water crisis. Pathogens 9(9), 730 (2020)
16. Messenger, N.: An overview of the Norfolk Southern train derailment and hazardous chemical spill in East Palestine, Ohio, February 2023
17. Moors, M.R.: What is Flint? Place, storytelling, and social media narrative reclamation during the Flint water crisis. Inf. Commun. Soc. 22(6), 808–822 (2019)
18. Pieper, K.J., et al.: Evaluating water lead levels during the Flint water crisis. Environ. Sci. Technol. 52(15), 8124–8132 (2018)
19. Pieper, K.J., Tang, M., Edwards, M.A.: Flint water crisis caused by interrupted corrosion control: investigating "ground zero" home. Environ. Sci. Technol. 51(4), 2007–2014 (2017)
20. Sung, M., Hwang, J.-S.: Who drives a crisis? The diffusion of an issue through social networks. Comput. Hum. Behav. 36, 246–257 (2014)
21. Takahashi, B., Adams, E.A., Nissen, J.: The Flint water crisis: local reporting, community attachment, and environmental justice. Local Environ. 25(5), 365–380 (2020)
22. Toral, M.: High lead levels in Flint, Michigan - interim report. Memorandum, Environmental Protection Agency, June 2015
23. Wei, J., Bing, B., Liang, L.: Estimating the diffusion models of crisis information in micro blog. J. Informet. 6(4), 600–610 (2012)
24. Yoo, E., Rand, W., Eftekhar, M., Rabinovich, E.: Evaluating information diffusion speed and its determinants in social media networks during humanitarian crises. J. Oper. Manag. 45, 123–133 (2016)
25. Zhang, H., Vorobeychik, Y.: Empirically grounded agent-based models of innovation diffusion: a critical review. Artif. Intell. Rev. 52(1), 707–741 (2019)
26. Zhang, M., Qin, S., Zhu, X.: Information diffusion under public crisis in BA scale-free network based on SEIR model—taking COVID-19 as an example. Phys. A 571, 125848 (2021)
Multilingual Hate Speech Detection Using Semi-supervised Generative Adversarial Network

Khouloud Mnassri(B), Reza Farahbakhsh, and Noel Crespi

Samovar, Telecom SudParis, Institut Polytechnique de Paris, 91120 Palaiseau, France
{khouloud.mnassri,reza.farahbakhsh,noel.crespi}@telecom-sudparis.eu
Abstract. Online communication has overcome linguistic and cultural barriers, enabling global connection through social media platforms. However, linguistic variety has introduced more challenges in tasks such as the detection of hate speech content. Although multiple NLP solutions have been proposed using advanced machine learning techniques, data annotation scarcity remains a serious problem, urging the need for semi-supervised approaches. This paper proposes an innovative solution: a multilingual semi-supervised model based on Generative Adversarial Networks (GAN) and mBERT, namely SS-GAN-mBERT. We managed to detect hate speech in Indo-European languages (English, German, and Hindi) using only 20% labeled data from the HASOC2019 dataset. Our approach excelled in multilingual, zero-shot cross-lingual, and monolingual paradigms, achieving, on average, a 9.23% F1 score boost and a 5.75% accuracy increase over the baseline mBERT model.

Keywords: Hate Speech · offensive language · semi-supervised · GAN · mBERT · multilingual · social media

1 Introduction
Social media platforms like Twitter and Facebook have been growing in popularity in recent years as means of communication and connection. Unfortunately, along with this expansion, an increasing concern has emerged: many people report encountering hate speech and offensive content on these platforms [1]. In fact, due to the anonymity provided by these tools, users feel freer to express themselves, sometimes engaging in hateful behavior [2]. In addition, offensive content is no longer restricted to human authorship; Generative AI and Large Language Models (LLMs) can also generate it, which further emphasizes the need for robust content moderation. Moreover, due to the enormous volume of multilingual content spread online, it has become more difficult to regulate it manually. However, there have been several initiatives to automate the detection of hateful and offensive content in multilingual settings, which remains a challenging task [3]. Indeed, most of the existing machine learning solutions (monolingual and
multilingual) have used supervised learning approaches [3], where transfer learning techniques based on pre-trained Large Language Models (LLMs) have proven to give outstanding results. In fact, Transformer-based architectures, such as BERT (Devlin et al., 2019 [4]), have been demonstrated to achieve state-of-the-art performance in a variety of hate speech detection tasks. As a result, a large number of BERT-based approaches have been presented in this field [5–8]. Moreover, multilingual transformers, particularly mBERT (multilingual BERT), have been applied in the multilingual domain. This model has provided cutting-edge performance in cross-lingual and multilingual settings, and several studies demonstrate its usefulness in many languages, especially low-resource ones [9]. While these approaches have made remarkable advances, they still have difficulties obtaining enough annotated data, a problem that is further complicated in multilingual hate speech detection tasks. More specifically, acquiring such high-quality labeled corpora is expensive and time-consuming [10]. Adding to that, robust multilingual models often depend on enormous linguistic resources, which are mostly available in English (as a rich-resource language). As a result, these models encounter generalization issues that yield decreased performance when used with low-resource languages [11]. As a solution to these deficiencies, semi-supervised (SS) learning was introduced in order to decrease the need for labeled data. It enables building generalizable, efficient models from unlabeled corpora using only small sets of annotated samples. Thus, SS-learning has been widely used in NLP for hate speech detection tasks [12,13]. One of these SS techniques is the Generative Adversarial Network (GAN) [14], which is based on an adversarial process where a "discriminator" learns to distinguish between real instances and instances produced by a "generator" that mimics the data distribution. An extension of GANs is the semi-supervised SS-GAN, where the "discriminator" also allocates a class to each data sample [15]. It has become a remarkable solution for semi-supervised hate speech detection, widely used in combination with pre-trained language models, as in SS-GAN-BERT [16] (for a non-English language). In this paper we propose a semi-supervised generative adversarial framework in which we incorporate mBERT for multilingual hate speech and offensive language detection; we hereby refer to the introduced model as SS-GAN-mBERT. This procedure leverages mBERT's ability to generate high-quality text representations and to adapt to unlabeled data, contributing to enhancing the GAN's generalization for hate speech detection in multiple languages. Even though GAN-BERT has been utilized for different non-English languages in NLP, the semi-supervised GAN-mBERT approach remains underexplored, especially in multilingual hate speech detection. Therefore, this study aims to fill this gap by proposing the SS-GAN-mBERT model for hate speech and offensive language detection across English, German, and Hindi. The key contributions are as follows:
– We proposed an SS-GAN-mBERT model in multilingual and cross-lingual settings and compared it with a baseline semi-supervised mBERT, evaluating the impact of adopting GAN on improving pre-trained models' performance.
– Training across three scenarios: multilingual, cross-lingual (zero-shot learning), and monolingual, in order to examine linguistic feature sharing among Indo-European languages and demonstrate its crucial role in enhancing text classification tasks.
– Exploration of SS-GAN's progressive influence on performance through iterative increases of labeled data in a multilingual scenario.
2 Literature Survey

2.1 GAN for Hate Speech Detection
In order to address the challenge of label imbalance in hateful tweets, Cao et al. [17] (2020) presented HateGAN, a deep generative reinforcement learning network. Inspired by Yu et al. (2017) [18] (SeqGAN), their reinforcement learning-based component encourages the generator to produce more hateful samples in English by introducing a reward policy gradient to direct its generation function. Their results indicate that HateGAN enhances hate speech identification accuracy. Despite their contribution of implementing reinforcement learning, they did not provide a detailed explanation of its influence on the model's performance, nor a significant improvement in the results. Therefore, we do not consider this method in our approach for the moment.

2.2 GAN-BERT
GAN-BERT was first introduced by Croce et al. [19] (2020) as a viable solution to the lack of annotated data. They observed that semi-supervised learning could be beneficial in this case, improving generalization performance when only a small amount of labeled data is available. As a result, they proposed GAN-BERT, an extension of the BERT model combined with a generative adversarial network and fine-tuned on labeled and unlabeled data. They implemented their model on several classification datasets and found that the performance of their semi-supervised model improves as the size of the labeled dataset increases. Moreover, Jiang et al. [20] used CamemBERT and ChouBERT to build GAN-BERT models. They also examined varied losses while changing the number of labeled and unlabeled samples in the training French datasets, in order to provide a greater understanding of when and how to train GAN-BERT models for domain-specific document categorization. Adding to that, Jain et al. [21] worked on consumer sentiment analysis using GAN-BERT with aspect fusion. They extracted several service features from consumer evaluations and merged them with word sequences before feeding them into the model.

2.3 GAN-BERT for Hate Speech Detection
Ta et al. [22] handled the Detection of Aggressive and Violent INCIdents from Social Media in Spanish (DAVINCIS@IberLEF2022). In order to increase the
dataset size, they used back translation for data augmentation, implementing the models of Helsinki-NLP. By translating the original Spanish tweets to English, French, German, and Italian, then translating them back to Spanish to be used in the BERT-based model, they managed to balance the dataset and fill the violent-label deficiency. Moreover, working on both Bengali hate speech and fake news detection, Tanvir et al. [16] used a Bangla-BERT-based GAN-BERT model. They compared its performance with a Bangla-BERT baseline to interpret the benefit of implementing GAN, especially with a small number of data samples. In addition, Santos et al. [23] proposed an ensemble of two semi-supervised models in order to automatically generate a hate speech dataset in Portuguese with reduced bias. The first model incorporates the GAN-BERT network, using Multilingual BERT and BERTimbau, while the second model is based on label propagation, propagating labels from existing annotated corpora to an unlabeled dataset. Overall, the existing hate speech detection methods based on GAN-BERT have shown effectiveness, especially in languages other than English. These approaches have focused on languages such as Spanish, Portuguese, and Bengali, and have used personalized BERT variants pre-trained specifically for these languages in monolingual approaches. The goal of our paper is to build a multilingual BERT-based semi-supervised generative adversarial model. This method involves training simultaneously in many languages, including English, German, and Hindi, within labeled and unlabeled data, in order to share linguistic features. The primary aim of this research is to determine the influence of GAN-based algorithms in the context of multilingual text classification, with a particular emphasis on their performance on unlabeled datasets.
3 Methodology

3.1 Semi-supervised Generative Adversarial Network: SS-GAN
Starting with the general concept of Generative Adversarial Networks: the GAN was first introduced by Goodfellow et al., 2014 [14], and is composed of two components, a "generator" (G) and a "discriminator" (D). During training, the generator generates synthetic data while the discriminator determines whether the data is real or fake. In this context, G aims to generate data samples that increase the difficulty for D of distinguishing them from real data, whereas D aims to enhance its capacity to distinguish between these data samples. As a result, G generates progressively more realistic data. Subsequently, Salimans et al. [15] introduced, in 2016, the semi-supervised SS-GAN, a variant that enables semi-supervised learning in the GAN network, meaning that D also allocates a label to the data samples. Table 1 sums up the roles and associated loss functions of both D and G. Let p_real and p_g denote the real and generated data distributions, respectively; p(ŷ = y | x, y = k + 1) the probability that a data sample x is associated with the fake class; and p(ŷ = y | x, y ∈ (1, ..., k)) the probability that x is considered real.
Table 1. Roles and loss functions of the discriminator D and the generator G in SS-GAN frameworks.

Role of D: training over (k + 1) labels, D assigns "real" samples to one of the designated (1, ..., k) labels, while allocating generated samples to an additional class labeled k + 1.

Role of G: generating samples that are as similar to the real distribution p_real as possible.

Loss of D: L_D = L_sup + L_unsup, where

L_sup = −E_{x,y∼p_real} log p(ŷ = y | x, y ∈ (1, ..., k))
L_unsup = −E_{x∼p_real} log[1 − p(ŷ = y | x, y = k + 1)] − E_{x∼G} log p(ŷ = y | x, y = k + 1)

Here L_sup is the error of wrongly assigning a label to a real data sample, and L_unsup is the error of wrongly assigning the fake label to a real (unlabeled) data sample.

Loss of G: L_G = L_matching + L_unsup, where

L_matching = ‖E_{x∼p_real} f(x) − E_{x∼G} f(x)‖_2^2
L_unsup = −E_{x∼G} log[1 − p(ŷ = y | x, y = k + 1)]

Here f(x) denotes the activation (feature representation) on an intermediate layer of D; L_matching is the distance between the feature representations of real and generated data; and the generator's L_unsup is the error of D correctly identifying fake samples.
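For concreteness, the two losses can be written in PyTorch as follows; this is a sketch of the formulas in Table 1, not the authors' code, and it assumes the discriminator emits (k + 1)-way logits with index k reserved for the fake class.

```python
import torch
import torch.nn.functional as F

EPS = 1e-8  # numerical guard inside the logs

def discriminator_loss(logits_lab, labels, logits_unlab, logits_fake, k):
    # L_sup: error on labeled real examples (fake class is never the target)
    l_sup = F.cross_entropy(logits_lab, labels)
    p_fake_unlab = F.softmax(logits_unlab, dim=-1)[:, k]
    p_fake_gen = F.softmax(logits_fake, dim=-1)[:, k]
    # L_unsup: real data should not look fake; generated data should
    l_unsup = (-torch.log(1.0 - p_fake_unlab + EPS).mean()
               - torch.log(p_fake_gen + EPS).mean())
    return l_sup + l_unsup

def generator_loss(logits_fake, feat_real, feat_fake, k):
    # L_unsup: G is penalized when D correctly flags its samples as fake
    p_fake = F.softmax(logits_fake, dim=-1)[:, k]
    l_unsup = -torch.log(1.0 - p_fake + EPS).mean()
    # L_matching: match mean intermediate features f(x) of real and fake data
    l_match = torch.sum((feat_real.mean(dim=0) - feat_fake.mean(dim=0)) ** 2)
    return l_match + l_unsup
```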
3.2 SS-GAN-mBERT
Starting with a pre-trained mBERT model (https://github.com/google-research/bert/blob/master/multilingual.md), we fine-tuned it by adding GAN layers for semi-supervised learning. More specifically, assuming we are classifying a sentence s = (s_1, ..., s_n) over k classes, mBERT outputs n + 2 vector representations in R^d: (h_CLS, h_s1, ..., h_sn, h_SEP). The h_CLS representation is used as a sentence embedding for our classification task. As illustrated in Fig. 1, we combined the GAN architecture on top of mBERT by including an adversarial generator G and a discriminator D for the final classification. Both G and D are multi-layer perceptrons (MLPs). First, G takes a 50-dimensional noise vector and generates a vector h_fake ∈ R^d. This vector is then received by the discriminator D, along with the representation vector of real data (labeled and unlabeled) produced by mBERT: h_CLS. The last layer of the discriminator D, a softmax activation layer, outputs logits over 3 classes ('hateful and offensive', 'normal', and 'is real or fake?'). More specifically, during training, if real data are sampled (h = h_CLS), D classifies them into the 2 hate-related classes ('hateful and offensive' or 'normal'); otherwise, if h = h_fake, D classifies them over all 3 classes.
Fig. 1. Structure of the SS-GAN-mBERT model for multilingual hate speech detection. "L" refers to the labeled data subset and "U" to the unlabeled data subset. Given a random noise vector, the GAN generator G generates fake data samples and outputs vectors h_fake ∈ R^d, which are used as input to the discriminator D, along with the representations of the L and U data computed by mBERT as h_CLS ∈ R^d vectors for each of the given languages.
No Cost at Inference Time: After training, the generator G is no longer utilized during the inference phase, while the remainder of the original mBERT model and the discriminator D are kept for classification. This means that using the model for final classification does not require any additional computational overhead [19].
4 Experiments and Results
4.1 Dataset
In the HASOC track at FIRE 2019, Mandl et al. [24] created an Indo-European language corpus for hate speech and offensive content identification, extracted from Twitter and Facebook. They provided three publicly available datasets (https://hasocfire.github.io/hasoc/2019/) in English, German, and Hindi, which represent 40.82%, 26.63%, and 32.54% of the total training dataset, respectively. For each language, they provide train and test datasets labeled for three subtasks. In the first subtask, the data is binary-labeled into (HOF) Hate and Offensive and (NOT) Non Hate-Offensive. Figure 2 displays the class distribution of each language in this training dataset. As for the test set, English contains 34.71%, German 25.59%, and Hindi 39.68%.
Fig. 2. Class distribution over languages in the HASOC2019 training dataset. Note: in this corpus, English represents 40.82%, German 26.63%, and Hindi 32.54%.
In our work, we considered the first subtask. We divided the training set into 80% (∼11.5k samples) for the unlabeled set (U) and 20% (∼3k) for the labeled set (L), keeping the same class distribution. We selected this division because we aim to prove the efficiency of using GAN to train on small labeled datasets. We also present the evolution of our SS-GAN-mBERT model's performance (macro F1 score) using progressive percentages of the labeled dataset L. We analyze the influence of increasing this amount of data in order to demonstrate the importance of implementing GAN within a pre-trained language model so that it is efficient with the least amount of labeled data. This means that even with few annotated samples, SS-GAN-mBERT can give fairly good classification results, unlike pre-trained language models alone, which require large amounts of annotated data to achieve similar performance.
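A sketch of this split follows; the use of scikit-learn is an assumption about tooling, and the class-preserving behavior comes from stratification.

```python
from sklearn.model_selection import train_test_split

def split_hasoc(texts, labels, labeled_frac=0.20, seed=42):
    """80/20 split into unlabeled set U and labeled set L, keeping the same
    class distribution via stratification; U's labels are discarded."""
    u_texts, l_texts, _, l_labels = train_test_split(
        texts, labels, test_size=labeled_frac, stratify=labels,
        random_state=seed)
    return u_texts, l_texts, l_labels
```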
4.2 Experiments and Analysis
Training Scenarios. We focus on training two models: SS-GAN-mBERT and a baseline semi-supervised mBERT. First, as part of our multilingual approach, the training process considers all three languages of our dataset (English, German, and Hindi). Utilizing linguistic features and patterns shared across these languages, we aim to analyze the influence of this method on model performance; we evaluate results for each language separately using the separate test sets provided by HASOC2019. Adding to that, we consider a cross-lingual scenario: we train our models on the English dataset, because of its rich linguistic resources and its size compared to the other two languages. Then, using a zero-
shot learning paradigm, we evaluate these models on the other two languages. Lastly, by training models separately on each language, we investigate the monolingual scenario. This method contributes to a richer understanding of model behavior across many linguistic contexts by providing insights into the complexities and difficulties unique to each language.

Models Implementation. Given the computational resources used in the training process, we made the architecture of the GAN as simple as possible. The generator is implemented as a multi-layer perceptron (MLP) with one hidden layer and is used to generate fake data vectors. More specifically, it transforms noise vectors drawn from a standard normal distribution N(0, 1) (values sampled from the standard normal probability distribution with mean μ = 0 and standard deviation σ = 1). The generator consists of a linear layer that transforms the input noise vector of size 50 into a hidden vector of size 512, followed by a LeakyReLU activation with slope 0.2 and a dropout layer with rate 0.1. Similar to the generator, the discriminator is another MLP with one hidden layer, composed of a linear transformation layer with a 0.2 LeakyReLU activation, followed by a dropout layer (rate 0.1). The final linear layer outputs logits over 3 classes, including a separate class for fake/real data. These logits are then passed to a softmax activation layer to derive class probabilities. This architecture is used for our final classification task. To build our SS-GAN-mBERT model, we used "BERT-Base Multilingual Cased" (https://github.com/google-research/bert/blob/master/multilingual.md): trained on 104 languages, this transformer is composed of 12 layers, a hidden size of 768, and 12 attention heads, and it has 110M parameters. We selected the 'Cased' variant as it is mainly recommended for languages with non-Latin alphabets (e.g., Hindi). Moreover, our models were implemented using PyTorch (https://pytorch.org/) and trained with a batch size of 32 on Google Colab Pro (V100 GPU environment with 32 GB of RAM). We set the maximum sequence length to 200 and train our models for 5 epochs, with a learning rate of 1e-5 and AdamW optimizers for both the discriminator and the generator. We use accuracy and macro F1 score as evaluation metrics; results are displayed in Table 2.
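A sketch of the two MLPs as described (noise size 50, hidden size 512, LeakyReLU 0.2, dropout 0.1, three output logits) follows; the generator's final projection into mBERT's 768-dimensional space, the discriminator's hidden size, and all layer names are our assumptions.

```python
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=50, hidden=512, out_dim=768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, hidden), nn.LeakyReLU(0.2), nn.Dropout(0.1),
            nn.Linear(hidden, out_dim))   # h_fake lives in mBERT's R^768 space

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, in_dim=768, hidden=512, n_classes=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2), nn.Dropout(0.1))
        self.head = nn.Linear(hidden, n_classes)  # HOF / NOT / fake logits

    def forward(self, h):
        feat = self.body(h)           # intermediate features f(x), reusable
        return self.head(feat), feat  # softmax is applied in the loss sketch
```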
Table 2. Results of SS-GAN-mBERT in monolingual, cross-lingual, and multilingual training on the HASOC2019 dataset.

Training | Model | English Acc. | English F1 | German Acc. | German F1 | Hindi Acc. | Hindi F1
Monolingual | Baseline mBERT | 0.638 | 0.601 | 0.842 | 0.485 | 0.696 | 0.693
Monolingual | SS-GAN-mBERT | 0.731 | 0.673 | 0.811 | 0.538 | 0.754 | 0.754
Cross-lingual | Baseline mBERT | – | – | 0.657 | 0.502 | 0.567 | 0.557
Cross-lingual | SS-GAN-mBERT | – | – | 0.704 | 0.561 | 0.636 | 0.63
Multilingual | Baseline mBERT | 0.736 | 0.699 | 0.820 | 0.583 | 0.737 | 0.736
Multilingual | SS-GAN-mBERT | 0.753 | 0.708 | 0.771 | 0.609 | 0.783 | 0.783

In cross-lingual training, we implement zero-shot learning: training on English and testing on German and Hindi.
5 Discussions and Future Directions

5.1 Discussions
Improving Performance Through Iterative Labeled Data Increase: Based on the results in Table 2, we took the best training paradigm, multilingual training tested on Hindi, and reiterated the training of both models while progressively increasing the annotated dataset L. Maintaining the same size of the unlabeled set U, we start by sampling only 1% of L (very few samples: 29), then raise the labeled set size to 5%, 10%, 20%, etc. As explained in Subsect. 4.2, we consider the macro F1 score. Based on Fig. 3, we can clearly observe the difference between the baseline and SS-GAN-mBERT models, especially at the smallest percentages of L; even with almost the total amount of labeled data (80%–90%), the baseline could not reach the performance of SS-GAN-mBERT.
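The progressive-labeling experiment is essentially a sweep over label fractions; a sketch with a placeholder training routine follows (the fraction list beyond 20% is our illustrative assumption).

```python
FRACTIONS = [0.01, 0.05, 0.10, 0.20, 0.40, 0.60, 0.80, 0.90]

def label_fraction_sweep(L, U, train_and_eval):
    """Retrain at growing labeled fractions of L (same unlabeled set U) and
    record macro F1; `train_and_eval` is a placeholder for either model."""
    scores = {}
    for frac in FRACTIONS:
        n = max(1, int(frac * len(L)))   # 1% of L is only 29 samples
        scores[frac] = train_and_eval(L[:n], U)
    return scores
```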
Fig. 3. F1 Score Progress on Hindi: Baseline mBERT vs SS-GAN-mBERT in multilingual training.
Moreover, it was also evident that SS-GAN-mBERT reached the same performance as the baseline model with much less labeled data (e.g., the macro F1 score attained by SS-GAN-mBERT with 1% of L required more than 6% of L for the baseline to reach). Another aspect to consider is the requirement for labeled data. In this semi-supervised framework (whether GAN-mBERT or mBERT alone), with the provided unlabeled training sets U, neither model needed a large volume of annotated data. More specifically, as presented in Fig. 3, the baseline mBERT started giving a macro F1 score above 0.7 with ∼40% of L, while SS-GAN-mBERT needed only ∼30% to reach this performance; this indicates the benefit of SS-learning in reducing the need for data labeling. Overall, these experiments show that the need for annotated instances is reduced when the GAN structure is applied over SS-mBERT, and it could be reduced further by improving the structure of the GAN; a next step for future work is to implement more complex GAN structures with more hidden layers in both the generator and the discriminator.

Computational Cost at Inference Time: Considering the cost at inference time, as mentioned in Subsect. 3.2, we measured the time both models took in each training paradigm and did not observe a large difference (the maximum gap was 16 min in one training scenario), which shows that the training time of SS-GAN-mBERT remains quite similar to that of the baseline model. This suggests that SS-GAN-mBERT is an effective choice when both training efficiency and robustness matter, since its benefit at inference time does not come at the cost of a significantly longer training duration. However, this is still tied to the simple structure of our GAN's generator (an MLP); the time gap could grow when implementing a more complex structure. Overall, this opens new directions we aim to examine in future research.

5.2 Future Directions
We have chosen a constant noise vector of size 50 as input to our GAN's generator. We selected this value based on the results of our first experiments and on the computational efficiency it provided. In the future, we aim to develop strategies that automatically optimize the generator by setting the best noise vector size for any dataset. For instance, a Wasserstein GAN could help improve the diversity of the data produced by the generator and thus training stability [25]. Moreover, to deal with the problem of class imbalance, we aim to reduce its effect by implementing new data augmentation solutions such as back translation [22] or GAN-based data augmentation. Although this task still needs more work to improve GAN accuracy, there have been many promising attempts we aim to explore, such as Conditional GAN [26]. Furthermore, we aim to generalize better and employ more advanced multilingual Large Language Models (LLMs) like BLOOM and GPT-3. Although this requires more computational resources, we plan to start with smaller architectures like GPT-2 and Distil-GPT [27] and explore their performance within the SS-GAN model in future research.
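For illustration, a minimal PyTorch sketch of a generator and discriminator along the lines of the SS-GAN architecture of [19], using the noise vector size of 50 mentioned above; the layer sizes, activations, dropout rate, and the binary hate/non-hate head are our assumptions, not the exact configuration used in this work.

import torch
import torch.nn as nn

NOISE_DIM = 50     # constant noise vector size used as the generator input
HIDDEN = 768       # mBERT hidden size

class Generator(nn.Module):
    # MLP mapping noise to a fake mBERT-like sentence representation
    def __init__(self, noise_dim=NOISE_DIM, hidden=HIDDEN):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, hidden), nn.LeakyReLU(0.2), nn.Dropout(0.1),
            nn.Linear(hidden, hidden))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    # MLP classifying representations into k real classes plus one "fake" class
    def __init__(self, hidden=HIDDEN, num_classes=2):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(hidden, hidden),
                                  nn.LeakyReLU(0.2), nn.Dropout(0.1))
        self.head = nn.Linear(hidden, num_classes + 1)   # +1 for the fake class
    def forward(self, rep):
        features = self.body(rep)
        return features, self.head(features)

z = torch.randn(32, NOISE_DIM)   # a batch of noise vectors
fake_rep = Generator()(z)        # shape (32, 768), fed to the discriminator alongside real mBERT outputs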
6 Conclusion
In this paper, we introduced SS-GAN-mBERT, a semi-supervised generative adversarial model, which achieved remarkable performance in both multilingual and zero-shot cross-lingual hate speech detection for English, German, and Hindi. Our method emphasizes the usefulness of semi-supervised learning to address the challenge of labeled-data scarcity, yielding impressive results that were further improved via a Generative Adversarial Network (GAN).
References
1. Social Media and Democracy: The State of the Field, Prospects for Reform. SSRC Anxieties of Democracy. Cambridge University Press (2020)
2. Fortuna, P., Nunes, S.: A survey on automatic detection of hate speech in text. ACM Comput. Surv. 51(4), 1–30 (2018)
3. Pamungkas, E.W., Basile, V., Patti, V.: Towards multidomain and multilingual abusive language detection: a survey. Pers. Ubiquit. Comput. 27(1), 17–43 (2023)
4. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota, vol. 1, pp. 4171–4186 (2019)
5. Mozafari, M., Farahbakhsh, R., Crespi, N.: A BERT-based transfer learning approach for hate speech detection in online social media. In: Cherifi, H., Gaito, S., Mendes, J.F., Moro, E., Rocha, L.M. (eds.) COMPLEX NETWORKS 2019. SCI, vol. 881, pp. 928–940. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-36687-2_77
6. Mozafari, M., Farahbakhsh, R., Crespi, N.: Hate speech detection and racial bias mitigation in social media based on BERT model. PLoS ONE 15(8), e0237861 (2020)
7. Mnassri, K., Rajapaksha, P., Farahbakhsh, R., Crespi, N.: BERT-based ensemble approaches for hate speech detection. In: IEEE GLOBECOM, pp. 4649–4654 (2022)
8. Mnassri, K., Rajapaksha, P., Farahbakhsh, R., Crespi, N.: Hate speech and offensive language detection using an emotion-aware shared encoder. arXiv preprint arXiv:2302.08777 (2023)
9. Mozafari, M., Farahbakhsh, R., Crespi, N.: Cross-lingual few-shot hate speech and offensive language detection using meta learning. IEEE Access 10, 14880–14896 (2022)
10. Kovács, G., Alonso, P., Saini, R.: Challenges of hate speech detection in social media: data scarcity, and leveraging external resources. SN Comput. Sci. 2, 1–15 (2021)
11. Yin, W., Zubiaga, A.: Towards generalisable hate speech detection: a review on obstacles and solutions. PeerJ Comput. Sci. 7, e598 (2021)
12. D'Sa, A.G., Illina, I., Fohr, D., Klakow, D., Ruiter, D.: Label propagation-based semi-supervised learning for hate speech classification. In: Proceedings of the First Workshop on Insights from Negative Results in NLP, Online, November 2020, pp. 54–59. Association for Computational Linguistics (2020)
13. Alsafari, S., Sadaoui, S.: Semi-supervised self-learning for Arabic hate speech detection. In: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 863–868 (2021)
14. Goodfellow, I., et al.: Generative adversarial networks. Commun. ACM 63(11), 139–144 (2020)
15. Salimans, T., et al.: Improved techniques for training GANs. In: Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 29. Curran Associates Inc. (2016)
16. Tanvir, R., et al.: A GAN-BERT based approach for Bengali text classification with a few labeled examples. In: 19th International Conference on Distributed Computing and Artificial Intelligence, pp. 20–30 (2023)
17. Cao, R., Lee, R.K.-W.: HateGAN: adversarial generative-based data augmentation for hate speech detection. In: Proceedings of the 28th International Conference on Computational Linguistics, Online, Barcelona, Spain, December 2020, pp. 6327–6338. International Committee on Computational Linguistics (2020)
18. Yu, L., Zhang, W., Wang, J., Yu, Y.: SeqGAN: sequence generative adversarial nets with policy gradient. In: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pp. 2852–2858 (2017)
19. Croce, D., Castellucci, G., Basili, R.: GAN-BERT: generative adversarial learning for robust text classification with a bunch of labeled examples. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, July 2020, pp. 2114–2119 (2020)
20. Jiang, S., Cormier, S., Angarita, R., Rousseaux, F.: Improving text mining in plant health domain with GAN and/or pre-trained language model. Front. Artif. Intell. 6, 1072329 (2023)
21. Jain, P.K., Quamer, W., Pamula, R.: Consumer sentiment analysis with aspect fusion and GAN-BERT aided adversarial learning. Expert Syst. 40(4), e13247 (2023)
22. Ta, H.T., Rahman, A.B.S., Najjar, L., Gelbukh, A.: GAN-BERT: adversarial learning for detection of aggressive and violent incidents from social media. In: Proceedings of IberLEF, CEUR-WS (2022)
23. Santos, R.B., Matos, B.C., Carvalho, P., Batista, F., Ribeiro, R.: Semi-supervised annotation of Portuguese hate speech across social media domains. In: Cordeiro, J., Pereira, M.J., Rodrigues, N.F., Pais, S. (eds.) 11th SLATE Conference, vol. 104, pp. 11:1–11:14 (2022)
24. Mandl, T., et al.: Overview of the HASOC track at FIRE 2019: hate speech and offensive content identification in Indo-European languages. In: Proceedings of the 11th Annual Meeting of the Forum for Information Retrieval Evaluation, pp. 14–17. Association for Computing Machinery (2019)
25. de Rosa, G.H., Papa, J.P.: A survey on text generation using generative adversarial networks. Pattern Recogn. 119(C), 108098 (2021)
26. Silva, K., Can, B., Sarwar, R., Blain, F., Mitkov, R.: Text data augmentation using generative adversarial networks - a systematic review. J. Comput. Appl. Linguist. 1, 6–38 (2023)
27. Yu, Z.Z., Jaw, L.J., Jiang, W.Q., Hui, Z.: Fine-tuning language models with generative adversarial feedback (2023)
Exploring the Power of Weak Ties on Serendipity in Recommender Systems
Wissam Al Jurdi1,2(B), Jacques Bou Abdo3, Jacques Demerjian2, and Abdallah Makhoul1
1 Université Bourgogne Franche-Comté, FEMTO-ST Institute, CNRS, Montbéliard, France
wissam.al [email protected], [email protected]
2 Lebanese University, LaRRIS, Faculty of Sciences, Fanar, Lebanon
[email protected]
3 University of Cincinnati, Cincinnati, USA
[email protected]
Abstract. With our increasingly refined online browsing habits, the demand for high-grade recommendation systems has never been greater. Improvements constantly target general performance, evaluation, security, and explainability, but optimizing for serendipitous experiences is imperative since a serendipity-optimized recommender helps users discover unforeseen relevant content. Given that serendipity is a form of genuine unexpected experiences and recommenders are facilitators of user experiences, we aim at leveraging weak ties to explore their impact on serendipity. Weak links refer to social connections between individuals or groups that are not closely related or connected but can still provide valuable information and opportunities. On the other hand, the underlying social structure of recommender datasets can be misleading, rendering traditional network-based approaches ineffective. For that, we developed a network-inspired clustering mechanism to overcome this obstacle. This method elevates the system's performance by optimizing models for unexpected content. By leveraging group weak ties, we aim to provide a novel perspective on the subject and suggest avenues for future research. Our study can also have practical implications for designing online platforms that enhance user experience by promoting unexpected discoveries.
Keywords: recommender systems · clustering · serendipity · weak ties · social science
1 Introduction
Recommendation systems utilize complex algorithms to analyze large datasets and generate personalized user recommendations. They have proven to be highly effective in various industries, including e-commerce and media [1], and have been shown to improve user engagement and satisfaction significantly [2]. Recommenders are typically designed utilizing data features such as users' search
and interaction details. Creating an effective recommendation system that aligns with the primary goal of recommenders remains a subjective and very challenging task [3], despite the abundance of research, convenient models, and diversified workflows, such as the recent release of Microsoft's best practices for building recommendation systems [4]. The main goal of recommender systems is to provide users with information tailored to their interests [5,6]. Despite the significant agreement on this general definition, research pathways frequently prioritize other aspects that sometimes even impede the attainment of this goal, as the study by Herlocker et al. highlights regarding the accuracy-improvement branches that stemmed from a concept in evaluation termed the "magic barrier", i.e., the point beyond which recommenders fail to become more accurate [7,8]. Nonetheless, recent research has focused on further refining recommender systems by addressing crucial issues in general performance [9,10], noise and evaluation [11–13], security [14], and explainability [15]. Despite this progress, a notable gap remains: designing methods or frameworks that enhance user engagement and knowledge through chance discoveries and exploration of the unknown. The concept of the "strength of weak ties" is an influential social science theory that emphasizes the role of weak associations, such as acquaintances, in spreading information and creating opportunities through social networks. Weak ties are more likely to provide novel information than strong ties, such as close friends, who tend to share similar perspectives and resources [16]. As we have highlighted, recommender algorithms aim to suggest information likely to interest a user. Therefore, incorporating weak ties into recommender systems can enhance their effectiveness by presenting users with unforeseen recommendations they may not have otherwise discovered, as a recent study by Duricic et al. hints [17]. This performance is not to be confused with general performance in terms of precision or accuracy; it is more complex to measure and set up and requires methods beyond the conventional ones [7,11,13]. Weak ties can also help overcome the cold-start problem, where new users have insufficient data to generate personalized recommendations. However, weak links also pose privacy challenges, as users may not want to reveal their preferences or behavior to distant or unknown connections [18]. Our approach uses social network theory to measure the impact of weak connections between user groups on serendipity, making it a vital part of the recommender ecosystem. Narrowing the focus to chance discoveries allows us to advance recommender enhancement toward their primary objective. There are two types of recommender data sources: social and rating [19]. Social datasets contain data about user relationships or interactions, such as friendships, likes, etc. Rating datasets contain data about user ratings for items or services, such as stars, preferences, etc. In our study, we target rating-based datasets and recommenders. This research also builds on previous validation and serendipity approaches [11,13,20] and adapts a new optimization framework to train recommenders oriented towards chance discoveries for users.
The following section explores the latest research while placing our work in this context. Section 3 introduces our unique approach involving community-based data processing and cluster assessment. We present the experimental results and analysis in Sect. 4 and conclude with Sect. 5.
2 Background and Related Work
This section provides an overview of the research on serendipity in recommender systems. Currently, there is no established method to specifically improve the chances of discovering new and unexpected recommendations and to increase user involvement in recommenders. This is especially true when it comes to making use of weak connections between clusters.
2.1 Serendipity in Recommenders
Some recent recommender system proposals aim to improve serendipity. For example, Kotkov et al. [21] proposed a new definition of serendipity in recommenders that considers items that are surprising, valuable, and explainable, arguing that the common understanding and original meaning of serendipity is conceptually broader and does not require serendipitous encounters to be novel or unexpected. Others have proposed a multi-view graph contrastive learning framework that can enhance cross-domain sequential recommendation by exploiting serendipitous connections between different domains [22]. The study by Ziarani et al. [23] is crucial here: it reviews the serendipity-oriented approaches in recommender systems and emphasizes the significance of serendipity in generating attractive and practical recommendations, reinforcing the point about the direction and primary objective of recommenders made in this work's introduction. The approaches covered in that study generally enhance serendipity by introducing randomness into the recommendation process, which can lead to discovering new and exciting items that the user might not have found otherwise. In addition, serendipity can be enhanced by incorporating diversity into the recommendation process, which helps reduce over-specialization and makes recommendations more interesting and engaging. The study concludes that while there is no agreement on the definition of serendipity, most studies find serendipitous recommendations valuable and unexpected. In a study about surprise in recommenders by Eugene Yan [24], the importance of a serendipity metric is discussed. The author argues that while accuracy is an essential metric for recommendation systems, it is not the only one that matters: systems that focus solely on accuracy can lead to information over-specialization, making recommendations boring and predictable. To address this, the author suggests incorporating serendipity as a criterion for making appealing and valuable recommendations. Serendipity here is a criterion for making recommendations that are unexpected and relevant to the user's interests [23]. The usefulness of serendipitous recommendations
is the main advantage of this criterion over novelty and diversity. The article highlights that serendipity can be measured using various metrics such as surprise, unexpectedness, and relevance [24], and explains that serendipity-oriented recommender systems have been the focus of many studies in recent years. The author conducted a systematic literature review of previous studies on serendipity-oriented recommender systems, focusing on the contextual convergence of serendipity definitions, datasets, serendipitous recommendation methods, and their evaluation techniques [23]. The review results indicate that the quality and quantity of articles on serendipity-oriented recommender systems are growing. In conclusion, incorporating serendipity as a criterion can make recommendations more appealing and valuable; it can also address information over-specialization and make recommendations more diverse. One study, by Bhandari et al. [25], proposes a method for recommending serendipitous apps using graph-based techniques. The approach can recommend apps even if users do not specify their preferences and can discover highly diverse apps. The authors also introduce randomness into the recommendation process to increase the likelihood of finding new and exciting items that the user may not have discovered otherwise. Therefore, similar to the studies covered in [23], this app-recommendation process also relies on randomness.
2.2 Recommendations and Social Network Connections
Another proposal, by Jenders et al. [26], introduces a content-based recommendation technique focusing on the serendipity of news recommendations. Serendipitous recommendations are unexpected yet fortunate and interesting to the user and thus might yield higher user satisfaction. The authors explore the concept of serendipity in the area of news articles and propose a general framework that combines the benefits of serendipity- and similarity-based recommendation techniques; an evaluation against other baseline recommendation models is carried out in a user study. Based on the studies mentioned above, it is clear that enhancing serendipity is a crucial step in improving recommender systems. However, there is currently no established framework for achieving this goal besides incorporating randomization into the system.
3 Community-Based Mechanism
To establish weak-connection-based recommendations, multiple steps are involved in utilizing recommender data. We can ideally set two kinds of connections: social-based links [19] or non-social links inferred from user behavior. In our experimentation, we introduce the latter and develop an approach that could be expanded further if recommender datasets were enriched with more information, particularly those with social and rating-based components. The
diagram shown in Fig. 1 depicts the metadata level, where we aim to enhance the recommendations by processing data differently through the community-based mechanism. To achieve this, we use an approach inspired by network theory, grouping users and utilizing the weak links between them and the communities (or groups) they belong to. We use techniques like the Gower distance [27] to form initial user clusters and then create principal collections to establish higher-level communities. Theoretically, this should help us optimize the recommendations to provide more relevant and unexpected suggestions. Next, we refine the training process for the recommender system; this includes modifying the cluster and principal-group formation parameters and generating various versions of the potential "weak links" between groups, as depicted in Fig. 1. The aim is to avoid bias or overfitting towards a particular set of communities and links.
Fig. 1. This schematic illustrates a high-level difference between normal data processing and group-based processing.
Serendipity-Based Evaluation. While accuracy is vital for recommendation systems, it is not the only metric that matters. Incorporating serendipity, defined as making unexpected and relevant recommendations, can make recommendations more appealing and valuable. Serendipity can be measured using metrics like surprise, unexpectedness, and relevance. Previous studies on serendipity-oriented recommender systems show that incorporating serendipity can help make recommendations more diverse and address issues related to information over-specialization. Following the study by Eugene Yan [24], serendipity can be measured using the following formula:
serendipity(i) = unexpectedness(i) × relevance(i)   (1)
where relevance(i) = 1 if i is interacted upon and 0 otherwise. For unexpectedness, we use one of several approaches proposed in the literature [24,28]. This approach considers a distance metric (e.g., cosine similarity): we compute the cosine similarity between a user's recommended items
(I) and historical interactions (H). Lower cosine similarity indicates higher unexpectedness:
unexpectedness(I, H) = (1/|I|) Σ_{i∈I} Σ_{h∈H} cos(i, h)   (2)
The overall serendipity is obtained by averaging over all users (U) and all recommended items (I):
serendipity = (1/(count(U) · count(I))) Σ_{u∈U} Σ_{i∈I} serendipity(i)   (3)
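A small NumPy sketch of one plausible reading of Eqs. (1)–(3), assuming each item is represented by a feature vector; the function names and the per-item averaging over the history H are our own choices, since the text leaves these details open.

import numpy as np

def item_unexpectedness(I_vecs, H_vecs):
    # Eq. (2): cosine similarity of each recommended item to the user's history,
    # averaged over H (one per-item reading); lower values mean more unexpected
    I = I_vecs / np.linalg.norm(I_vecs, axis=1, keepdims=True)
    H = H_vecs / np.linalg.norm(H_vecs, axis=1, keepdims=True)
    return (I @ H.T).mean(axis=1)          # one aggregate score per recommended item

def user_serendipity(I_vecs, H_vecs, interacted):
    # Eq. (1): serendipity(i) = unexpectedness(i) x relevance(i), where
    # relevance(i) = 1 iff item i was interacted upon; averaged per Eq. (3)
    rel = np.asarray(interacted, dtype=float)
    return float((item_unexpectedness(I_vecs, H_vecs) * rel).mean())

# overall serendipity: np.mean([user_serendipity(...) for each user u in U])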
The following section explains how we form user clusters and groups. As we measure serendipity on the group level, we use a recently proposed group-based validation technique [13] to track performance on smaller data portions, which helps avoid averaged results that may mask essential effects. Therefore, changes in serendipity are measured by changes in its level within groups (e.g., group A in Fig. 1) rather than by the overall user serendipity of Eq. 3.
User Clusters and Groups. In this section, we cover the process of forming clusters and higher-level groups, after discussing the method of evaluating groups of users in the previous section. Two levels are involved in this process: clustering users together at the first level, and forming larger groups that can connect and include user groups with weak and strong links at the second level. We experiment with various versions of weak links between user clusters, as there can be multiple variations of higher-level cluster groups. This demonstrates the method's adaptability to different datasets. To create the first level of user clusters for datasets like ML-100k, which often have both categorical and non-categorical data, we employ the Gower distance method [27] to produce a distance matrix. This approach calculates the distance between two entities based on their mixed categorical and numerical attribute values. We then use hierarchical clustering to refine the grouping further. For given features xi = (xi1, ..., xip) in a dataset, the Gower similarity can be defined as:
SGower(xi, xj) = Σ_{k=1}^{p} sijk δijk / Σ_{k=1}^{p} δijk   (4)
For each feature k = 1, ..., p, a score sijk is calculated. A quantity δijk is also calculated, with a binary value depending on whether the input variables xi and xj can be compared. SGower(xi, xj) is a similarity score, so the final result is converted into a distance metric via dGower = √(1 − SGower). For numerical variables, the score is a simple L1 distance between the two values normalized by the range Rk of the feature:
sijk = 1 − |xik − xjk| / Rk   (5)
For categorical variables, the score is 1 if the categories are the same and 0 otherwise:
sijk = 1{xik = xjk}   (6)
Several linkage methods exist to compute the distance d(s, t) between two clusters s and t using the distance matrix obtained with Eq. 4. We utilize the general-purpose agglomerative clustering algorithm proposed by Müllner [29]. The algorithm begins with a forest of clusters that have yet to be used in the hierarchy being formed. When two clusters s and t from this forest are combined into a single cluster u, s and t are removed from the forest and u is added to it. When only one cluster remains in the forest, the algorithm stops, and this cluster becomes the root; a code sketch of this clustering pipeline is given after the list below. In the following section, we present experimental results for the above method. The experiment has three main goals:
– Investigating the impact of recommending items via weak-linked groups.
– Determining whether optimizations in one group can impact others.
– Showcasing the effect of utilizing weaker connections alongside group linkage tuning and whether more favorable outcomes can be achieved.
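A sketch of the clustering pipeline using the community gower package and SciPy's agglomerative clustering; the average-linkage choice and the package itself are our assumptions, since the text does not fix a specific implementation, and ml100k_user_frame is a placeholder for the user-feature table.

import numpy as np
import gower                                          # pip install gower
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_users(user_features, t):
    # pairwise Gower distances over mixed features (Eqs. 4-6); the paper uses
    # dGower = sqrt(1 - SGower), hence the square root over the package output
    D = np.sqrt(gower.gower_matrix(user_features))
    Z = linkage(squareform(D, checks=False), method="average")
    return fcluster(Z, t=t, criterion="distance")     # one cluster label per user

# e.g., labels = cluster_users(ml100k_user_frame, t=6.48)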
4 Results and Discussions
In this section, we discuss the results of experiments on two open-source datasets, ML-100k [30] and Epinions [31]. Our work does not focus on a specific recommender algorithm but on the experimentation process; we use LightGCN [32], a relatively recent, simplified graph-convolutional variant of Neural Graph Collaborative Filtering (NGCF), as an example recommender. We have created multiple versions of the code and experiment scenarios, all available in the source [33].
Fig. 2. A comparison between the average serendipity value of a group and the baseline value achieved during regular data processing.
Our initial goal is to measure the impact on group serendipity. We plan to achieve this by selecting a formed group and tuning the recommender through training to favor recommendations from weakly-linked communities. The second objective is to determine whether this approach affects only the target group or also any other group in the dataset that shares common users. Figure 2 displays the average group serendipity obtained from one of our experiments on ML-100k. These groups were created through the process explained in Sect. 3. We observed a notable increase in the serendipity factor in multiple groups compared to the baseline process, ranging from 5% to 18%. The baseline process involved standard data processing and training using the same recommender parameters and tuning. However, only two out of the ten groups showed a decrease in the metric.
Table 1. The evaluation metric values for baseline and community-based processes.
Metric      Baseline   Group Avg.   Change (%)
Precision   0.2897     0.2288       −21.04
MAP         0.1424     0.0993       −30.26
NDCG        0.3716     0.2874       −22.65
Recall      0.2440     0.1866       −23.50
Coverage    0.3610     0.1741       −51.76
After analyzing the offline metric results of the system, it is evident that all of them decreased compared to the baseline run. This decrease can be attributed to the increase in serendipity, which leads to a corresponding decline in precision and recall; the results are shown in Table 1. However, offline evaluation alone is not enough to determine actual relevance: only through online experimentation can we accurately gauge the model's effectiveness [24]. One interesting finding is the decrease in coverage. As explained in Sect. 2, increasing coverage (or introducing more randomness) into the dataset typically increases serendipity during offline evaluation, which is a limitation of using this measure offline instead of in an A/B test. However, we took steps to minimize this effect by ensuring that our final recommendations were unbiased and that we did not filter out items from the long tail. Online tests can further validate the serendipity metric: several small studies have shown that users converge more with recommenders having lower accuracy and precision metrics [23,24], and our results are in line with this trend. Subsequently, we explore whether optimizing surprise in one group can impact the others using the approach above. We include the cluster-level outcomes (refer to Fig. 1) for clarity. In Fig. 3, we show the effect of the same serendipity metric at the cluster level of the ML-100k dataset. The figure shows three cases where our approach is optimized to increase serendipity in one group while measuring the effect on
Fig. 3. Three scenarios that compare cluster serendipity to the baseline.
the others. It can be noticed here how, with no group-specific tuning, the result can be better (first figure), almost the same with minor exceptions (middle figure), or worse (last figure). As mentioned in Sect. 3, forming user clusters and groups is a sensitive process, with multiple possibilities for the weak links.
Fig. 4. The distribution of user serendipity metric values as group formations slightly vary.
In Sect. 3, we discussed the hierarchical clustering method, which can simplify the dendrogram and assign data points to individual clusters. The assigned clusters are determined by a distance threshold, denoted t. A smaller threshold groups only the closest data points, yielding many small clusters, while a larger threshold merges more points together, resulting in fewer, larger communities. By varying the value of t, we can produce different group representations that could affect the outcome of the metrics obtained in the initial stage of the experiments. To address this, we conduct multiple iterations that result in diverse weakly-linked groups. Subsequently, we implement recommendations and re-evaluate the serendipity metric to determine any impact on the results. The boxplot in Fig. 4 displays the results. In the ML-100k scenario (on the left), we can observe an overall rise in user serendipity after the initial three iterations. This suggests an enhanced surprise element for most groups, as previously demonstrated in an experiment. We attain the highest value at approximately t = 6.48, corresponding to the optimal distance between the groups formed. This
distance has a positive impact on serendipitous recommendations for almost all groups. We achieved the same outcome for the Epinions dataset, although the parameter scale and the optimal distance between groups differed slightly. The best results were obtained with values between t = 7 and t = 8.5, and we found that further adjustments did not significantly improve results for most of the clusters. Using varied weak connections between groups can improve outcomes. Testing multiple scenarios helps find the best distance for each optimization run.
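The threshold sweep can be expressed as a short loop over the pipeline sketched in Sect. 3; evaluate_group_serendipity is a hypothetical hook standing in for the group-level evaluation described earlier, and the sweep range is chosen only to bracket the optima reported above.

import numpy as np

# sweep the dendrogram cut-off t and re-score the resulting groups at each setting
for t in np.linspace(5.0, 8.5, 8):
    labels = cluster_users(user_features, t)          # sketch from Sect. 3
    score = evaluate_group_serendipity(labels)        # hypothetical evaluation hook
    print(f"t = {t:.2f}  mean group serendipity = {score:.4f}")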
5 Conclusions and Future Work
This study emphasizes the significance of prioritizing chance discoveries for users in recommender systems. We developed a method based on social network theory that utilizes weak links to enhance recommender performance. As non-social links inferred from user behavior had not previously been used this way, we created a process for deriving them in this work, strategically forming communities rather than introducing random data. Our experiments yielded a positive result in enhancing the level of unexpectedness and surprise for users within the system: we demonstrated that recommending items through weakly-linked communities among different users favors surprise and user engagement, as measured by a serendipity metric, and this was achieved without any intentional randomness introduced into the data. The clustering and grouping process can be improved by tuning with enhanced data for recommender systems, specifically data that reinforces social and rating-based behaviors within communities. Additionally, the measure of serendipity is impacted by items recommended from the long tail; while we avoided any bias in suggesting long-tail items, conducting A/B tests to confirm convergence, rather than relying solely on offline tests, would be beneficial. Finally, future research could explore social clustering to validate whether different effects can be achieved on the serendipity of the model.
Acknowledgments. This work has been supported by the Lebanese University Research Program.
References
1. Jannach, D., Jugovac, M.: Measuring the business value of recommender systems. ACM Trans. Manage. Inf. Syst. (TMIS) 10(4), 1–23 (2019)
2. Zhang, S., Yao, L., Sun, A., Tay, Y.: Deep learning based recommender system: a survey and new perspectives. ACM Comput. Surv. (CSUR) 52(1), 1–38 (2019)
3. Zhou, T., Kuscsik, Z., Liu, J.-G., Medo, M., Wakeling, J.R., Zhang, Y.-C.: Solving the apparent diversity-accuracy dilemma of recommender systems. Proc. Natl. Acad. Sci. 107(10), 4511–4515 (2010)
4. Argyriou, A., González-Fierro, M., Zhang, L.: Microsoft recommenders: best practices for production-ready recommendation systems. In: 2020 Companion Proceedings of the Web Conference, pp. 50–51 (2020)
5. Ricci, F., Rokach, L., Shapira, B.: Recommender systems: techniques, applications, and challenges. In: Ricci, F., Rokach, L., Shapira, B. (eds.) Recommender Systems Handbook. Springer, New York (2022). https://doi.org/10.1007/978-1-0716-2197-4_1
6. Nabizadeh, A., Jorge, A., Leal, J.P.: Long term goal oriented recommender systems. In: 11th International Conference on Web Information Systems and Technologies (2015)
7. Herlocker, J.L., Konstan, J.A., Terveen, L.G., Riedl, J.T.: Evaluating collaborative filtering recommender systems. ACM Trans. Inf. Syst. (TOIS) 22(1), 5–53 (2004)
8. McNee, S.M., Riedl, J., Konstan, J.A.: Being accurate is not enough: how accuracy metrics have hurt recommender systems. In: CHI'06 Extended Abstracts on Human Factors in Computing Systems, pp. 1097–1101 (2006)
9. Ma, T., Wang, X., Zhou, F., Wang, S.: Research on diversity and accuracy of the recommendation system based on multi-objective optimization. Neural Comput. Appl. 35(7), 5155–5163 (2023)
10. Yannam, V.R., Kumar, J., Babu, K.S., Patra, B.K.: Enhancing the accuracy of group recommendation using slope one. J. Supercomput. 79(1), 499–540 (2023)
11. Al Jurdi, W., Abdo, J.B., Demerjian, J., Makhoul, A.: Critique on natural noise in recommender systems. ACM Trans. Knowl. Disc. Data (TKDD) 15(5), 1–30 (2021)
12. Al Jurdi, W., Abdo, J.B., Demerjian, J., Makhoul, A.: Strategic attacks on recommender systems: an obfuscation scenario. In: 2022 IEEE/ACS 19th International Conference on Computer Systems and Applications (AICCSA), pp. 1–8. IEEE (2022)
13. Al Jurdi, W., Abdo, J.B., Demerjian, J., Makhoul, A.: Group validation in recommender systems: framework for multi-layer performance evaluation. arXiv preprint arXiv:2207.09320 (2022)
14. Pramod, D.: Privacy-preserving techniques in recommender systems: state-of-the-art review and future research agenda. Data Technol. Appl. 57(1), 32–55 (2023)
15. Chatti, M.A., Guesmi, M., Muslim, A.: Visualization for recommendation explainability: a survey and new perspectives. arXiv preprint arXiv:2305.11755 (2023)
16. Granovetter, M.S.: The strength of weak ties. Am. J. Sociol. 78(6), 1360–1380 (1973)
17. Duricic, T., Lacic, E., Kowald, D., Lex, E.: Exploiting weak ties in trust-based recommender systems using regular equivalence. arXiv preprint arXiv:1907.11620 (2019)
18. Ramakrishnan, N., Keller, B.J., Mirza, B.J., Grama, A.Y., Karypis, G.: When being weak is brave: privacy in recommender systems. arXiv preprint cs/0105028 (2001)
19. Shokeen, J., Rana, C.: Social recommender systems: techniques, domains, metrics, datasets and future scope. J. Intell. Inf. Syst. 54(3), 633–667 (2020)
20. Al Jurdi, W., El Khoury Badran, M., Jaoude, C.A., Abdo, J.B., Demerjian, J., Makhoul, A.: Serendipity-aware noise detection system for recommender systems. In: Proceedings of the Information and Knowledge Engineering (2018)
21. Kotkov, D., Medlar, A., Glowacka, D.: Rethinking serendipity in recommender systems. In: Proceedings of the 2023 Conference on Human Information Interaction and Retrieval, pp. 383–387 (2023)
22. Wang, Y., Min, Y., Chen, X., Ji, W.: Multi-view graph contrastive representation learning for drug-drug interaction prediction. In: Proceedings of the Web Conference 2021, pp. 2921–2933 (2021)
23. Ziarani, R.J., Ravanmehr, R.: Serendipity in recommender systems: a systematic literature review. J. Comput. Sci. Technol. 36, 375–396 (2021)
24. Yan, E.: Serendipity: accuracy's unpopular best friend in recommender systems. Towards Data Science, April 2020
25. Bhandari, U., Sugiyama, K., Datta, A., Jindal, R.: Serendipitous recommendation for mobile apps using item-item similarity graph. In: Banchs, R.E., Silvestri, F., Liu, T.-Y., Zhang, M., Gao, S., Lang, J. (eds.) AIRS 2013. LNCS, vol. 8281, pp. 440–451. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-45068-6_38
26. Jenders, M., Lindhauer, T., Kasneci, G., Krestel, R., Naumann, F.: A serendipity model for news recommendation. In: Hölldobler, S., Krötzsch, M., Peñaloza, R., Rudolph, S. (eds.) KI 2015. LNCS (LNAI), vol. 9324, pp. 111–123. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24489-1_9
27. Gower, J.C.: A general coefficient of similarity and some of its properties. Biometrics 27, 857–871 (1971)
28. Panagiotis, A.: On unexpectedness in recommender systems: or how to expect the unexpected. In: Proceedings of the Workshop on Novelty and Diversity in Recommender Systems (DiveRS 2011), at the 5th ACM International Conference on Recommender Systems (RecSys 2011), pp. 11–18 (2011)
29. Müllner, D.: Modern hierarchical, agglomerative clustering algorithms. arXiv preprint arXiv:1109.2378 (2011)
30. Harper, F.M., Konstan, J.A.: The MovieLens datasets: history and context. ACM Trans. Interact. Intell. Syst. (TIIS) 5(4), 1–19 (2015)
31. Richardson, M., Agrawal, R., Domingos, P.: Trust management for the semantic web. In: Fensel, D., Sycara, K., Mylopoulos, J. (eds.) ISWC 2003. LNCS, vol. 2870, pp. 351–368. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-39718-2_23
32. Wang, X., He, X., Wang, M., Feng, F., Chua, T.-S.: Neural graph collaborative filtering. In: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 165–174 (2019)
33. Al Jurdi, W., Abdo, J.B.: GitHub Repository: Optimizing Recommendations: A Contemporary Networks-inspired Approach, June 2023
Infrastructure Networks
An Interaction-Dependent Model for Probabilistic Cascading Failure
Abdorasoul Ghasemi1,2(B), Hermann de Meer2, and Holger Kantz3
1 Faculty of Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran
[email protected]
2 Faculty of Computer Science and Mathematics, Passau University, Passau, Germany
3 Max Planck Institute for the Physics of Complex Systems, Dresden, Germany
Abstract. We suggest interaction-CASCADE as a combined model by extending CASCADE, a probabilistic high-level model, to consider the underlying components' failure interaction graph, which could be derived using detailed models. In interaction-CASCADE, the total incurred overload after each component failure is the same as in CASCADE; however, the overload transfers to the out-neighbors of the failed component given by the interaction graph. We first assume that the components' initial loads are independent of their out- and in-degrees in the interaction graph and show that even though the process's dynamics depend on the interaction graph's structure, the critical load beyond which the probability of total failure is significant does not change. We then discuss how assigning lighter loads to components with higher in-degrees can shift the minimum critical load to higher values. Simulation results for random Erdős-Rényi and power-law degree-distributed interaction graphs are provided and discussed.
Keywords: failure cascading · power network · interaction graph
1 Introduction
Cascading failure is a high-risk event in which the overall cost, e.g., the number of shutdown users or the amount of shedding load in the power grid, increases in the same order as the probability of the event decreases. An exogenous incident in the power network triggers cascading and leads to uncontrolled successive loss of network components because of complex endogenous interactions among the components failures [1]. These counterintuitive interactions behave differently for different groups of failures and might be non-local, i.e., a far distance component is affected without affecting the in-between components. There are different explanations for the origins of failure cascading in power networks using the self-organized criticality models [2,3] and the highly optimized tolerance model of the robust complex systems [4]. A recent study relates the massive blackouts in power grids to the inherited power-law nature of city inhabitants [5]. Even though this analysis is based on a questionable assumption
of power-law distribution for line capacities, it highlights the possible impact of interactions between system components' failures on the cascade process. Equally important, there are studies modeling how failure cascading unfolds in the network. Cascade modeling in networked systems is crucial since experimental studies of failure cascading are impossible, the available recorded data is rare, and various exogenous and endogenous disturbances affect the process; therefore, considering adequate detail in modeling is important [1]. In the big picture, models can be classified as detailed, high-level, and combined, each with its own viewpoint on the problem [6]. Detailed models consider the power network's physics and constraints, aiming to reflect the nature of flow redistribution after an initial failure while accounting for system constraints like line and generator capacities and the decision policies for load balancing. If the flow of a component exceeds its capacity, that component will fail and cause new flow redistribution and possible consequent failures [7]. These models capture the engineering details of the process; however, they typically do not yield holistic insights into the collective behavior of the system, as they follow deterministic system behavior and are bound to currently known knowledge and limited uncertainties. High-level models capture the cascade process's collective behavior, ignoring the details of individual component models and arguing that these details do not affect the system's global behavior. For example, the threshold model assigns a threshold to each node, and if the fraction of a given node's neighbors whose states change during the process exceeds that node's threshold, its state will change [8]. High-level models are more abstract; however, they can quickly predict the possible collective behavior profile of the cascade under various uncertainties. Combined models aim to consider both the details of the system's physics and the holistic analysis of the complex-systems perspective. The authors of [9] propose a combined model by finding a weighted directed interaction graph from the available data under some assumptions and then utilizing these interactions in a probabilistic high-level model to predict the cascade unfolding. This paper extends CASCADE as a probabilistic high-level model [10] to a combined model considering the underlying component failure interactions. We first review CASCADE and the importance of the failure interaction graph in Sect. 2. Section 3 introduces and analyzes interaction-CASCADE to generalize the CASCADE model for an arbitrary interaction structure between the network's components. Section 4 provides numerical studies on the cascade behavior under different interaction graph structures and interaction-aware load assignment before concluding in Sect. 5.
2 CASCADE Model and Interaction Graph
2.1 CASCADE Model
Dobson et al. [10] proposed CASCADE as an analytically tractable high-level model for probabilistic cascading, which captures salient features of large blackouts in power networks. This loading-dependent model predicts an average critical load at which the cascade size has a power-law distribution and beyond which the probability of total failure is significant. CASCADE models how the additional loading after an initial random disturbance may lead to consecutive failures. Let the system have n components with initial independent loads randomly and uniformly distributed according to L ∼ U(Lmin, Lmax), where Lmin and Lmax are the minimum and maximum loads. The system is subject to disturbances; at some point, a small disturbance Δ0 is added to the components' loads. This disturbance may trigger an initial failure if a component's load exceeds a given threshold Lfail. Each failed component incurs an additional load P on the currently survived components, which increases their loads and may cause subsequent failures. The cascade stops when no more overloaded components exist or all components fail. Let j denote the index of the cascade stage, and let Mj and Sj be the random variables of the number of failed components at stage j and the cumulative number of failed components up to and including stage j, Sj = M0 + M1 + ... + Mj. The cascade size is then given by S = SJ, where J is the cascade length, 0 ≤ J ≤ n − 1. Note that the process moves to stage j + 1 if the number of failures at stage j satisfies 0 < Mj < N − Sj−1. Note that if a random variable X ∼ U(Lmin, 1), then X − Lmin ∼ U(0, u0), where u0 = 1 − Lmin. Therefore, without affecting the process dynamics, we can assume for the analysis that the initial loads follow U(0, u0) and Lmax = Lfail = u0. Figure 1 shows a simple tree structure with the relationships among the random variables, where the values beside the branches indicate the corresponding probabilities. At the beginning of stage j, the number of survived components is n − sj−1, and the total transferred overload is Oj = mj−1(n − sj−1)P, which adds Δj = mj−1P to the load of each survived component. The authors of [10] show that S follows the quasibinomial distribution and linked the asymptotic behavior of the process to a saturating form of the generalized Poisson distribution to show the power-law behavior of the cascade size at the critical load. Let p = P/u0 and δj = Δj/u0, j ≥ 0. For large values of n and sufficiently small values of p and δ0 such that λ = np and θ = nδ0 remain constant, the critical minimum load corresponds to λ = 1. From the branching process approximation, λ can be interpreted as the average number of failures per failure in the previous stage; at the critical point, each failure leads to another failure on average. λ = nP/(1 − Lmin) = 1 corresponds to the minimum critical load L∗min = 1 − nP. CASCADE assumes that the transferred load at each stage is distributed uniformly among all active components, neglecting the heterogeneity among different components' roles and interactions. In Sect. 3, we introduce and analyze interaction-CASCADE, where, subject to the same amount of total transferred load, the overload is distributed not among all currently survived components or a selected subset of a given size [11] but to specific neighbors according to an interaction graph.
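As a quick numerical check of the critical-load identities above (using the parameter values introduced later in Sect. 4), a few lines of Python suffice; this is only an illustration, not part of the original analysis.

# CASCADE critical point via the branching approximation: lambda = nP/(1 - Lmin)
n, P = 1000, 0.0004                  # parameter values used in Sect. 4
L_star = 1.0 - n * P                 # minimum critical load: L*min = 1 - nP = 0.6
lam = n * P / (1.0 - L_star)         # lambda evaluated at L*min
print(L_star, lam)                   # prints 0.6 and 1.0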
Fig. 1. Consecutive stages of the CASCADE model.
2.2 Interaction Graph
A failure interaction graph is an abstract graph, different from the network's physical topology, which reflects how one component's failure affects others' failure states. In particular, interaction graphs capture the non-local [12,13] and possibly higher-order [14] interactions of failure cascading in power networks, rooted in the physical constraints. Furthermore, one can follow the effect of a possible change or intervention in the physical network structure, like deploying a new renewable energy source, on the interaction graph and evaluate its impact on the system's global behavior beforehand. Interaction graphs can be constructed using electric-distance-based or data-driven methods. In the former, one can use the physics of flow distribution and its relation to the physical network topology to estimate the interactions. For example, the line outage distribution factors (LODF) are independent of power injection and demand at the buses and indicate how the flow of each line would change after a line failure in terms of physical network topology parameters. Alternatively, we can use real or simulated failure cascading data in consecutive stages or generations to learn the interactions. For example, [15] shows that a generation-dependent interaction model can successfully capture real data statistics, and [12] uses utility outage data to find a Markovian interaction graph that predicts the transitions between generations of cascading line outages. The authors of [16] deployed machine learning schemes to infer the failure interaction graph and its statistical properties for power networks. In the following, we assume that the failure interaction graph of the network is given by a (weighted) directed graph G. The set of out-neighbors of node k in G is denoted by Nkout and indicates the components that incur overload if k fails. Using this abstraction, we can study how different possible interaction structures may affect the cascade behavior in a high-level model, assuming G is a complete, Erdős-Rényi (ER), or scale-free (SF) graph. CASCADE assumes complete homogeneous interactions. In interaction-CASCADE, we consider the ER graph, with limited heterogeneity, and the power-law graph, heterogeneous in the components' out- and in-degrees. In particular, the power-law graph reflects a system with a few components of large out-degree, i.e., which affect many other components after failure, or large in-degree, i.e., which are affected by the failure of many components. Recall that one explanation for the origins of cascades relates their complex behavior to large demands supplied (influenced) by many other small generations [5].
3 Interaction-CASCADE Model
In the Interaction-CASCADE model, we assume that 1) the total amount of overload after each component failure is the same as in the CASCADE model, i.e., the failure of component k at stage j incurs the total overload (N − Sj−1)P; 2) this overload is divided equally among the active outgoing neighbors of k, Nkout, given by the interaction graph; and 3) the initial load of each component is independent of its out- and in-degrees in the interaction graph. Therefore, the incurred overload after the failure of component k at stage j on each of its active out-neighbors is
Δjk = P|Aj| / |Nkout ∩ Aj| = P / αjk,  j ≥ 1,   (1)
where Aj = N − Sj−1 is the set of active components at stage j, αjk = |Nkout ∩ Aj| / |Aj| ≤ 1 is the fraction of affected components after the failure of k, and |A| denotes the cardinality of a set A. Algorithm 1 describes the Interaction-CASCADE model.

Algorithm 1. Interaction-CASCADE model
Input: Lmin, Lmax, Lfail, G (interaction matrix)
Output: process, the list of consecutively failed components
1: Initialize the loads of all N components according to L ∼ U(Lmin, Lmax)
2: Lk ← Lk + Δ0, k = 1, ..., N ▷ add the initial disturbance Δ0 to each component's load
3: j ← 0, Mj ← ∅, S−1 ← ∅, process ← ∅
4: for k ∈ {1, ..., N} do
5:   if Lk > Lfail then
6:     Mj ← Mj ∪ {k}
7:   end if
8: end for
9: while Mj ≠ ∅ do ▷ continue until there is no overloaded component
10:   process.append(Mj)
11:   Sj ← Sj−1 ∪ Mj
12:   Aj ← N − Sj ▷ survived (active) components
13:   for k ∈ Mj do
14:     Δjk ← P|Aj| / |Nkout ∩ Aj| ▷ overload share for the active out-neighbors of k
15:     La ← La + Δjk, a ∈ Nkout ∩ Aj ▷ add the overload share
16:   end for
17:   j ← j + 1, Mj ← ∅
18:   for k ∈ Aj do ▷ check for new overloaded components
19:     if Lk > Lfail then
20:       Mj ← Mj ∪ {k}
21:     end if
22:   end for
23: end while
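A compact Python transcription of Algorithm 1 could look as follows; the adjacency representation (a set of out-neighbors per component) and the toy interaction graph in the usage lines are our assumptions.

import numpy as np

def interaction_cascade(G_out, L_min, L_max, L_fail, P, delta0, rng):
    # G_out[k]: set of out-neighbors of component k in the interaction graph
    N = len(G_out)
    loads = rng.uniform(L_min, L_max, size=N) + delta0       # steps 1-2
    failed, process = set(), []
    overloaded = {k for k in range(N) if loads[k] > L_fail}
    while overloaded:                                        # step 9
        process.append(sorted(overloaded))                   # step 10
        failed |= overloaded
        active = set(range(N)) - failed                      # Aj, step 12
        for k in overloaded:                                 # steps 13-16
            targets = G_out[k] & active
            if targets:                                      # Δjk = P|Aj| / |Nkout ∩ Aj|
                share = P * len(active) / len(targets)
                for a in targets:
                    loads[a] += share
        overloaded = {k for k in active if loads[k] > L_fail}   # steps 18-22
    return process

rng = np.random.default_rng(0)
G = [set(rng.choice(1000, size=50, replace=False)) - {k} for k in range(1000)]  # toy ER-like graph
stages = interaction_cascade(G, 0.6, 1.0, 1.0, P=0.0004, delta0=0.0004, rng=rng)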
In CASCADE, αjk = 1 and Δjk = P for all j and k, and we can analyze the number of overloaded components at each stage based on the total overload of the previous
stage Δj. That is, the survived components' loads at stage j ≥ 1 are distributed according to L ∼ U(Σ_{h=0}^{j−1} Δh, u0), and all components whose loads lie in the interval
(u0 − Δj, u0) will fail. See Fig. 1. In Interaction-CASCADE, however, the failure of component k only affects the loads of Nkout. Therefore, we follow the impact of each component failure on the survived components' load distribution separately. We first show that the conditional marginal distributions of M1 in both models are the same. Note that M1 = 0 ⇐⇒ M0 ∈ {0, n}, and the probability of moving to stage j = 1 is the same for the same initial disturbance. Let 0 < M0 = m0 < n. Recall that the components' loads are independent of their out-degrees in the interaction graph; therefore, each failure uniformly and randomly affects a subset of the components that survived the previous stage. Let u1 = u0 − Δ0. After applying the overload of the first failure, k = 1, the load of a remaining survived component either 1) does not change, with probability 1 − α11, if that component does not belong to Nkout; 2) is increased by Δ11 but does not overload, with probability α11(u1 − Δ11)/u1; or 3) is increased by Δ11 and overloads, with probability α11Δ11/u1. The load distributions of survived components in cases 1 and 2 are U(0, u1) and U(Δ11, u1), respectively. Also, the number of overloaded components is M11 ∼ B(n − m0, α11Δ11/u1). See Fig. 2.
α1k Δ1k of the m0 failures is M1 ∼ B n − m0 , u0 −Δ0 . Using α1k Δ1k = P and m 0 α1k Δ1k = m0 P = Δ1 , we can conclude that conditional distribution of M1
k=1
is the same as CASCADE. We now move to the next stage, j = 2, to show that for a given m0 and m1 , the marginal distributions of M2 in CASCADE and interaction-CASCADE are the same with an illustrative example assuming m0 = 1 and m1 = 2. See Fig. 3. Note that the reasoning can be extended for the general case. The load of n − s1 survived components at stage 2 is either distributed according to U (Δ11 , u1 ) or U (0, u1 ). Excluding the fraction of failed components
Interaction-Dependent
225
Fig. 2. Tree diagram of tracing the overloaded and the load distribution of survived components by applying m0 = 3 failures in stage j = 1 where u1 = u0 − Δ0 .
at stage 1, the fraction of the first segment is the fraction of the second segment is
u1 −Δ11 u1 Δ 1−α11 u11 1
α11
1−α11 Δ 1−α11 u11 1
=
=
α11 (u1 −Δ11 ) u1 −α11 Δ11 .
(1−α11 )u1 u1 −α11 Δ11 .
Similarly,
In the general case
we can compute the fraction of 2m0 possible segments. Next, we investigate the impact of each m1 failure on the fraction of overloaded and the load distribution of the survived components consequently as in stage j = 1. In particular, the overload contributions of the first failure in α21 Δ21 1 2 and M21 ∼ B n − s1 , uα111−α ∼ segments one and two are given by M21 11 Δ11 11 )α21 Δ21 . Thus the overload contribution of the first failure is B n − s1 , (1−α u1 −α11 Δ11 21 Δ21 . Updating the load distribution of the survived M21 ∼ B n − s1 , u1α−α Δ 11 11 components and summing the contribution of possible four branches, we can find 22 Δ22 the overload contribution of the second failure as M22 ∼ B n − s1 , u1α−α . 11 Δ11 Finally by adding the fraction of overloaded components due to m1 failures m 1 α2k Δ2k m 1 k=1 we have M2 ∼ B n − s1 , u1 −α11 Δ11 . Since Δ2 = α2k Δ2k = m1 P and k=1 Δ2 as in the CASCADE model. Δ1 = α11 Δ11 we have M2 ∼ B n − s1 , u0 −Δ 0 −Δ1 At the end of stage j = 2, the survived n − s2 components are divided into 2m0 +m1 segments, and we can use the same argument to show that the subsequent conditional marginal distribution of Mj for the CASCADE and interaction-CASCADE are the same. We conclude this section with a few remarks. First, note that the conditional marginal distribution of Mj for the CASCADE and interaction-CASCADE are the same if the interaction graph is weighted where the incurred overload after the failure of component k at stage j does not evenly distribute among its current where wkh survived outgoing neighbors h ∈ Nkout Aj but in a weighted manner is the fraction of overload which overflow to h ∈ Nkout and wkh = 1. The h∈Nkout
reason is that this scenario is effectively equivalent to changing the in-degree distribution of the interaction graph, which does not change the conditional
marginal distribution. Second, we assume that the initial load of components is independent of the degree distribution of the interaction graph. However, in real systems, we usually consider more spare capacity for the more influential components, whose failure affects many other components (high out-degree components) or those affected by many (high in-degree components). We study the behavior of interaction-CASCADE in these scenarios in the next section. Third, note that even though the conditional marginal distribution of Mj is the same in both models, the joint distribution may differ, meaning that the process dynamics could be completely different based on the interaction graph structure.
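To make these dynamics concrete, the following is a minimal Monte Carlo sketch of interaction-CASCADE (our own illustration, not the authors' code). The even split of a total overload of $Pn$ among the survived out-neighbors is our assumption, chosen so that the expected per-survivor transfer matches the CASCADE parameter $P$.

```python
import numpy as np

rng = np.random.default_rng(0)

def interaction_cascade(out_neighbors, l_min=0.6, p_transfer=0.0004, d0=0.0004):
    """One run of interaction-CASCADE; returns the failures per stage M_0, M_1, ...

    out_neighbors[k] lists the out-neighbors of component k in the interaction
    graph. Loads are normalized to capacity 1; a component fails when load > 1.
    """
    n = len(out_neighbors)
    load = rng.uniform(l_min, 1.0, n) + d0       # initial loads + disturbance
    alive = load <= 1.0
    frontier = np.flatnonzero(~alive)            # components failed at stage 0
    failures = [frontier.size]
    while frontier.size > 0:
        extra = np.zeros(n)
        for k in frontier:
            nbrs = [h for h in out_neighbors[k] if alive[h]]
            if nbrs:                             # even split among survivors
                extra[nbrs] += p_transfer * n / len(nbrs)
        load += extra
        newly_failed = alive & (load > 1.0)
        alive &= ~newly_failed
        frontier = np.flatnonzero(newly_failed)
        failures.append(frontier.size)
    return failures[:-1]                         # drop the final zero stage
```

Taking out_neighbors[k] to be all other components recovers CASCADE, while an ER or static-model scale-free digraph yields ER-CASCADE or SF-CASCADE; the cascade size is S = sum(failures) and the length J = len(failures).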
Fig. 3. Tree diagram tracing the overloaded components and the load distribution of survived components, assuming $m_0 = 1$ and applying $m_1 = 2$ failures in stage $j = 2$, where $u_1 = u_0 - \Delta_0$.
4 Numerical Studies
This section presents a numerical study of interaction-CASCADE and compares its behavior with CASCADE. We assume $n = 1000$ and $P = \Delta_0 = 0.0004$. The predicted minimum critical load in CASCADE is $L^*_{min} = 0.6$, corresponding to the critical average load of $\bar{L} = \frac{1 + L_{min}}{2} = 0.8$ in [10]. We first compare the process behavior with the complete (CASCADE), ER (ER-CASCADE), and power-law scale-free (SF-CASCADE) interaction graphs. For the ER and SF graphs, we assume $m = 50{,}000$ total interaction links, meaning that the graph has mean degree $c = m/n = 50$ and each component, on average, interacts with 5% of the other components. Note that, unlike the complete and ER graphs, which have homogeneous degree distributions, the scale-free graphs allow heterogeneous out- and in-degrees. We use the static model with $m$ links and the desired power-law out- and in-degree distribution, i.e., $p(k) \sim k^{-\gamma}$, $\gamma > 2$ for large $k$, with finite-size correction [18]. Note that as $\gamma \to 2$, the corresponding out- or in-degree is more heterogeneous, and as $\gamma \to \infty$, the corresponding out- or in-degree becomes more like that of the ER graph.
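For concreteness, a minimal sketch of this directed static-model construction (our illustration of the recipe in [18], not the authors' code) is:

```python
import numpy as np

rng = np.random.default_rng(1)

def static_model_digraph(n=1000, m=50_000, gamma_out=2.5, gamma_in=2.5):
    """Directed static model: endpoint i is drawn with probability ~ i**(-mu),
    mu = 1/(gamma - 1), giving p(k) ~ k**(-gamma) for large k."""
    w_out = np.arange(1, n + 1) ** (-1.0 / (gamma_out - 1.0))
    w_in = np.arange(1, n + 1) ** (-1.0 / (gamma_in - 1.0))
    w_out, w_in = w_out / w_out.sum(), w_in / w_in.sum()
    out_neighbors = [set() for _ in range(n)]
    links = 0
    while links < m:                 # reject self-loops and duplicate links
        i = rng.choice(n, p=w_out)
        j = rng.choice(n, p=w_in)
        if i != j and j not in out_neighbors[i]:
            out_neighbors[i].add(j)
            links += 1
    return [sorted(s) for s in out_neighbors]
```

Feeding the resulting out_neighbors into the cascade sketch above gives SF-CASCADE; large values of both exponents degenerate toward homogeneous, ER-like degrees.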
Here, we set $2 < \gamma_{out} = \gamma_{in} < 3$, indicating that there are a few hubs with large out-degrees, whose failures propagate to many others, and a few hubs with large in-degrees, which will be affected by many others in the interaction graph.

4.1 Interaction-Independent Load Distribution
We run and evaluate the behaviors of CASCADE, ER-CASCADE, and SF-CASCADE with $\gamma_{out} = \gamma_{in} = 2.5$ for the same 300 initial random loads. Figure 4a shows the complementary cumulative distribution function (CCDF) of the cascade size, $S$, and length, $J$, for $L_{min} = 0.5, 0.6, 0.7$. At the minimum critical load of $L_{min} = 0.6$, the cascade size and length show heavy-tailed distributions. The cascade length below and above the minimum critical load is limited, i.e., the process stops, or all components fail early, with high probability. The fitted power-law model for the cascade size has exponent $\alpha_S \approx 1.5$; the cascade length has exponent $\alpha_J \approx 1.6$ [17]. The left panel of Fig. 4b shows the average and the range of the standard deviation of the number of failures at consecutive stages of the processes at the critical load for the three studied interaction graphs, indicating different dynamics. Note that for small values of $j \lesssim 10$, where we have sufficiently representative samples for each scenario, the numerical means of the $M_j$ are the same. Also, the power-law interactions cause larger standard deviations and less predictable behavior at different stages of cascades compared to the complete and ER interaction graphs. The right panel of Fig. 4b shows the fraction of failed components against the minimum load, indicating that the minimum critical loads for the three scenarios are the same.

4.2 Interaction-Dependent Load Distribution
One expects that, by engineering, the more influential components with higher out-degree should have more slack capacity and a smaller normalized load, to mitigate the propagation of failure to other components after a failure. Similarly, if more spare capacity is assigned to the components with higher in-degree to decrease their initial normalized load, they will tolerate more load before failure, which may stop the cascade at earlier stages. In interaction-CASCADE, the contribution of each failure to triggering other failures is independent of its load and out-degree. Therefore, we expect that, as long as the out- and in-degree distributions of the interaction graph are independent, assigning a smaller initial normalized load to components with higher out-degree does not change the gross behavior of the process. In contrast, the in-degree of each component determines the probability with which it gets overloaded by the current failures. In particular, if we follow a directed link of a randomly chosen component, the in-degree of the target components will be skewed towards components with higher in-degrees according to the excess degree distribution of the interaction graph. We run interaction-CASCADE with power-law networks, varying their degree distribution heterogeneity ($\gamma_{in} = \gamma_{out} = 2.2, 3.0$), with out- and in-degree aware
Fig. 4. Comparison of CASCADE (solid lines), ER-CASCADE (dashed lines), and SF-CASCADE (dotted lines): (a) CCDF of the cascade size and cascade length for $L_{min} = 0.5, 0.6, 0.7$, where solid circles at the end of cascade size curves show total failure; (b) average and range of the standard deviation of failures at consecutive stages for $L_{min} = 0.6$, and the fraction of average total failures for different values of $L_{min}$.
initial load assignment, and compare the results with random load assignment. In degree-aware load assignment, we sort the initial loads in ascending order and the components according to their out- or in-degrees in descending order, and assign the smaller loads to components with a higher degree. Figure 5a shows that the minimum critical load does not depend on the components' out-degrees but shifts to higher values with in-degree aware load assignment, as we expected. The reason for the higher minimum critical load for the power-law interaction graph is rooted in the more heterogeneous nature of the corresponding excess degree of the interaction graph. Let us pick a node randomly and follow one of its edges. The excess degree refers to the target node's degree (more exactly, the degree minus one) [19]. It is well known that the excess degree and the degree are asymptotically the same for ER graphs and follow a Poisson distribution. Also, if the in-degree follows a power law with parameter $\gamma$, the excess in-degree distribution will asymptotically follow a power law with parameter $\gamma - 1$, i.e., the excess in-
degree is more heterogeneous. See Fig. 5b. Therefore, in the case of power-law interaction, the current overload is skewed toward nodes with higher excess in-degree, i.e., nodes with lower loads. However, for the ER graph, the target node has a typical excess in-degree that is concentrated near the mean degree with limited variance. Figure 5c shows the target node's load distribution when smaller loads are assigned to nodes with higher in-degrees. For uniform load samples $U(L_{min}, 1)$ and random load assignment, we have $\bar{L} = \frac{1 + L_{min}}{2}$, i.e., $\bar{L} = 0.5$ for $U(0, 1)$ load samples. We compute the average load numerically for Fig. 5c and find that the average observed load is skewed from 0.5 to $\bar{L}_{ER} = 0.46$ for the ER graph and to $\bar{L}_{power\text{-}law} = 0.32$ for the power-law graph with $\gamma = 2.2$. This decrease in the average load of the target component, while the average load of all components remains the same, effectively increases the minimum critical load.
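This size-biasing is easy to check numerically; below is a minimal sketch (our illustration, not from the paper) comparing the in-degree of a uniformly chosen node with that of the node reached by following a uniformly chosen link, which is weighted by $k$ and therefore follows an exponent of roughly $\gamma - 1$.

```python
import numpy as np

rng = np.random.default_rng(2)

def degree_vs_link_target(gamma=2.2, n=100_000, k_min=1):
    """Power-law in-degrees vs. the in-degree seen by following a link."""
    u = rng.random(n)                              # inverse-transform sampling
    k = np.floor(k_min * (1.0 - u) ** (-1.0 / (gamma - 1.0))).astype(int)
    # A random incoming link lands on a node with probability ~ its in-degree
    target_k = rng.choice(k, size=n, p=k / k.sum())
    return k, target_k

k, target_k = degree_vs_link_target()
print(k.mean(), target_k.mean())   # link targets have much larger in-degree
```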
Fig. 5. Effect of interaction-aware component loads on the minimum critical load: (a) with random, out- and in-degree aware initial loads; (b) the degree and excess degree distributions of the ER and power-law ($\gamma = 2.2$) graphs; (c) the load distribution of the target component in the ER and power-law interaction graphs.
5 Conclusion and Future Works
CASCADE is a load-dependent probabilistic model that describes how an initial failure may lead to cascading failures in power systems with homogeneous interactions. We present and evaluate interaction-CASCADE to account for the heterogeneity of the interaction structure among the system's components and how it affects the average critical load and the process dynamics. We discuss how in-degree-aware load distribution, in particular for power-law interactions, can shift the minimum critical load. Future work will consider more aspects of interaction-CASCADE to better understand the nature of failure cascading in networked systems.

Acknowledgment. The work of A. Ghasemi was supported by the Alexander von Humboldt Foundation (Ref. 3.4 - IRN - 1214645 -GF-E) for his research fellowship at the University of Passau in Germany.
References

1. Bialek, J., et al.: Benchmarking and validation of cascading failure analysis tools. IEEE Trans. Power Syst. 31(6), 4887–4900 (2016)
2. Carreras, B.A., Newman, D.E., Dobson, I., Poole, A.B.: Evidence for self-organized criticality in a time series of electric power system blackouts. IEEE Trans. Circ. Syst. I Regul. Pap. 51(9), 1733–1740 (2004)
3. Dobson, I., Carreras, B.A., Lynch, V.E., Newman, D.E.: Complex systems analysis of series of blackouts: cascading failure, critical points, and self-organization. Chaos Interdiscip. J. Nonlinear Sci. 17(2), 026103 (2007)
4. Carlson, J.M., Doyle, J.: Highly optimized tolerance: robustness and design in complex systems. Phys. Rev. Lett. 84(11), 2529 (2000)
5. Nesti, T., Sloothaak, F., Zwart, B.: Emergence of scale-free blackout sizes in power grids. Phys. Rev. Lett. 125(5), 058301 (2020)
6. Guo, H., Zheng, C., Iu, H.H.-C., Fernando, T.: A critical review of cascading failure analysis and modeling of power system. Renew. Sustain. Energy Rev. 80, 9–22 (2017)
7. Motter, A.E., Lai, Y.-C.: Cascade-based attacks on complex networks. Phys. Rev. E 66(6), 065102 (2002)
8. Watts, D.J.: A simple model of global cascades on random networks. Proc. Natl. Acad. Sci. 99(9), 5766–5771 (2002)
9. Qi, J., Sun, K., Mei, S.: An interaction model for simulation and mitigation of cascading failures. IEEE Trans. Power Syst. 30(2), 804–819 (2014)
10. Dobson, I., Carreras, B.A., Newman, D.E.: A loading-dependent model of probabilistic cascading failure. Probab. Eng. Inf. Sci. 19(1), 15–32 (2005)
11. Dobson, I., Carreras, B.A., Newman, D.E.: Probabilistic load-dependent cascading failure with limited component interactions. In: 2004 IEEE International Symposium on Circuits and Systems (IEEE Cat. No. 04CH37512), vol. 5, p. V. IEEE (2004)
12. Zhou, K., Dobson, I., Wang, Z., Roitershtein, A., Ghosh, A.P.: A Markovian influence graph formed from utility line outage data to mitigate large cascades. IEEE Trans. Power Syst. 35(4), 3224–3235 (2020)
13. Hines, P.D.H., Dobson, I., Rezaei, P.: Cascading power outages propagate locally in an influence graph that is not the actual grid topology. IEEE Trans. Power Syst. 32(2), 958–967 (2016)
14. Ghasemi, A., Kantz, H.: Higher-order interaction learning of line failure cascading in power networks. Chaos Interdiscip. J. Nonlinear Sci. 32(7), 073101 (2022)
15. Qi, J.: Utility outage data driven interaction networks for cascading failure analysis and mitigation. IEEE Trans. Power Syst. 36(2), 1409–1418 (2020)
16. Ghasemi, A., de Meer, H., Kantz, H.: Interaction graph learning of line cascading failure in power networks and its statistical properties. Energy Inform. 6(Suppl. 1), 17 (2023). https://doi.org/10.1186/s42162-023-00285-0
17. Alstott, J., Bullmore, E., Plenz, D.: powerlaw: a Python package for analysis of heavy-tailed distributions. PLoS ONE 9(1), e85777 (2014)
18. Cho, Y.S., Kim, J.S., Park, J., Kahng, B., Kim, D.: Percolation transitions in scale-free networks under the Achlioptas process. Phys. Rev. Lett. 103(13), 135702 (2009)
19. Newman, M.: Networks. Oxford University Press (2018)
Detecting Critical Streets in Road Networks Based on Topological Representation

Masaki Saito1, Masahito Kumano2, and Masahiro Kimura2(B)

1 Division of Electronics and Informatics, Graduate School of Science and Technology, Ryukoku University, Otsu, Japan
2 Faculty of Advanced Science and Technology, Ryukoku University, Otsu, Japan
{kumano,kimura}@rins.ryukoku.ac.jp

Abstract. We provide a novel problem of analyzing geographical road networks from the perspective of identifying critical streets for vehicular evacuation. In vehicular evacuation during disasters and emergency situations, the shortest-distance routes are not necessarily the best. Instead, routes that are easier to traverse can be more crucial, even if they involve detours. Furthermore, evacuation destinations need not be limited to conventional facilities or sites; wider and better-maintained streets can also be suitable. Therefore, we focus on streets as basic units of road networks, and address the problem of finding critical streets in a geographical road network, considering a scenario in which people efficiently move from specified starting intersections located around their residences to designated goal streets, following the routes of easiest traversal. In this paper, we first model a road network as a vertex-weighted graph obtained from its topological representation, where vertices and edges represent streets and intersections between them, respectively. The weight of each vertex reflects its ease of traversal. Next, we extend a recently introduced edge-centrality measure, salience, to our problem, and propose a method of detecting critical streets based on the vertex-weighted graph of the topological representation by incorporating the notion of a damping factor into it. Using a real-world road network obtained from OpenStreetMap, we experimentally reveal the characteristics of the proposed method by comparing it with several baselines.

Keywords: Critical Streets · Evacuation · High-Salience Skeleton · Road Network Analysis · Topological Representation
1 Introduction
As the collection of large-scale geographical data, including transportation and road networks in urban areas, has progressed, researchers have been utilizing network science methodologies to analyze these geographical networks [4]. Conventionally, a geographical road network is modeled as an edge-weighted graph obtained from its geometric representation in network science, where vertices and edges represent intersections and road segments between adjacent intersections,
respectively. The weight of each edge indicates the real distance of its corresponding road segment. Previous work employed the edge-weighted graphs of the geometric representation and analyzed the statistical properties of geographical road networks using traditional centrality measures such as degree, betweenness, and the clustering coefficient [1]. However, Ma et al. [9] argued that, from the perspective of predicting human behavior, it is essential for road network analysis to center around entities recognizable to people, such as streets, instead of fragmented road segments. Although there are various road categories such as "street", narrower "way", and broader "boulevard", we refer to all these road categories as street for simplicity. This definition allows us to consider streets for vehicular evacuation in terms of both their narrowness and difficulty of passage and their breadth and ease of passage. This perspective holds the potential to enhance road network analysis. While there have been a handful of studies focusing on streets for road network analysis in the past [1,9], to our knowledge, there have been no studies that specifically address the ease of passage on streets.

In network science, while many studies have been conducted to identify important edges in a graph, Grady et al. [6] pointed out that conventional methods typically rely on external or arbitrary parameters to extract meaningful structural features. In response, they introduced the salience of an edge in a graph and investigated the difference between edge salience and edge betweenness for a variety of networks from transportation, biology, sociology, and economics. As a result, while the edge-betweenness distribution exhibited exponential trends, the edge-salience distribution displayed bimodality. The peak region containing edges with high salience in the bimodal distribution allows us to easily identify important edges. They also provided the concept of the high-salience skeleton (HSS) and demonstrated its effectiveness for various real networks. For example, the HSS effectively summarized global air-traffic networks and successfully identified the infectious pathways in a social network for stochastic epidemic models. Due to its inherent potential, the HSS is expected to be applicable across a broader range of fields and challenges.

In this paper, we provide a novel problem of analyzing geographical road networks from the perspective of identifying critical streets for vehicular evacuation. When evacuating by vehicle during emergencies, the routes with the shortest distance are not always the most beneficial. Instead, there are cases where routes that offer easier traversal, even if they are longer, are more advantageous. Furthermore, evacuation destinations need not be restricted solely to conventional facilities or sites; wider and better-maintained streets might also be viable options. Therefore, we focus on streets as basic units of road networks, and address the problem of identifying critical streets in a geographical road network, considering a scenario in which people efficiently move from specified starting intersections located around their residences to designated goal streets, following the routes of easiest traversal. In this paper, we first model a road network as a vertex-weighted graph obtained from its topological representation, where vertices and edges represent streets and intersections between them,
respectively. The weight of each vertex reflects its ease of traversal. Next, we extend a recently introduced edge-centrality measure, salience, to our problem, and propose a method of detecting critical streets based on the vertex-weighted graph of the topological representation by incorporating the notion of a damping factor into it. We employ real-world road networks obtained from OpenStreetMap and experimentally investigate the characteristics of the proposed method by comparing it with several baselines.
2 Related Works
In recent years, there has been growing interest in applying complex network science methods to geographical networks in the fields of geography, regional science, and urban science [4]. For instance, Montis et al. [10] statistically analyzed the relationship between commuting flows and the structure of a weighted railway network, where towns are represented as nodes and commuting flows as weighted links. They examined the degree distribution as a centrality measure, demonstrating that it follows a power-law distribution. Furthermore, Porta et al. [11] analyzed the correlation between street-based networks and the density distribution of commercial and service activities in cities. They demonstrated a correlation between the global centrality metric betweenness [5] of the street network and the distribution of human activities. Crucitti et al. [1] constructed road networks not only based on the conventional representation, where intersections are treated as nodes and road segments between intersections as links, but also through a topological road representation, where streets are considered as nodes and intersections as links, and analyzed the urban structures of various cities using four centrality measures, including degree and betweenness. They demonstrated that self-organized cities and planned cities exhibit different tendencies in terms of scale-free characteristics. However, to the best of our knowledge, there has been no study capturing road networks from the structural perspective of the HSS proposed by Grady et al. [6]. Incidentally, Ding et al. [2] have pointed out that although complex network science has contributed to research on urban transportation networks in recent years, there is a lack of comprehensive frameworks for integrating various methods. In this study, we introduce a method that integrates the HSS, which has typically been applied to conventional distance-based network representations, with a topological road representation focusing on the connectivity between streets rather than distance. Our approach tackles the previously unaddressed issue of identifying crucial streets in urban evacuation scenarios, setting it apart from prior research. It should be noted that, in road network data, there are instances where the attribute information defining a 'street' can be ambiguous. This presents a challenge in automatically determining the start and end points of a street from the data. While Jiang et al. [7] have tackled the challenge of constructing algorithms to extract natural streets, for simplicity we consider the 'Way' elements from OpenStreetMap as 'streets' in our experiments.
Fig. 1. Example of a geographical road network, an edge-weighted graph obtained from its geometric representation (GR), and a vertex-weighted graph obtained from its topological representation (TR).
On the other hand, research applying network science to evacuation-route problems is also gaining attention. Do et al. [3] and Julianto et al. [8] have addressed the problem of route selection considering lanes in their analyses of evacuation routes assuming travel by car. However, these studies focus on the local problem of choosing which lanes to take on a single road with multiple lanes. In contrast, our study is fundamentally different, as it aims to extract critical streets for vehicular evacuation within an urban road network.
3 Preliminaries
We analyze a geographical road network within a city from an evacuation perspective (see Fig. 1). Here, the road network consists of streets {Sm } and intersections {In } (see Fig. 1a), and we assume that people move from specified starting intersections to designated goal streets. In Fig. 1a, two optimal routes from a starting intersection I1 to a goal street S5 are illustrated. Conventionally, the road network is modeled as an edge-weighted graph Gg = (Vg , Eg , W Eg ) that is obtained from its geometric representation, where Vg is the set of vertices corresponding to intersections, and Eg is the set of edges corresponding to road segments that indicate the portions between two contiguous intersections along streets (see Figs. 1a and b). Also, W Eg is a weight function defined on Eg , and the weight of each edge indicates the real distance of its corresponding road segment (see Fig. 1b). Here, from the viewpoint of graph Gg , the shortest path from I1 to S5 in Fig. 1a is given by I1 , I2 , I3 , I4 (see Figs. 1a and b). Note that this represents the route of the shortest distance. On the other hand, the road network can also be modeled as a vertexweighted graph G = (V, E, W V ) that is obtained from its topological representation, where V is the set of vertices corresponding to streets, and E is the set of edges corresponding to those intersections at which two streets intersect (see Figs. 1a and c). When two streets intersect multiple intersections, we adopt one of such intersections as the edge between the two vertices (i.e., the two streets). Note that when more than two streets intersect at an intersection, the intersection can become multiple distinct edges in the graph G. Also, W V is a weight
function defined on V , and the weight W V (Sm ) > 0 assigned to each vertex (i.e., each street) Sm reflects its ease of traversal; smaller values of W V (Sm ) indicate that the street Sm is more easily passable. In the experiments, we set the reciprocal of the number of lanes in street Sm as W V (Sm ). Here, from the viewpoint of graph G, the shortest path from I1 to S5 in Fig. 1a is given by S4 , S5 (see Figs. 1a and c). Note that this represents the route of the easiest traversal. In this paper, we focus on the graph G of the topological representation.
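To make this construction concrete, here is a minimal sketch (our own, not the authors' code) of building the vertex-weighted TR graph and finding a route of easiest traversal with networkx; the toy street/lane data are hypothetical. Since networkx weights edges, we charge each step the weight of the street being entered, which reproduces the vertex-weighted path cost up to the source vertex's weight (a constant for a fixed start).

```python
import networkx as nx

# Hypothetical streets with lane counts, and intersections as street pairs
streets = {"S1": 2, "S2": 1, "S3": 2, "S4": 4, "S5": 4}   # street -> lanes
intersections = [("S1", "S2"), ("S1", "S4"), ("S2", "S3"),
                 ("S3", "S4"), ("S4", "S5")]

G = nx.Graph()
for s, lanes in streets.items():
    G.add_node(s, weight=1.0 / lanes)   # smaller weight = easier to traverse
G.add_edges_from(intersections)

def step_cost(u, v, edge_attrs):
    return G.nodes[v]["weight"]         # cost of entering street v

print(nx.dijkstra_path(G, "S1", "S5", weight=step_cost))
# ['S1', 'S4', 'S5']: the route prefers the wide four-lane streets
```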
4 Detection Method
We begin by formalizing our road network analysis problem. Next, for our problem, we present a novel analysis method, which is naturally derived from the notion of the HSS [6]. We also provide three alternative analysis methods for comparison purposes.

4.1 Problem Formulation
We consider the geographical road network within a specific city, RN, and focus on a situation in which people efficiently move from specified starting intersections to designated goal streets along optimal routes such as evacuation routes. For the road network RN, let I = {In} and S = {Sm} denote the sets of all intersections and streets, respectively. We fix the set of starting intersections, I start ⊂ I, and the set of goal streets, S goal ⊂ S. We can also consider the population ρ(Ii) around each starting intersection Ii ∈ I start. Our final aim is to identify the critically important streets in the road network RN in this situation. In this paper, we utilize the vertex-weighted graph G = (V, E, WV) obtained from the topological representation of the road network RN. Let E start ⊂ E and V goal ⊂ V denote the counterparts of I start and S goal, respectively. Then, our analysis problem is formulated as detecting the critical vertices in G under the population distribution ρ(ei) (∀ei ∈ E start), considering the movement of people from E start to V goal along the shortest paths in G. Here, the distribution is normalized so that $\sum_{e_i \in E^{start}} \rho(e_i) = 1$.
4.2 Critical Vertices Detection Based on High-Salience Skeleton
For an edge-weighted graph, Grady et al. [6] introduced the notion of the high-salience skeleton (HSS) by proposing the salience of each edge, according to the idea of identifying the average shortest-path tree from a reference vertex to the remaining vertices in the graph. We extend their work [6] to the case of our analysis problem. We introduce a new score f(vm) for each vertex vm ∈ V\V goal, and use this score to detect the critical vertices in the graph G. We refer to this detection method as topological representation based HSS (TR-HSS). Below, we provide the definition of the score f(vm) of a vertex vm ∈ V\V goal in TR-HSS. For each starting edge ei ∈ E start, let v+(ei) and v−(ei) denote the
two vertices that ei connects in G. For each vj ∈ V goal, we examine the shortest paths from both v+(ei) and v−(ei) to vj, and select the shorter ones as the shortest paths from ei to vj. Note that, in principle, there can be multiple shortest paths from ei to vj. In this way, we construct the shortest-path tree from ei to V goal, represented as T(ei) = (T1(ei), . . . , TM(ei)). Here, M = |V| is the number of vertices in G (i.e., the number of streets in the road network RN), and each Tm(ei) is defined as follows: Tm(ei) = 1 if vertex vm is part of at least one of the shortest paths from ei to V goal, and Tm(ei) = 0 if it is not. In line with our analysis purpose, Tj(ei) is always set to 0 for all goal vertices vj ∈ V goal. Note that T(ei) indicates which vertices are included in the most efficient routes from ei to V goal in the graph G. Similar to the case of salience [6], we introduce the score f(vm) of each vertex vm in TR-HSS as
$$f(v_m) = C \sum_{e_i \in E^{start}} \rho(e_i)\, T_m(e_i), \qquad (1)$$
where C is the normalization constant ensuring that the maximum value of f(vm) is 1.

Next, we further enhance the TR-HSS by carefully examining whether the shortest paths from a starting edge ei ∈ E start to a goal vertex vj ∈ V goal include the shortest paths from ei to goal vertices other than vj. We believe that this type of analysis is important since reaching the nearest safe locations is significant from the perspective of evacuation. To this end, we propose appropriately reducing the importance of vertices that appear after a goal vertex other than vj along the shortest paths from starting edge ei to goal vertex vj. By introducing a damping factor p (0 ≤ p ≤ 1), we modify the definition of the shortest-path tree T(ei) = (T1(ei), . . . , TM(ei)) from ei to V goal as follows:

– Tm(ei) = 1 if there is a shortest path from ei to V goal that includes vertex vm and in which no goal vertex appears before vm.
– Tm(ei) = p^k (1 ≤ k ∈ Z) if there is a shortest path from ei to V goal that includes vertex vm and in which k goal vertices appear before vm, but there is no shortest path from ei to V goal that includes vm and in which ℓ goal vertices appear before vm (0 ≤ ℓ < k).
– Tm(ei) = 0 if vm is not included in any of the shortest paths from ei to V goal.

Once again, we assume that Tj(ei) = 0 for all goal vertices vj ∈ V goal. Then, based on the score f(vm) of vertex vm given by Eq. (1), the enhanced TR-HSS detects the critical vertices in G. Here, note that the enhanced TR-HSS with p = 1 is the same as the original TR-HSS.
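A compact sketch of this scoring (our own illustration, not the authors' implementation) is given below. It assumes a single shortest path per (starting edge, goal) pair, which is typical with real-valued vertex weights, and uses the entered-vertex edge cost shown in Sect. 3 (our reduction, not specified in the paper); start_edges are the (u, v) endpoint pairs of E start, and rho maps each to its normalized population.

```python
import networkx as nx
from collections import defaultdict

def tr_hss_scores(G, start_edges, goals, rho, p=0.0, weight="weight"):
    """Score streets (vertices of the TR graph) following Eq. (1)."""
    def step_cost(u, v, d):
        return G.nodes[v][weight]              # charge the entered street

    goal_set, scores = set(goals), defaultdict(float)
    for e in start_edges:
        contrib = {}                           # vertex -> T_m(e) = p**k
        for g in goals:
            best = None                        # shorter endpoint's path to g
            for src in e:
                try:
                    c, pth = nx.single_source_dijkstra(G, src, g, weight=step_cost)
                    if best is None or c < best[0]:
                        best = (c, pth)
                except nx.NetworkXNoPath:
                    continue
            if best is None:
                continue
            k = 0                              # goals passed before vertex v
            for v in best[1]:
                if v in goal_set:
                    k += 1                     # damp everything after a goal
                elif p ** k > contrib.get(v, 0.0):
                    contrib[v] = p ** k        # keep the least-damped value
        for v, t in contrib.items():
            scores[v] += rho[e] * t
    m = max(scores.values(), default=1.0)
    return {v: s / m for v, s in scores.items()}   # normalize max to 1
```

Setting p = 0 gives the strictest variant used in the experiments; p = 1 recovers the original TR-HSS.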
4.3 Baseline Methods
To investigate the properties of TR-HSS in detail, we provide three baseline detection methods. Rather than focusing on the shortest-path tree T (ei ) from a starting edge ei ∈ E start to all goal vertices vj ∈ V goal , we consider using the shortest-path list
from ei to V goal, represented as L(ei) = (L1(ei), . . . , LM(ei)). Here, each Lm(ei) is the number of shortest paths from ei to V goal in which vertex vm is included. Then, we employ the score f(vm) of vertex vm obtained by substituting L(ei) for T(ei) in Eq. (1), and detect the critical vertices in G. This detection method is essentially equivalent to the traditional betweenness centrality since, unlike in graphs with homogeneous weights, the shortest path between a given pair of vertices is typically unique in heterogeneous graphs with real-valued weights. Thus, we refer to this detection method as topological representation based betweenness (TR-BTW). In a similar fashion to TR-HSS, we enhance the TR-BTW by incorporating the damping factor p to reduce the significance of streets appearing after other goal streets.

For the conventional analysis of the road network RN, the edge-weighted graph Gg = (Vg, Eg, WEg), obtained from the geometric representation of RN, is often utilized. By naturally transferring the TR-HSS and TR-BTW from the graph G of the topological representation to the graph Gg of the geometric representation, we obtain two methods of detecting the critical road segments in the road network RN (i.e., the critical edges in Gg). For our problem of detecting the critical streets in the road network RN, we leverage these two methods and identify those streets by finding the critical road segments included in them. We refer to these methods as geometric representation based HSS (GR-HSS) and geometric representation based betweenness (GR-BTW). Note that the GR-HSS and GR-BTW are regarded as straightforward applications of the conventional HSS and edge-betweenness centrality to our road network analysis problem, respectively.
5 Experiments
Using real road data from two cities, we experimentally evaluate the proposed TR-HSS with damping factor p by comparing it to the baseline methods TR-BTW, GR-HSS, and GR-BTW in terms of critical street detection. Furthermore, we give a detailed analysis of the critical streets detected by the TR-HSS with p = 0.

5.1 Datasets and Settings
We used OSM (OpenStreetMap) road data as the real road network data and constructed two road network datasets, one for Kyoto City in Japan, which is characterized by a grid-like road network, and the other for Paris in France, as an example of a famous city. The road data in the target areas of Kyoto and Paris were extracted from the regions bounded by the latitudes and longitudes of the northwest and southeast corners. In OSM, each "Way" element consists of nodes corresponding to drawing points on the map and is stored as a road unit, which can include multiple intersections. For simplicity, we designated the "Way" elements as the streets and identified the intersections between Way elements based
Table 1. Fundamental statistics of datasets
| City  | Northwestern corner (Lat, Lon) | Southeastern corner (Lat, Lon) | # of vertices (GR) | # of vertices (TR) | # of edges (GR) | # of edges (TR) | # of streets (Goals) |
|-------|--------------------------------|--------------------------------|--------------------|--------------------|-----------------|-----------------|----------------------|
| Kyoto | (35.07, 135.67)                | (34.93, 135.83)                | 71,031             | 41,332             | 88,948          | 71,031          | 915                  |
| Paris | (48.91, 2.21)                  | (48.80, 2.49)                  | 97,913             | 61,178             | 93,913          | 97,970          | 1,639                |
Fig. 2. Distributions of street scores for the case of V goal = V and of road segment scores for the case where the set of goals consists of all road segments.
on their drawing points. However, there are instances where side roads or sidewalks run parallel to a street. We excluded such side roads and sidewalks. Furthermore, in the graph of the GR, the edge weights of road segments were set to their actual distances, calculated using Hubeny's formula from the latitude and longitude information associated with the drawing points of OSM's Way elements.

In the experiments, all intersections were specified as the starting intersections belonging to I start. As the goal streets belonging to S goal, we adopted the streets with four or more lanes. Note that E start = E and V goal ⊂ V in the topological representation G = (V, E, WV). The population distribution ρ(Ii) (Ii ∈ I start) was assumed to be uniform. Table 1 summarizes the fundamental statistics of the datasets: the target areas and the numbers of vertices, edges, and goal streets for both the geometric representation (GR) and the topological representation (TR).

5.2 Results of Street Score Distribution
We first investigated the street score distribution for the case of V goal = V and the road segment score distribution for the case where the set of goals consists of all road segments. Figure 2 shows these distributions for the two cities, Kyoto (Fig. 2a) and Paris (Fig. 2b), where the street score distribution is displayed for the TR-HSS and TR-BTW, and the road segment score distribution is presented for the GR-HSS and GR-BTW. We see that both the street score distribution for the TR-HSS and the road segment score distribution for the GR-HSS are bimodal. Note that this kind of bimodal property for
Fig. 3. Distribution of street scores for the TR-HSS and TR-BTW.
salience distribution is commonly observed in various real network data, as earlier mentioned in Sect. 1 (see [6]). On the other hand, we see that both the street score distribution for the TR-BTW and the road segment score distribution for the GR-BTW exhibit an exponentially decreasing nature. Note that this kind of property is typical for the betweenness distribution in real network data (see [6]). These results imply that the properties of salience and betweenness distributions also hold for geographical road networks, regardless of their topological and geometric representations. We next investigate the effect of the damping factor p for the TR-HSS and TR-BTW for the two cities (see Fig. 3), where the streets designated as goals possess four or more lanes. As for the results of the TR-BTW (see Fig. 3a and b), the introduction of the damping factor p did not significantly alter the distribution trend. On the other hand, as shown in Fig. 3c and d, the bimodality for the TR-HSS with p = 1 was still present, albeit less pronounced. However, when p became less than 1, the bimodal characteristics vanished rapidly, and the characteristics of the HSS distribution began to align with the trends observed in the betweenness distribution (see Fig. 3a and b). These trends yielded equivalent results for both Kyoto and Paris. However, even if the score distributions are similar, it does not necessarily mean that the detected streets are similar. Next, we will assess the overlap of critical streets identified by methods to be analyzed using the Jaccard index.
Fig. 4. Comparison results of the TR-HSS with p = 0 and that with other values of p (p = 0.25, 0.5, 1) in terms of critical street detection.
Fig. 5. Comparison results of the TR-HSS with p = 0 and the baseline methods in terms of critical street detection.
5.3 Comparison Results of Critical Street Detection Methods
Using the Jaccard index, we measured how similar the critical streets detected by the TR-HSS with p = 0 are to those detected by the TR-HSS with other values of p (see Fig. 4). As p increases, the difference in critical street detection between the TR-HSS with p = 0 and that with other values of p grows. This suggests that the extracted critical streets depend more strongly on the value of p the higher they rank. Moreover, the effect of p varies across cities. These results imply that the introduction of p is crucial for extracting critical streets.

Furthermore, using the Jaccard index, we measured how similar the critical streets detected by the TR-HSS with p = 0 are to those detected by each of the TR-BTW with p = 0, the GR-HSS, and the GR-BTW. Here, "Basic" stands for the GR-HSS in the case of S goal = S. Figure 5 shows the results. We can see that, for the TR-HSS with p = 0, there is almost no overlap with any of the baselines at the top ranks. This suggests that the critical streets detected by the TR-HSS with p = 0 are significantly distinct from those detected by the baseline methods.

Figure 6 displays a portion of the Kyoto City map. The city center of Kyoto is situated beyond the upper left of this map. A mountain in the middle of the
Fig. 6. Example of critical streets S1 , S2 , S3 , and S4 detected by the TR-HSS with p = 0 but not detected by the baseline methods.
map separates two regions of Kyoto City. The thick solid lines represent the goal streets with four or more lanes. Note that on the map, the “Goal Street” on the right side is a bypass with fewer access roads. The streets S1 , S2 , S3 , and S4 are among the top 50 critical streets detected only by the TR-HSS with p = 0. Note that S1 is a tunnel that pierces through the mountain, S2 is a road containing a junction between underground and surface, S3 and S4 are part of a connection road leading to a highway. By using narrower streets, we can identify several routes that connect the region on the right side of the map to the center of Kyoto City in a shorter distance than using street S1 . Note that there are many goal streets around the center of Kyoto City. It is known that the street S1 is used as a detour route when heading from the area on the right to the goal in the upper left on the map. Here, we emphasize that our approach identified these critical streets that the baseline methods failed to detect. These results demonstrate the significance of our proposed method.
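The overlap measurements in this subsection amount to a top-k Jaccard comparison of two score rankings; a minimal sketch (our illustration, not the authors' code) is:

```python
def topk_jaccard(scores_a, scores_b, k):
    """Jaccard index of the top-k items of two score dictionaries."""
    top = lambda s: set(sorted(s, key=s.get, reverse=True)[:k])
    a, b = top(scores_a), top(scores_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# e.g. compare TR-HSS with p = 0 against p = 0.5 over the top-50 streets,
# reusing the tr_hss_scores sketch from Sect. 4.2:
# topk_jaccard(tr_hss_scores(G, E_start, goals, rho, p=0.0),
#              tr_hss_scores(G, E_start, goals, rho, p=0.5), k=50)
```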
6 Conclusion
We provided a novel problem of analyzing geographical road networks from the perspective of identifying critical streets in the context of vehicular evacuation during disasters and emergency situations. For evacuation by vehicle, we posited that the shortest distance to the destination is not necessarily the optimal criterion. Additionally, the accessibility of the evacuation site is equally vital. We also considered that evacuation destinations do not necessarily have to be conventional facilities or sites; wider and better-maintained streets can also be relevant. Therefore, we focused on streets as basic units of road networks and addressed the problem of finding critical streets in a geographical road network. Moreover, we considered a situation where people efficiently move from specified starting intersections to designated goal streets via the most accessible routes. In this paper, we modeled a road network as a vertex-weighted graph obtained from its topological representation, where vertices and edges represent streets
and intersections between them, respectively. The weight of each vertex reflects its accessibility. Then, we extended the edge-centrality measure salience and proposed a method of detecting critical streets based on the vertex-weighted graph of the topological representation by incorporating the notion of a damping factor into it. Using a real-world road network obtained from OpenStreetMap, we experimentally revealed the characteristics of the proposed method by comparing it with several baselines. Our future work includes expanding our investigations to numerous other cities and further applying and enhancing our proposed method to address additional real-world challenges.

Acknowledgements. This work was supported in part by JSPS KAKENHI Grant Number JP21K12152.
References

1. Crucitti, P., Latora, V., Porta, S.: Centrality in networks of urban streets. Chaos 16(015113), 1–9 (2006)
2. Ding, R., et al.: Application of complex networks theory in urban traffic network researches. Netw. Spat. Econ. 19, 1281–1317 (2019)
3. Do, M., Noh, Y.: Comparative analysis of informational evacuation guidance by lane-based routing. Int. J. Urban Sci. 20(Suppl. 1), 60–76 (2016)
4. Ducruet, C., Beauguitte, L.: Spatial science and network science: review and outcomes of a complex relationship. Netw. Spat. Econ. 14(3–4), 297–316 (2014)
5. Freeman, L.C.: A set of measures of centrality based on betweenness. Sociometry 40(1), 35–41 (1977)
6. Grady, D., Thiemann, C., Brockmann, D.: Robust classification of salient links in complex networks. Nat. Commun. 3, 864 (2012)
7. Jiang, B., Zhao, S., Yin, J.: Self-organized natural roads for predicting traffic flow: a sensitivity study. J. Stat. Mech. Theor. Exp. 2008(07), P07008:1–P07008:23 (2008)
8. Julianto, Mawengkang, H., Zarlis, M.: A car flow network model for lane-based evacuation routing problem. J. Phys. Conf. Ser. 1255(1), 012040 (2019)
9. Ma, D., Omer, I., Osaragi, T., Sandberg, M., Jiang, B.: Why topology matters in predicting human activities. Environ. Plan. B Urban Anal. City Sci. 46(7), 1297–1313 (2018)
10. Montis, A.D., Barthelemy, M., Chessa, A., Vespignani, A.: The structure of interurban traffic: a weighted network analysis. Environ. Plann. B. Plann. Des. 34(5), 905–924 (2005)
11. Porta, S., et al.: Street centrality and densities of retail and services in Bologna, Italy. Environ. Plan. B 36(3), 450–465 (2009)
Transport Resilience and Adaptation to Climate Impacts – A Case Study on Agricultural Transport in Brazil

Guillaume L'Her1, Amy Schweikert2, Xavier Espinet3, Lucas Eduardo Araújo de Melo4, and Mark Deinert1,2(B)

1 Department of Mechanical Engineering, The Colorado School of Mines, Golden, CO, USA
[email protected]
2 Payne Institute for Public Policy, The Colorado School of Mines, Golden, CO, USA 3 The World Bank, Washington, DC, USA 4 Department of Transportation Engineering, Escola Politécnica, University of São Paulo, São
Paulo, Brazil
Abstract. The disruption of transportation systems caused by natural hazards in one region can have significant consequences for the distribution of agricultural products and their export. In various regions of the world, climate change is expected to increase the likelihood of multiple natural hazards, such as landslides or floods. Being able to model how perturbations to transportation networks affect critical export routes is an important step toward making the system more resilient. Here, we analyze how disruptions to the Brazilian soybean transportation network would impact export economics. We show that the impact on the Brazilian market can be significant, with most of the main routes showing a cost impact of more than 10%. This, in turn, can have a significant impact on worldwide markets. We also show that mitigation measures can and should be taken to adapt to the network's weaknesses, especially in the face of climate change.

Keywords: Resilience · Climate Adaptation · Transport
1 Introduction

An efficient transportation system underpins economic activity, including the movement of imported and exported goods. In many regions, especially rural ones, electrical transmission infrastructure is also sited along roads, which increases their importance [1]. Roads also play a critical role in service delivery that affects multiple sectors (e.g., agriculture, power, medical), and the importance of this type of interconnectivity is widely recognized [2]. Natural hazards play an outsized role in disrupting critical infrastructure networks, and this is expected to increase with climate change [3, 4]. Brazil is the global leader in soybean exports (over 83 million metric tons in 2020), and a key driver of cost and demand is the related cost of transportation [5]. The transportation system in Brazil makes use of road, air, river, rail, and coastal transit. Key assets in these
systems include the underlying networks (e.g., road, rail, and waterway), cabotage, as well as points of transfer, including airports, rail terminals, and ports. Of these, road infrastructure dominates, accounting for 9 percent of passenger trips and over 60 percent of cargo in Brazil [6]. The large and diverse geography of the country results in a range of climate- and natural-hazard-related stresses and economic activity. More than 80 percent of the population of Brazil lives in urban areas, with an expected increase to above 90 percent by 2050 [7]. Many of the urban areas are coastal, including Rio de Janeiro and other large cities such as São Paulo. Thus, determining the key vulnerabilities of the transportation network, and the most critical threats related to climate change and natural hazards, varies by geography and metric. National annual losses from natural disasters in Brazil are estimated at $3.9 billion, due to rapid urbanization in areas prone to impacts from hazard events [8]. Between 1991 and 2010, 74 percent of deaths related to natural hazards were caused by flash floods and landslides induced by intense rainfall events. In 2011 alone, Rio de Janeiro saw losses above 1 percent of State GDP and over 1,000 deaths due to flooding, landslides, and mudslides [8]. The largest number of disasters in the North region of Brazil is caused by excess rainfall, although landslides historically occur in the Southeast and South regions of the country. The regions most impacted by natural hazards are in the Southeast, with over 40% of all damages to infrastructure occurring in this region [6]. Flooding has impacted most major municipalities as well as the western and central Amazon. Drought has also been a key driver of hazard-related damages, affecting almost 80 million people and causing over $11 billion in damages. Recent economic losses resulting from four major flooding and landslide events totaled over USD$5 billion, with nearly USD$1 billion in damaged transportation infrastructure [8]. Climate change impacts on existing natural hazard stresses include an increase in the incidence of drought, as well as increases in annual average temperatures and related energy demand. Sea level rise may also impact coastal regions [3, 4]. Impacts on transportation assets can have direct implications for other sectors. This includes the import and export of critical goods. For example, in 2019, imports of petroleum averaged 589,000 barrels per day alongside imports of natural gas and other energy fuels [9]. Exports of a single commodity, soybeans, accounted for $26.1 billion USD, the largest export of Brazil (2019) [10]. Nearly 83 million metric tons of soybeans were exported via the transportation network in 2020, with the majority of transit occurring by truck. Investments in the transportation network were partially responsible for a 24% reduction in the cost of moving goods from 2019 to 2020 [5]. However, vulnerability to hazards, especially landslide risks exacerbated by flooding and intense precipitation, can damage the network and require long and expensive detours. Road damage requires increased costs to fix the infrastructure. Recent records indicate that single-event repairs can range from just over $2,000 USD to over $3 million USD [6], but damages also result in loss of service provision and potentially costly detours of people and goods moving along the damaged routes.
To assess climate change impacts on transportation assets in Brazil, and adaptation options, it is imperative to first understand the current risks and their potential impacts. Evaluating two of the largest current risks, landslide [11] and flood [12], can show the exposure of key assets. Disruptions from
natural hazards and climate change will vary by location, hazard, and infrastructure type. In recent years, techniques from the study of complex systems have been used to help understand networked infrastructure systems. Network analysis methods, such as drop-link assessments with criticality metrics, have found broad application [2, 3]. Here, we assess the asset-level exposure of transportation assets in Brazil to flooding, landslides, intense precipitation, and climate change, in the context of the soybean industry. We use a network perturbation approach and compute the 'cost' of losing transportation network segments relative to a base case.
2 Methods

The exposure of the transport infrastructure in Brazil to various hazards was computed. Airports, ports, railways, and waterways were considered. A geospatial overlay algorithm was developed to obtain the number of assets in each infrastructure category exposed to significant damage levels from hazards. Costs were derived using local estimated construction and re-construction rates, either per kilometer or per unit [13–16]. To assess the potential impacts of existing and future climate stressors on Brazil's transportation network, we evaluate the transit links at highest risk by performing a drop-link analysis of 33 major export routes for soybean, a key commodity. Drop-link analysis is a systematic network examination technique that evaluates the consequences incurred by the deliberate disconnection or failure of individual or multiple links within a network infrastructure. In this work, we disconnect every exposed network link, one at a time, and assess the impact on the origin-destination pairs. We focused on road transportation routes exposed to landslides [11], flooding [12], and future intense precipitation projected under climate change in the 2040 decade (CMIP5 models) [17]. The intense precipitation metric measures the maximum annual 5-day sum of precipitation ("wettest five-day period"). The results presented for exposure are the increase in intense precipitation on an annual average basis for the 2040 decade, relative to the annual average value for a thirty-year historical baseline (1970–1999). The CMIP5 RCP8.5 higher-end (95th percentile) model is selected to obtain a conservative impact estimate by 2040. Exposed links were determined based upon exposure to high current and future landslide risk, exacerbated by flooding and intense precipitation. The scenario assessed the cost of losing each individual exposed road segment, calculated as the incremental cost of re-routing required between the origin and destination. Dijkstra's algorithm was used to quantify this cost by comparing a reference scenario (no perturbed link) to the perturbed scenario (dropped link). The metrics used in deriving the costs due to the re-routing penalties were the route-specific total annual product transported and the cost per tonne-kilometer. The primary road network [18] was analyzed for the transportation of soybean exports along the prescribed routes. While this assessment covers only a small fraction of all potential impacts to transportation infrastructure assets, and the resulting implications for economic impacts, it provides a quantitative analysis to inform important areas for further consideration.
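A minimal sketch of the drop-link computation (our illustration, not the authors' code; the attribute and parameter names are placeholders) is:

```python
import networkx as nx

def drop_link_costs(G, routes, exposed_edges, cost_per_tonne_km):
    """Incremental re-routing cost when each exposed edge fails alone.

    G: road graph with 'length_km' edge attributes (names are placeholders).
    routes: list of (origin, destination, annual_tonnes) tuples.
    Returns {edge: total extra cost across all routes}.
    """
    base = {(o, d): nx.dijkstra_path_length(G, o, d, weight="length_km")
            for o, d, _ in routes}
    penalty = {}
    for e in exposed_edges:
        attrs = dict(G.edges[e])
        G.remove_edge(*e)                      # perturb: drop one segment
        extra = 0.0
        for o, d, tonnes in routes:
            try:
                detour = nx.dijkstra_path_length(G, o, d, weight="length_km")
            except nx.NetworkXNoPath:
                detour = float("inf")          # route severed entirely
            extra += max(detour - base[(o, d)], 0.0) * tonnes * cost_per_tonne_km
        penalty[e] = extra
        G.add_edge(*e, **attrs)                # restore the reference network
    return penalty
```

In this spirit, each of the 33 origin-destination pairs would be evaluated with its annual tonnage and cost per tonne-kilometer.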
3 Results

The total value of all assets at risk, for which data was available, totals over $358 billion USD. The value of all assets exposed to flooding exceeds $254 billion USD, including the road network, railway lines, ports, waterways, and other assets including airports and bridges. Table 3 shows that assets in locations susceptible to landslide include around 25% of all roads, 23% of railway lines, 7% of ports, and 10–17% of airports (medium and small airports, respectively), for a total assessed asset value of $264 billion USD. Fires, often exacerbated by droughts and extreme heat, currently threaten 9–12% of the road network, 6% of railway lines, 7–10% of airports, and 12% of ports. Drought limits the operations of ports and waterway transit, impacting approximately 52% of ports and 35% of waterway routes, for a total exposed asset value of over $92 billion USD. Climate change, including extreme heat and intense precipitation, may exacerbate these existing hazard risks [19]. By 2040, the increase in extreme heat exposure ranges from 55% (large airports) to 94% (waterways) of assets. Intense precipitation is expected to increase as well, with more than 30–50% of infrastructure assets in areas expected to see an increase in intense precipitation events that is 20% or greater than that experienced historically, on an average annual basis, by 2040.

The assessment of the 33 major soybean export routes revealed that risks from landslides are geographically concentrated along only a few corridors, but failure of these road segments can be very costly. Although only 6.3% of identified road network segments are in high-risk landslide and flooding regions, this is sufficient to create consequential impacts on the routes and associated costs. The average cost of a detour required along one of the 33 routes assessed, due to a single damaged road segment, can exceed USD$740 million in a single year (Table 1). On average, disruptions from high-risk regions under current flooding risks may increase the transportation cost from 1% (Route 24) to 152% (Route 27) of the normal route cost. This is equivalent to an increase in cost of $5 million USD (Route 24) and $744 million USD (Route 27). The median cost for transportation of soybeans along the 33 routes is $360 million USD (Route 14), which sees perturbation costs under current climate hazards of $53 million USD, with an additional increase in climate change risk in 2040 totaling $29 million USD. Nearly all of the route segments currently at risk also see increased risk from intense precipitation in future climate change scenarios by 2040 (Table 1). While flooding and the future intense precipitation metric used in this study are not directly correlated, areas with intense precipitation increases in 2040 may see increased flooding risks and landslides induced by intense precipitation.

The total costs, routes travelled, segments at risk, and origin and destination locations for every route in this study are detailed in Fig. 1. Figure 1 also shows the total cost by State if every segment in that geography required re-routing due to failure from landslide and precipitation (flooding and/or future intense precipitation). Pará and São Paulo see the greatest total cost impacts (exceeding $1.9 billion USD if all perturbed routes failed). In Pará this is due to a sparse national road network that requires long detours to reach the destination.
However, São Paulo has the greatest number of road segments exposed to high and very high landslide susceptibility, flooding, and intense precipitation. The ports in São Paulo are also the destination for
Table 1. Route-specific information and impacts from network disruptions. Disruptions are calculated using segment impassability for individual segments exposed to flooding and landslides (current exposure), and landslides in regions with intense precipitation increases (2040 decade exposure). The 33 main routes were assessed, although results are only shown for routes presenting an impact (29 of them).
several of the 33 routes assessed, resulting in compounding impacts across multiple supply chains. Paraná sees high state-level costs for similar reasons: high landslide susceptibility in many locations as well as risks of flooding and intense precipitation near the port destinations. Mato Grosso sees the greatest total export of soybeans, but relatively lower risks from landslide susceptibility, reducing the overall burden for this hazard. Proactive investment to address landslide risks may reduce impacts by 5–50% [6]. The upfront costs of proactive investment will vary by route because each route has a unique number of segments at risk from landslide, flooding, and/or future intense precipitation increases, and every vulnerable segment would need to be upgraded to reduce potential impacts. The final step in this study was to evaluate whether the combined cost of re-routing and repairs for each segment would exceed the cost of proactive mitigation measures and reduced re-routing impacts. Given the uncertainty of impact reduction (5–50%, based on recent work), two analyses were done. First, the cost of upgrading a route (average cost per spot of $100,450 USD [6]) and the resulting reduction in impact were computed for both a 5% and a 50% reduction in impacts (estimated by reducing the re-routing cost by 5% and 50%). Then, this cost, for each route, was compared to the reactive approach, which was calculated as the cost of fixing a segment once damage has occurred (average cost of $492,910 USD [6]) plus the re-routing cost resulting from
Fig. 1. Estimated State-level costs resulting from route disruption. The origin, destination, route travelled, and segments considered in the drop-link analysis are also shown for each of the 33 routes.
perturbation. The results of this analysis showed that, of the 6.3% of road segments exposed, only one segment costs more to proactively protect against landslide impacts than to repair and re-route (a minimal numeric sketch of this comparison is given below). It should be noted that this analysis does not include other costs from disruption, such as the re-routing of other goods and people, which may be very significant [6].

There are several limitations to the present study. While it represents the largest export in Brazil, the soybean market is only one of the export markets critical to the Brazilian economy. Future impacts of climate change on intense precipitation, and its consequences for flooding and landslide risk, are difficult to quantify. A conservative approach was taken, using the upper-end model of the CMIP5 catalog for that region. However, recent scenario trajectory assessments show that this is not an outlandish assumption for the 2040 decade. Costs are also variable and dependent on goods market valuation as well as local labor conditions and other factors.
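To make the proactive-versus-reactive comparison above concrete, the following minimal Python sketch reproduces its logic. The per-spot costs are the averages quoted from [6]; the route-level inputs (number of vulnerable spots, re-routing cost) are illustrative placeholders, not values from the study.

```python
# Sketch of the proactive-vs-reactive cost comparison described above.
# Per-spot costs are the averages quoted from [6]; route inputs are illustrative.

PROACTIVE_COST_PER_SPOT = 100_450  # USD, upgrade one vulnerable spot in advance
REACTIVE_COST_PER_SPOT = 492_910   # USD, repair one spot after damage occurs

def compare_route(n_spots: int, rerouting_cost: float, reduction: float) -> dict:
    """Compare proactive upgrading with reactive repair for a single route.

    reduction: assumed fraction (0.05 to 0.50) by which proactive upgrades
    lower the re-routing cost when a disruption still occurs.
    """
    proactive = n_spots * PROACTIVE_COST_PER_SPOT + (1 - reduction) * rerouting_cost
    reactive = n_spots * REACTIVE_COST_PER_SPOT + rerouting_cost
    return {"proactive": proactive, "reactive": reactive,
            "proactive_pays_off": proactive < reactive}

# Hypothetical route: 4 vulnerable spots, a $50M re-routing cost, 5% reduction.
print(compare_route(n_spots=4, rerouting_cost=50e6, reduction=0.05))
```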
4 Conclusion

This study considered the exposure of infrastructure to current and future natural hazards, and the consequences for a critical Brazilian export with worldwide impact, the soybean market. We show that a large amount of transport infrastructure is exposed to natural hazards and could benefit from retrofitting. Currently, we estimate that over $350 billion
of assets sit in at-risk regions and could perturb the transportation network in Brazil, and consequently the national and regional economy. When considering classical mitigation strategies to reduce potential damage to assets, the investments are rarely recovered from a purely component-level perspective. However, placing the assets in the complex system they are a part of changes the equation, and costly and impactful downstream effects can be avoided by focusing on resilience. The main soybean routes from production locations to exporting ports are exposed and would benefit from hardening of the most exposed road segments in adequate locations. These results highlight the need for a comprehensive approach to identifying areas of risk and where proactive upgrades are needed, as well as the resulting repair and disruption costs. Continued investment in these areas and in other emergency and hazard response capabilities can further reduce the potential hazard impacts under current and climate change conditions.

Acknowledgement. This work was funded by the World Bank, as a background study for the Brazil Country Climate and Development Report, 2023 [20].
References

1. Schweikert, A., Deinert, M.: Vulnerability and resilience of power systems infrastructure to natural hazards and climate change. Wiley Interdiscip. Rev. Clim. Change 12, e724 (2021). https://doi.org/10.1002/wcc.724
2. Schweikert, A.E., L'Her, G.F., Deinert, M.R.: Simple method for identifying interdependencies in service delivery in critical infrastructure networks. Appl. Netw. Sci. 6(1) (2021). https://doi.org/10.1007/s41109-021-00385-4
3. Colon, C., Hallegatte, S., Rozenberg, J.: Criticality analysis of a country's transport network via an agent-based supply chain model. Nat. Sustain. 4(3), 209–215 (2021)
4. Schweikert, A., Nield, L., Otto, E., Deinert, M.R.: Resilience and critical power system infrastructure: lessons learned from natural disasters and future research needs. World Bank, Washington, DC, 8900 (2019). https://openknowledge.worldbank.org/handle/10986/31920. Accessed 04 Oct 2023
5. Salin, D.L.: Soybean transportation guide: Brazil 2020. United States Department of Agriculture (2021). https://www.ams.usda.gov/sites/default/files/media/BrazilSoybeanTransportationGuide2020.pdf. Accessed 04 Oct 2023
6. The World Bank: Improving climate resilience of federal road network in Brazil (2019). https://documents.worldbank.org/en/publication/documents-reports/documentdetail/585621562945895470/Improving-Climate-Resilience-of-Federal-Road-Network-in-Brazil. Accessed 29 Aug 2023
7. The World Bank: World Bank climate change knowledge portal - Brazil. https://climateknowledgeportal.worldbank.org/. Accessed 29 Aug 2023
8. The World Bank: Climate risk profile: Brazil (2021). https://climateknowledgeportal.worldbank.org/sites/default/files/2021-07/15915-WB_Brazil%20Country%20Profile-WEB.pdf. Accessed 29 Aug 2023
9. U.S. Energy Information Administration (EIA): Brazil. https://www.eia.gov/international/analysis/country/BRA. Accessed 29 Aug 2023
10. The Observatory of Economic Complexity (OEC): Soybeans in Brazil. https://oec.world/en/profile/bilateral-product/soybeans/reporter/bra. Accessed 29 Aug 2023
11. Global Landslide Hazard (2021). https://datacatalog.worldbank.org/search/dataset/0037584/Global-landslide-hazard-map. Accessed 04 Oct 2023
12. Sampson, C.C., Smith, A.M., Bates, P.D., Neal, J.C., Alfieri, L., Freer, J.E.: A high-resolution global flood hazard model. Water Resour. Res. 51(9), 7358–7381 (2015)
13. Jenelius, E., Petersen, T., Mattsson, L.-G.: Importance and exposure in road network vulnerability analysis. Transp. Res. Part A Policy Pract. 40(7), 537–560 (2006)
14. National Transportation Infrastructure Department (DNIT): Roadway Construction Costs (July 2021), Relatórios do Custo Médio Gerencial (2021). https://www.gov.br/dnit/pt-br/assuntos/planejamento-e-pesquisa/custos-e-pagamentos/custos-e-pagamentos-dnit/custo-medio-gerencial. Accessed 04 Oct 2023
15. Da Silva, M.H.B.: Modelo para estimativa dos custos de construção das superfícies pavimentadas para operações de aeronaves em aeroportos. Universidade de São Paulo, São Paulo (2020). https://teses.usp.br/teses/disponiveis/3/3153/tde-20032020-083940/publico/MarcosHenriqueBuenodaSilvaCorr20.pdf. Accessed 04 Oct 2023
16. National Transportation Infrastructure Department (DNIT): Railway Construction Costs (July 2021), Relatórios do Custo Médio Gerencial (2021). https://www.gov.br/dnit/pt-br/assuntos/planejamento-e-pesquisa/custos-e-pagamentos/custos-e-pagamentos-dnit/custo-medio-gerencial. Accessed 04 Oct 2023
17. NASA Center for Climate Simulation: NASA Earth Exchange Global Daily Downscaled Projections (NEX-GDDP) (2015). https://www.nccs.nasa.gov/services/data-collections/land-based-products/nex-gddp. Accessed 06 May 2021
18. Meijer, J.R., Huijbregts, M.A., Schotten, K.C., Schipper, A.M.: Global patterns of current and future road infrastructure. Environ. Res. Lett. 13(6), 064006 (2018)
19. Hirabayashi, Y., et al.: Global flood risk under climate change. Nat. Clim. Change 3(9), 816–821 (2013)
20. World Bank Group: Brazil country climate and development report. World Bank (2023). https://openknowledge.worldbank.org/server/api/core/bitstreams/fd36997e-3890-456b-b6f0-d0cee5fc191e/content. Accessed 04 Oct 2023
Incremental Versus Optimal Design of Water Distribution Networks - The Case of Tree Topologies

Vivek Anand1, Aleksandar Pramov1, Stelios Vrachimis3,4, Marios Polycarpou3,4, and Constantine Dovrolis1,2(B)

1 Georgia Institute of Technology, Atlanta, GA, USA
[email protected]
2 The Cyprus Institute, Nicosia, Cyprus
3 KIOS Research and Innovation Center of Excellence, Nicosia, Cyprus
4 Electrical and Computer Engineering Department, University of Cyprus, Nicosia, Cyprus
Abstract. This study delves into the differences between incremental and optimized network design, with a focus on tree-shaped water distribution networks (WDNs). The study evaluates the cost overhead of incremental design under two distinct expansion models: random and gradual. Our findings reveal that while incremental design does incur a cost overhead, this overhead does not increase significantly as the network expands, especially under gradual expansion. We also evaluate the cost overhead for the two tree-shaped WDNs of a city in Cyprus. The paper underscores the need to consider the evolution of infrastructure networks, answering key questions about cost overhead, scalability, and design efficacy. Keywords: Water distribution networks · incremental design of tree topologies · cost overhead of incrementally-designed networks
1 Introduction
Cities live. Cities evolve. Throughout human history, urban agglomerations have changed due to many factors in and out of city planners' control, such as migration, economic growth or depression, natural disasters, etc. In the modern age, a city is expected to provide several service infrastructures, including water, sewage, electricity, garbage collection, transportation, etc. Infrastructure networks should be periodically redesigned to adapt to their evolving urban environments. The design approach is typically incremental, that is, the cost of network modification is minimized in each design phase [5,6]. Incremental design can potentially lead to suboptimal network topologies [16]. On the contrary, an optimal design approach assumes that the network can be redesigned from scratch, and thus the total network cost is minimized.
Water Distribution Networks (WDNs) can be divided into transport networks, which are made up of large pipes that transport water from sources to consumer areas, and district networks that connect pipes of smaller diameter to households. District networks often grow in tree-like topologies, while transport networks have a looped topology to increase resilience to faults and avoid disruption of service [7,11,17]. However, some cities with hilly terrain may have tree-shaped transport networks to better manage pressure variations. In this paper we focus on tree-shaped transport WDNs because this simplification allows the derivation of simple and insightful analytical expressions. Understanding the evolution of tree-shaped WDNs represents a reasonable first step, given the simplicity of this topology and the fact that it is often adopted in suburban or rural environments.

The paper focuses on the following fundamental questions:

1. How does a tree-shaped WDN evolve over time, and how does this evolution compare to optimally designed tree topologies?
2. What is the "price of evolution," i.e., the cost overhead of an incremental design relative to the corresponding optimal design for tree-shaped WDNs?
3. How does this cost overhead depend on the way the city expands spatially? In particular, how does random expansion compare to gradual expansion?

In the tree topology, the objective of the design process is to create a network that interconnects a given set of water demand locations, aiming to minimize cost-related objective functions, such as construction, maintenance, and energy consumption. Most of these cost factors are directly related to the length of the pipes. For this reason, and to simplify the optimization problem, we focus on a distance-based cost formulation, where the cost of a network is the total length of its edges (pipes). Our main findings include analytical asymptotic derivations showing, for instance, that the cost of incremental and optimal designs for tree-shaped networks scales with √N, where N is the number of nodes, under random expansion. Additionally, in our quantitative analysis for the transport WDN of a city in Cyprus, we find that the actual water network is only 6% longer than the corresponding optimal network.

We review the related work in Sect. 2. In Sect. 3, we present mathematical formulations for both optimal and incremental design problems, applying them to the specific context of topology design for tree-shaped water distribution networks. In Sect. 4, we derive expressions for the evolvability and cost overhead of the incremental design process, comparing different expansion models. In Sect. 5, we examine how the incremental design of a real WDN in Cyprus compares to an optimal tree-shaped WDN.
2 Related Work

The optimal design of water distribution networks (WDN) has been the subject of extensive research [13]. However, the exploration of incremental
network design, particularly in contrast to optimal design, remains relatively unexplored.

Optimal WDN Design: Most of the existing literature in the domain of WDNs centers on optimal network design. Various methodologies and algorithms have been proposed (see e.g. [8–10,19]) to achieve optimized solutions based on different constraints and objectives, such as capital cost minimization, reliability enhancement, and operational performance optimization (e.g., reduction of pump energy consumption). Notable works include the application of linear programming for optimal design [8], the use of particle-swarm harmony search [10], and an exploration of the trade-off between cost and reliability [9]. However, these studies predominantly overlook the incremental design approach, often considering it merely as a stepping stone to obtain a nearly-optimal solution.

Incremental vs Optimal Network Design: Two key studies in this area by Bakhshi and Dovrolis [2,3] focus on the "price of evolution" in ring and mesh networks in the context of telecommunication networks. These two papers provide the motivation and analytical framework for our study of tree-shaped WDN networks. Incremental network design, also referred to as "multi-period design," has also been explored in other contexts, such as design and evolution by source shifts [15] and biologically inspired adaptive network design [18]. These works propose algorithms and optimization frameworks for incremental network design under diverse constraints and objectives. However, none of them directly compares incremental designs with corresponding optimized designs, nor do they consider different expansion models. The comparison between incremental and optimized design in the context of WDNs is a relatively unexplored area of research. One relevant study is [14], which introduced a multi-objective optimization model for incremental design, considering topology and capacity expansion.
3 Framework and Metrics

In this section, we present the analytical framework for comparing incremental and optimal network designs. This framework is quite similar to the one developed for the comparison of ring networks in the context of telecommunication infrastructures [2]; an important difference is that we adapt it to the context of tree networks (Fig. 1). The WDN of any city gradually changes with time. To model this, we consider a discrete time model, where the k-th step is referred to as the k-th environment. We assume that new nodes are added to the network at each environment. There are two fundamentally different approaches to designing a WDN. In the clean slate or optimal approach, we aim to minimize, in every environment, the total cost of the network subject to the given constraints; we refer to the resulting network as the optimal design. The other approach is to minimize the modification cost relative to the network of the previous environment; we refer to that as the evolved or incremental design.
Fig. 1. An illustration of the suboptimality (in terms of total length) of an incremental tree (335.91) compared to the optimal one (304.56). The node labels indicate the order in which nodes are added to the graph.
More formally, let N(k) be the set of acceptable networks at environment k, i.e., those networks that provide the desired function and meet the constraints of environment k. Let the cost of a particular network N ∈ N(k) be C(N). In the optimal design, we seek a network N_opt(k) from the set N(k) that has minimum cost C_opt(k) at environment k; N_opt(k) is the optimal network at environment k:

$$C_{opt}(k) \equiv C(N_{opt}(k)), \qquad N_{opt}(k) = \arg\min_{N \in \mathcal{N}(k)} C(N) \tag{1}$$

In the incremental design approach, however, we design the new network N_evo(k) based on the network N_evo(k-1) from the previous environment k-1, aiming to minimize the modification cost C_mod(N_evo(k-1); N(k)) between the evolved network N_evo(k-1) and N(k). We denote this modification cost as C_mod(k); it is defined as the cost of the design elements present in N(k) but not in N_evo(k-1). We assume that the initial evolved network at time 0, and the cost of its design elements, are known. We can then formulate the incremental design problem as

$$C_{evo}(k) \equiv C(N_{evo}(k)), \qquad N_{evo}(k) = \arg\min_{N \in \mathcal{N}(k)} C_{mod}(k) \tag{2}$$

Thus, we can express the cost of the evolved network recursively as (k ≥ 1)

$$C_{evo}(k) = C_{evo}(k-1) + C_{mod}(k) \tag{3}$$

Expanding (3), we get

$$C_{evo}(k) = C_{evo}(0) + \sum_{i=1}^{k} C_{mod}(i) \tag{4}$$
This means that the total cost of the evolved network at environment k is the cost of the initial network plus all the edges that were incrementally added over the last k environments.

Metrics: We use three metrics to compare the two design approaches:

1. Cost Overhead v(k) is the cost of the evolved design N_evo(k) relative to the corresponding optimal design N_opt(k) at environment k:

$$v(k) = \frac{C_{evo}(k)}{C_{opt}(k)} - 1 \ge 0 \tag{5}$$

The bigger the cost overhead gets, the more expensive incremental networks are compared to the corresponding optimal networks.

2. Evolvability e(k) represents the cost of modifying the evolved network from environment k-1 to k, relative to the cost of redesigning the network from scratch:

$$e(k) = 1 - \frac{C_{mod}(k)}{C_{opt}(k)} \le 1 \tag{6}$$
If evolvability is close to 1, it is much less expensive to modify the existing network than to redesign the network from scratch.

3. Topological Similarity t(k) between the optimal N_opt(k) and evolved N_evo(k) networks is defined as the Jaccard similarity coefficient of the two corresponding adjacency matrices. In other words, t(k) is the fraction of distinct links in either network that are present in both networks.

Expansion Models: We consider two specific models for how the environment changes over time, i.e., the set of new nodes added to the network at each environment k. We consider the simplest type of expansion, in which only one new node is added at each environment; multi-node expansions can be decomposed as a series of single-node expansions. The two expansion models are random and gradual. For the former, each new location is selected randomly across a set of all possible new locations L. In gradual expansion, however, we iteratively add the closest new location of L to the existing nodes of the network (Fig. 2). The case of random expansion would be more relevant when, for instance, the city planners decide that the WDN of an existing city needs to connect with a village or suburb that is not geographically adjacent to the existing city infrastructure. The case of gradual expansion, on the other hand, would be more relevant when the city grows by adding new developments next to the existing infrastructure, as often happens when a city grows. Obviously, we expect the modification cost in gradual expansion to be quite low relative to random expansion (Table 1).
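The three metrics are straightforward to compute once the evolved and optimal networks are in hand. Below is a minimal Python/NetworkX sketch, assuming edge lengths are stored in a `length` attribute; the function names are illustrative, not from the authors' code.

```python
import networkx as nx

def cost(G):
    """Network cost: total edge length, stored as a 'length' attribute."""
    return sum(d["length"] for _, _, d in G.edges(data=True))

def cost_overhead(G_evo, G_opt):
    """v(k) = C_evo(k) / C_opt(k) - 1  (Eq. 5)."""
    return cost(G_evo) / cost(G_opt) - 1

def evolvability(C_mod, G_opt):
    """e(k) = 1 - C_mod(k) / C_opt(k)  (Eq. 6)."""
    return 1 - C_mod / cost(G_opt)

def topological_similarity(G_evo, G_opt):
    """t(k): Jaccard similarity of the two (undirected) edge sets."""
    e1 = {frozenset(e) for e in G_evo.edges()}
    e2 = {frozenset(e) for e in G_opt.edges()}
    return len(e1 & e2) / len(e1 | e2)
```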
Fig. 2. Comparison of the incremental trees under Gradual and Random expansion for the same set of points. Green nodes are the points that have already been added to the tree by the current timestep, while yellow nodes are those that are yet to be added to the tree.

Table 1. Notation

Symbol            | Explanation
N_evo/opt         | The evolved/optimal network
C_opt(k)          | The cost of the optimal network at environment k
C_opt^rnd/grd(k)  | Optimal cost under random/gradual expansion
C_mod(k)          | Modification cost for added edges in N(k)
C_mod^rnd/grd(k)  | Modification cost under random/gradual expansion
v(k)              | Cost overhead
e(k)              | Evolvability
t(k)              | Topological similarity

4 Tree Networks
We assume that all possible nodes of the network, as it expands, are located in a given 2D Euclidean area. This assumption is not necessarily true in real life due to the 3D curvature of the earth and other topographical concerns, but it serves as a first-order approximation. Additionally, we assume that the cost of the network is the sum of the edge costs, and that each edge cost is the geodesic distance between the corresponding two nodes. The optimal tree network can be easily computed using a minimum spanning tree (MST) algorithm, such as Kruskal's or Prim's methods [4]. We rely on an asymptotic expression for the length of the MST, derived in [12]. Specifically, the length of the minimum spanning tree connecting n points in a 2D square of area A scales with √(An). For random expansion, the n points can be anywhere in a given square of area A, though these results can be extended to any convex polygon, and so the
cost of the MST increases as

$$C_{opt}^{rnd}(n) \sim \sqrt{An} \sim \sqrt{n} \tag{7}$$

In other words, A is viewed as a constant in this case. In the case of gradual expansion, the area in which the n nodes are located increases with every new node. If we assume that all possible nodes are uniformly distributed with point density σ, then the area in which n nodes are located is A = n/σ. So, the cost of the MST under gradual expansion scales as

$$C_{opt}^{grd}(n) \sim \sqrt{A n} = \sqrt{n^2/\sigma} \sim n$$

Let us now calculate the modification cost for incremental design. If the new node added at environment k is z, then the modification cost of the new MST is at most the distance between z and the closest node to z in the existing MST. More formally, this can be written as

$$C_{mod}(k) \le \min_{x \in N_{evo}(k-1)} \| z - x \| \tag{8}$$

4.1 Random Expansion
To calculate the modification cost under random expansion, we need to recall the nearest neighbor problem: given a set S of n points and a new point z, find the closest neighbor of z in S. When n points reside in a square of size A with point density σ_rnd, the expected value of the nearest neighbor distance is $1/\sqrt{\sigma_{rnd}} = 1/\sqrt{n/A}$ (see e.g. [1]). So, the modification cost under random expansion scales as

$$C_{mod}^{rnd}(n) \sim \frac{1}{\sqrt{n}} \tag{9}$$

If we add the modification cost over n - 1 node additions, we get the cost of the incrementally designed network under random expansion:

$$C_{evo}^{rnd}(n) = \sum_{i=2}^{n} C_{mod}^{rnd}(i) \sim \sqrt{n} \tag{10}$$

Based on these asymptotic expressions, we see that the cost overhead under random expansion is expected to be constant, at least for large networks, because the cost of both the optimal and incremental trees scales with √n:

$$v^{rnd}(n) \sim \text{constant} \tag{11}$$

In addition, from the evolvability definition, we see that the evolvability of the incremental network converges to 1:

$$1 - e^{rnd}(n) \sim \frac{1}{n} \tag{12}$$
Fig. 3. Comparison of Metrics between Random and Gradual Expansion during Single Node Expansion
We have confirmed these asymptotic expressions with numerical experiments. We pick points uniformly within a 2D circle before incrementally adding nodes to the minimum spanning tree. The optimal MST is calculated using Prim's algorithm. Figure 3a shows the evolved and optimal network costs. Figure 3c shows the modification cost, whereas Fig. 3e shows the evolvability and the cost overhead. An interesting point is that the topological similarity is close to 0, as shown in Fig. 3d, even for n < 100, suggesting that there are many different almost-optimal trees.

4.2 Gradual Expansion
Under gradual expansion, we know that the new node is selected as the closest location to the nodes of the existing tree. Consequently, we can again rely on the previous nearest neighbor expression to bound the modification cost. If the potential locations of nodes are uniformly distributed with density σ_grd, then the nearest neighbor is expected to be at distance $1/\sqrt{\sigma_{grd}}$, which does not depend on n. As we only need to add one edge from the existing tree to the new node, the modification cost under gradual expansion does not depend on n:

$$C_{mod}^{grd}(n) \sim \text{constant} \tag{13}$$

Using Eq. (4), we see that the cost of the evolved network under gradual expansion scales linearly with n:

$$C_{evo}^{grd}(n) = \sum_{i=2}^{n} C_{mod}^{grd}(i) \sim n \tag{14}$$

So, the cost overhead under gradual expansion remains constant, at least for large network sizes:

$$v^{grd}(n) \sim \text{constant} \tag{15}$$

The evolvability under gradual expansion scales in the same way as with random expansion:

$$1 - e^{grd}(n) \sim \frac{1}{n} \tag{16}$$

The difference between random and gradual expansion is more evident in the numerical results shown in Fig. 3. Specifically, the optimal and incremental network costs scale with the square root of the network size under random expansion, and linearly under gradual expansion, but in absolute magnitude they are much lower under gradual expansion. Additionally, the cost overhead under gradual expansion is significantly lower than that of random expansion. In both cases, however, the cost overhead does not increase with network size.
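These scaling behaviors can be checked with a short simulation in the spirit of the numerical experiments reported in Fig. 3. The sketch below, under stated assumptions (nodes sampled uniformly in a disk, each new node attached to its nearest existing node), compares the final incremental cost against the MST over the same points; it is an illustration, not the authors' experimental code.

```python
import math
import random

import networkx as nx

def sample_disk(n, radius=1.0, seed=0):
    """Rejection-sample n points uniformly inside a disk."""
    rng, pts = random.Random(seed), []
    while len(pts) < n:
        x, y = rng.uniform(-radius, radius), rng.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            pts.append((x, y))
    return pts

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def incremental_cost(points, gradual):
    """Grow a tree one node at a time, attaching each new node to its nearest
    existing node (minimizing the modification cost). Under gradual expansion
    the next node is the remaining point closest to the current tree; under
    random expansion the (already random) input order is used."""
    tree, remaining, total = [points[0]], list(points[1:]), 0.0
    while remaining:
        z = (min(remaining, key=lambda p: min(dist(p, q) for q in tree))
             if gradual else remaining[0])
        remaining.remove(z)
        total += min(dist(z, q) for q in tree)
        tree.append(z)
    return total

def optimal_cost(points):
    """Total length of the MST over the complete Euclidean graph."""
    G = nx.Graph()
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            G.add_edge(i, j, weight=dist(points[i], points[j]))
    return nx.minimum_spanning_tree(G).size(weight="weight")

points = sample_disk(200)
for gradual in (False, True):
    v = incremental_cost(points, gradual) / optimal_cost(points) - 1
    print("gradual" if gradual else "random", "cost overhead:", round(v, 3))
```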
5 Case Study
In this section, we examine the transport component of the WDN of a city in Cyprus. We ask how this network would compare to an optimal design connecting exactly the same set of nodes. The nodes in this network consist of water sources and water sinks; typically, for water transport networks, sources are water reservoirs and sinks are consumer areas where the consumption is measured, referred to as District Metered Areas (DMAs). Node parameters include their coordinates and elevations. Link parameters, that is, the pipes connecting those nodes, include pipe lengths and diameters. The city's WDN has two special features which make it distinct: first, it is split in two 'zones', each with its own water source; second, both zones use a tree topology for the main transport network. If we denote the two networks as G_evo^Zone1 and G_evo^Zone2, we are thus interested in finding their optimal counterparts G_opt^Zone1 and G_opt^Zone2. We find the optimal trees G_opt^Zone1 and G_opt^Zone2 by first calculating the pairwise distance between all nodes in each zone separately, using the geodesic distance approximation. Then we calculate the minimum spanning tree over the entire pairwise distance matrix using Kruskal's algorithm [4].
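As an illustration of this construction, the sketch below computes a geodesic distance matrix with the haversine approximation and extracts the MST with Kruskal's algorithm via NetworkX; the three-node zone is a hypothetical toy input, not the actual city data.

```python
import math

import networkx as nx

def geodesic_m(p, q):
    """Haversine approximation of the geodesic distance, in meters,
    between two (lat, lon) pairs given in degrees."""
    R = 6371000.0  # mean Earth radius in meters
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def optimal_zone_tree(nodes):
    """nodes: dict mapping node id -> (lat, lon). Returns the MST over the
    complete geodesic-distance graph, computed with Kruskal's algorithm."""
    G = nx.Graph()
    ids = list(nodes)
    for i, u in enumerate(ids):
        for v in ids[i + 1:]:
            G.add_edge(u, v, weight=geodesic_m(nodes[u], nodes[v]))
    return nx.minimum_spanning_tree(G, algorithm="kruskal")

# Hypothetical toy zone with a reservoir and two DMAs:
zone = {"reservoir": (35.17, 33.36), "dma1": (35.16, 33.37), "dma2": (35.18, 33.35)}
T = optimal_zone_tree(zone)
print(round(T.size(weight="weight")), "meters of pipe in the optimal tree")
```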
Fig. 4. Visualization of optimal and actual trees for Zone 1 and Zone 2.
In Table 2, we see that for both Zone 1 and Zone 2, the actual network is at most 6% longer than the corresponding optimal network. The topological similarity for both Zone 1 and Zone 2 is almost 60%. In Fig. 4, we see that the two trees for the optimal and actual network of each Zone have some similarities but there are also many differences in the connectivity. The rationale for these differences is something we plan to further explore, consulting with the engineers of the city’s water utility.
Table 2. Comparison of actual and optimal water networks for a city in Cyprus. The costs are measured in meters. We show the cost of the actual network G_evo both based on the length of the actual pipes (always placed under large roads) and based on geodesic distances. For the optimal network G_opt we can only calculate costs based on geodesic distances.

Zone | # Nodes | Cost of G_evo (pipe lengths) | Cost of G_evo (geodesic) | Cost of G_opt (geodesic) | Cost Overhead | Topological Similarity
1    | 56      | 12006                        | 11171                    | 10540                    | 0.06          | 0.61
2    | 59      | 10875                        | 10519                    | 10221                    | 0.03          | 0.59
This city in Cyprus has expanded significantly since 1990, with major new developments and population increase. The low cost overhead shown above suggests that this expansion was probably closer to the gradual model, with new WDN nodes added close to existing ones. The fact that the topological similarity of the optimal and actual tree networks is close to 60%, on the other hand, suggests that the networks are significantly different, as there are probably many different ’close-to-optimal’ trees with about the same cost.
6 Conclusion, Limitations and Future Work

The paper contributes to the evaluation of incrementally designed networks relative to their optimal counterparts, in the context of tree topologies applied to water distribution in urban settings. Through simple and insightful analytical derivations, we show that the cost overhead of incrementally designed trees under random expansion can be significant but is expected to remain asymptotically constant. Under gradual expansion, on the other hand, the cost overhead also does not increase with network size, and it is significantly lower in absolute terms than under random expansion.

An important limitation of this study is that we focus only on the topology of the network. A real WDN also has to meet construction constraints (e.g., pipes are typically under large roads) and hydraulic constraints (e.g., minimum or maximum water pressure). These constraints are met in practice through optimized pipe sizing and placement of water pumps and tanks, while also considering the effect of changing water demands. Additionally, the optimization of a WDN considers not only the length of the pipes but also operational factors such as the electricity consumed by pumps. Further, many transport WDNs in larger cities have mesh-like structures, for reliability and redundancy. In future work, we plan to expand this investigation considering these more pragmatic constraints.

Acknowledgments. This work was co-funded by the European Research Council (ERC) under the ERC Synergy grant agreement No. 951424 (Water Futures), and supported by the European Union's Horizon 2020 Teaming programme under grant agreement No. 739551 (KIOS CoE), and the Government of the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy.
References

1. Arya, S., Mount, D.M., Netanyahu, N.S., Silverman, R., Wu, A.Y.: An optimal algorithm for approximate nearest neighbor searching fixed dimensions. J. ACM 45(6), 891–923 (1998)
2. Bakhshi, S., Dovrolis, C.: The price of evolution in incremental network design (the case of ring networks). In: Hart, E., Timmis, J., Mitchell, P., Nakano, T., Dabiri, F. (eds.) BIONETICS 2011. LNICST, vol. 103, pp. 1–15. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32711-7_1
3. Bakhshi, S., Dovrolis, C.: The price of evolution in incremental network design: the case of mesh networks. In: 2013 IFIP Networking Conference, pp. 1–9. IEEE (2013)
4. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms. MIT Press (2022)
5. Creaco, E., Franchini, M., Walski, T.M.: Accounting for phasing of construction within the design of water distribution networks. J. Water Resour. Plan. Manag. 140(5), 598–606 (2014)
6. Cunha, M., Marques, J., Creaco, E., Savić, D.: A dynamic adaptive approach for water distribution network design. J. Water Resour. Plan. Manag. 145(7), 04019026 (2019)
7. Diao, K., Sweetapple, C., Farmani, R., Fu, G., Ward, S., Butler, D.: Global resilience analysis of water distribution systems. Water Res. 106, 383–393 (2016)
8. Eiger, G., Shamir, U., Ben-Tal, A.: Optimal design of water distribution networks. Water Resour. Res. 30(9), 2637–2646 (1994)
9. Farmani, R., Walters, G.A., Savic, D.A.: Trade-off between total cost and reliability for anytown water distribution network. J. Water Resour. Plan. Manag. 131(3), 161–171 (2005)
10. Geem, Z.W.: Particle-swarm harmony search for water network design. Eng. Optim. 41(4), 297–311 (2009)
11. Giudicianni, C., Di Nardo, A., Di Natale, M., Greco, R., Santonastaso, G., Scala, A.: Topological taxonomy of water distribution networks. Water 10(4), 444 (2018). https://doi.org/10.3390/w10040444
12. Jaillet, P.: Rate of convergence for the Euclidean minimum spanning tree limit law. Oper. Res. Lett. 14(2), 73–78 (1993)
13. Kapelan, Z.S., Savic, D.A., Walters, G.A.: Multiobjective design of water distribution systems under uncertainty. Water Resour. Res. 41(11), 1–15 (2005)
14. Marques, J., Cunha, M., Savić, D.A.: Multi-objective optimization of water distribution systems based on a real options approach. Environ. Model. Softw. 63, 1–13 (2015)
15. Prakash, R., Shenoy, U.V.: Design and evolution of water networks by source shifts. Chem. Eng. Sci. 60(7), 2089–2093 (2005)
16. Saleh, S.H., Tanyimboh, T.T.: Optimal design of water distribution systems based on entropy and topology. Water Resour. Manage. 28(11), 3555–3575 (2014)
17. Shuang, Q., Liu, H.J., Porse, E.: Review of the quantitative resilience methods in water distribution networks. Water 11(6), 1189 (2019). https://doi.org/10.3390/w11061189
18. Tero, A., et al.: Rules for biologically inspired adaptive network design. Science 327(5964), 439–442 (2010)
19. Yazdani, A., Dueñas-Osorio, L., Li, Q.: A scoring mechanism for the rank aggregation of network robustness. Commun. Nonlinear Sci. Numer. Simul. 18, 2722–2732 (2013)
Social Networks
Retweeting Twitter Hate Speech After Musk Acquisition

Trevor Auten and John Matta(B)

Southern Illinois University Edwardsville, Edwardsville, IL 62026, USA
{tauten,jmatta}@siue.edu
Abstract. Using data collected from one-week periods in 2021 and 2022, both before and after billionaire Elon Musk's acquisition of Twitter, we generated Twitter retweet networks to examine the connection between Musk and hate groups as designated by the US Southern Poverty Law Center (SPLC) in three separate hate ideologies: white nationalist/alt-right, anti-Semitic, and anti-LGBTQ. Utilizing the configuration model to generate random retweet networks, we found a direct link between Twitter users who retweet Musk and users who retweet several SPLC-defined hate groups. Results show that Musk's tweets and general rhetoric have a potential appeal to hateful users on Twitter.

Keywords: Twitter · machine learning · network analysis

1 Introduction
From social networking platforms such as Facebook and Twitter (now called X), to content communities and blogs, the internet allows people to share their thoughts, ideas, and opinions with others all over the globe in a matter of seconds. While social media allows individuals to expose themselves to a diversity of opinions, it can also result in echo-chamber-like environments, where an individual is selectively exposed to information that reinforces their existing views [8]. These echo chambers carry the danger of leading to social extremism and radicalization, something that lonely or socially isolated young people are especially vulnerable to. It has been shown that hate groups such as white supremacists and the alt-right have been exploiting the easy-to-access, fast, anonymous communication of the internet as a tool to spread their messages and recruit new members to their cause [5].

In October of 2022, Elon Musk, the CEO of SpaceX and Tesla, purchased the popular social media platform Twitter for approximately $44 billion. It is often true that online communities that champion freedom of speech (for example, gab.com or 4chan.org) attract individuals likely to spread hate, misinformation, and extremism [18], and this trend can be seen in Musk's Twitter acquisition. Musk purchased the site under the claim that he wanted to apply free speech principles to the platform, but the days following the acquisition saw a measurable spike in slurs and hate speech [3,14].
In this paper, we analyze the connection between Musk and Twitter users who spread hate by answering the following research question: Do Twitter users who retweet hate groups and extremists preferentially retweet Elon Musk, in contrast to an apathetic random network model, both prior to and after his acquisition of Twitter? This question will either be answered with the null hypothesis Hn, i.e., "No, Twitter users who retweet hate groups and extremists do not preferentially retweet Elon Musk," or with the alternative hypothesis Ha, i.e., "Yes, Twitter users who retweet hate groups and extremists do preferentially retweet Elon Musk."

To find correlations between Elon Musk and hate groups, we created two networks: a network of retweets from 2021 and a network of retweets from 2022, constructed from Twitter data posted prior to and after Elon Musk's acquisition of Twitter, respectively. Each network is directed and multi-edged, with nodes representing Twitter users and directed edges representing retweets from the tweeter, i.e., the original poster, to the retweeter, i.e., the re-poster. Retweets were chosen because the literature has shown that retweets indicate interest in the original message and, more importantly, trust in and agreement with the original message [12].

Musk officially acquired Twitter on 28 October 2022. The days following the acquisition saw an immediate and measurable spike in slurs against Black people, LGBTQ+ people, and Jews on the platform [3,14]. Press found that the seven-day average of tweets using specific hate terms prior to Musk's acquisition of Twitter was never higher than 84 times an hour; this jumped to around 4,778 tweets with specific hate terms in the 12 hours following the acquisition [14]. Musk did little to reduce this increase in hateful posts, and made multiple controversial tweets himself, such as posting and then deleting an unfounded response to the October attack on Paul Pelosi. A week later, on November 4th, Twitter began layoffs via email of approximately half of its 7,500 employees, with employees who helped fight misinformation reported as among those laid off.

Subsequently, Twitter announced that its API would no longer support free access and that paid tiers would be available instead. Musk stated that API access was scheduled to go behind a paywall in mid-February, though this decision has been delayed multiple times. The paid basic access that offers a low level of API usage was announced at $100 a month. Musk later said that the company will provide a free API to those posting 'good' content. Twitter has yet to announce its plans for academic access, which is integral to researchers analyzing trends on the platform, such as the spread of hate speech and misinformation. In the near future, research such as what is presented in this paper could be difficult or impossible to carry out, as the Twitter API required for historical and archived tweet data becomes too expensive for many scholars and researchers to afford.
2 Related Work
In recent years, there has been an increase in research into detecting hate speech on social media platforms, in hopes of reducing the spread of hateful posts. A social network analysis of hate speech on Gab found that hateful users are very densely connected to one another, and that posts by these users tend to spread faster, farther, and reach a wider audience compared to non-hateful posts [11]. Other studies have replicated this finding [7]. Gab is a social media platform that promotes itself as putting "People and free speech first" and was created as an alternative to Twitter. Findings show that Gab attracts alt-right users and conspiracy theorists, and in general hosts more prevalent hate speech than Twitter [23]. The observation that hateful users are densely connected gives reason to look directly at hate groups, who can serve as hubs in networks of hate speech.

Analyses of social networks and the effects of political discourse on Twitter have examined how individuals engage with political topics [10], as well as tweet sentiment and its effect on Twitter user behavior with regard to the 2016 presidential election [21]. Vaccari et al. found that political engagement on Twitter correlates with higher-threshold political activities such as campaigning and interacting with politicians directly [19]. Yaqub et al. [21] found Twitter was primarily used for rebroadcasting already presented opinions via retweets, as opposed to sharing original opinions. Barberá et al. [2] further showed that individuals are much more likely to pass on information they receive from ideologically similar sources with regard to politically charged topics. These papers give credence to the idea that these platforms create 'echo chambers' as opposed to creating an environment that brings people together.

The analysis of Twitter and the 2016 presidential election is seen further in multiple works by Sainudiin et al., which use Twitter retweet networks and characterize prominent politicians and hate groups [15,16,22]. Using Twitter data collected around the 2016 US presidential election, the authors examined the linkage between five American political leaders and hate groups, successfully finding presidential candidate Donald Trump to be preferentially retweeted by these groups. The foundation for the structure of this paper's retweet network, along with the methodologies for rejecting our presented null hypothesis, are based on these works. We expand on the methodology by analyzing and comparing two separate networks from different time frames, as opposed to comparing separate Twitter accounts from a single time frame.

This work presents a comparison of actual networks to so-called apathetic networks, i.e., Twitter networks where users are indifferent or apathetic to whom they retweet, which are created by retaining the original nodes of the network but randomly rewiring the edges. Such a construct is often referred to as a configuration network model. A thorough discussion of the configuration network model, along with a further review of the literature, can be found in [13].
3 Methods
In this work, we will be collecting and augmenting Twitter retweet data, constructing retweet networks, and generating apathetic random networks using the configuration model, in order to reject the null hypothesis which states that Twitter users who retweet hate groups and extremists do not preferentially retweet Elon Musk. Each step of this methodology is described below.

3.1 Hate Group Selection
The hate groups and hateful individuals used in this study are taken from the Southern Poverty Law Center (SPLC) [1], a nonprofit legal advocacy organization which monitors hate groups, hate group leaders, and other extremists throughout the United States. While the SPLC has received criticism in the past for potential political bias in its included groups and individuals [17], it is currently one of the largest U.S. hate group databases available to the public. The SPLC classifies groups and individuals as hateful if their "beliefs or practices attack or malign an entire class of people, typically for their immutable characteristics." The SPLC database is split into ideological categories, of which we look at three: white-nationalist/alt-right (e.g., American Freedom Party), anti-LGBTQ (e.g., American Family Association), and anti-Semitism (e.g., Nation of Islam). Only hate groups and hateful individuals with a valid Twitter account and over 100 followers were chosen.

3.2 Data Collection and Augmentation
All retweets were collected using Twitter's streaming API, which allows the download of tweets and specified tweet information, such as author, tweet type (standard tweet, retweet, quoted tweet, etc.), and date and time posted, in the form of JSON data. Data was collected from a one-week time frame, December 10th to December 16th, for both 2021 and 2022, and includes every retweet from the respective users during this time. This week was chosen for two reasons. First, the 2022 time frame is approximately six weeks after Elon Musk acquired Twitter, giving time for the initial spike of hate speech from late October and November to level out. Second, this time frame includes December 12th, on which Musk posted the following tweet: 'My pronouns are Prosecute/Fauci'. This tweet caused controversy [6] and led to a notable spike in retweets of Musk in the days following, with around half of the total retweets of Musk in the week-long time frame occurring the day after this tweet. For the 2021 data, the same week was chosen to keep the time frames of the two data sets consistent.

Data was collected in three parts. The first set of data consists of all retweets of tweets posted by Elon Musk, along with the corresponding retweeting users. At the time the 2022 data was collected, Musk had over 121 million Twitter followers, making him the second most followed user on the site, exceeded only by former U.S. President Barack Obama. Given his popularity, retweets of Musk make up a large majority of total retweets in the network.
The second set of data consists of all retweets of tweets posted by any of the hate groups or hateful individuals. Since only SPLC-identified hate groups and hateful individuals with active Twitter accounts who fall within one of the three hate ideologies were used, we were limited to a subset of 29 unique Twitter accounts. At the time of data collection, the white-nationalist/alt-right accounts collectively had 301,139 Twitter followers, the anti-Semitic accounts collectively had 106,328 Twitter followers, and the anti-LGBTQ accounts collectively had 232,534 Twitter followers. While the two sets of data form a complete network, due to there being matching users in both, the overall breadth of the network formed is small, as only direct retweets of one of the 30 selected Twitter accounts are included. Thus, to expand the networks and extend the chain of retweets, we include a third set of users. We obtained a random sample of approximately 20% of users who retweeted Elon Musk on December 12th, the most active day of the one-week time frame, and retrospectively added to the networks the past 200 retweets by these sampled users that occurred within the one-week data collection time frame. For all data collected, alternate types of tweets, such as quoted tweets (tweets where a user can add their own comment to the retweet) and reply tweets (tweets where a user can respond to the poster of the original tweet, potentially in disagreement), were filtered out, as both of these tweet types can represent possible disagreement with the original message. Overall, a total of 288,872 retweets involving 214,823 unique Twitter users were collected for the 2021 network, and 1,765,607 retweets involving 756,926 unique Twitter users were collected for the 2022 network.
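A minimal sketch of this filtering step is shown below, assuming the tweet objects follow the v1.1 JSON schema returned by the streaming API; the field checks are one plausible way to keep only plain retweets.

```python
def is_pure_retweet(tweet: dict) -> bool:
    """Keep only plain retweets, dropping quote tweets and replies, which
    can signal disagreement with the original message. Field names assume
    the v1.1 tweet JSON returned by the streaming API."""
    return (
        "retweeted_status" in tweet
        and not tweet.get("is_quote_status", False)
        and tweet.get("in_reply_to_status_id") is None
    )

def edge_from_retweet(tweet: dict) -> tuple:
    """Directed edge: original tweeter -> retweeter."""
    return (tweet["retweeted_status"]["user"]["id_str"],
            tweet["user"]["id_str"])
```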
3.3 The Filtering of Bot Accounts
It should be noted that we did not actively filter out sophisticated bot accounts, as there is no easy way to detect them (if there were, Twitter could likely delete the accounts itself), and bot detection is not a focus of this methodology. A more surface-level bot removal, such as looking at retweets that occur within specific time frames or at very high frequency, has not been shown in similar research [15,16,22] to have any significant effect on test results, as only around 1% of accounts were detected as bots using surface-level methods.

3.4 Retweet Networks
Two network models were created using the data collected from 2021 and 2022, one for each dataset. Both networks were constructed using the NetworkX Python library. The networks were constructed from retweets, a simple form of directed communication between Twitter users. Retweets express interest in and endorsement of the original message [12], while also showing a one-sided relationship: the original tweeter likely knows less about the retweeter than the retweeter knows about them.
Table 1. Retweet statistics for key nodes in both the 2021 and 2022 networks.

Node               | 2021 out-degree | 2021 out-unique | 2022 out-degree | 2022 out-unique
Elon Musk          | 257780          | 200931          | 1504145         | 676282
White-Nationalists | 1749            | 1333            | 8797            | 6812
Anti-Semitism      | 134             | 134             | 1098            | 1037
Anti-LGBTQ         | 415             | 390             | 1423            | 1148
The set of users who send tweets (i.e., tweeters) and retweets (i.e., retweeters) form the nodes of our retweet network. The exceptions to this rule are the hate groups and hateful individuals: the 29 hate-group accounts are combined into three nodes, one for each hate ideology (e.g., any Twitter user who retweeted an anti-LGBTQ hate group shares an edge with the anti-LGBTQ node). This is because the interest is in viewing each hate ideology as a whole, as opposed to any single group or individual. Each sent retweet represents a directed edge between nodes, with the out-degree of the edge originating from the original tweeter and the in-degree of the edge going to the retweeter. Since a single user can retweet tweets from another user multiple times, the network allows multiple/parallel edges between node pairs. The final networks are highly heterogeneous, with a majority of nodes sharing an edge with Elon Musk. Retweet statistics for key nodes are shown in Table 1, where the total number of retweets of the listed node is given by its out-degree and the number of unique retweeters for the listed node is given by its out-unique. A visualization of the 2022 retweet network can be seen in Fig. 1, which shows a sample of 1% of the full original network. The nodes for Elon Musk and the three hate ideologies, along with the retweeter nodes connecting these four nodes, are kept from the original networks to maintain the original connected components. Musk and his retweeters (shown in magenta) make up the largest cluster of nodes, with the second being the white-nationalist groups (shown in dark blue), followed by the anti-LGBTQ groups (shown in cyan) and the anti-Semitism groups (shown in green). Retweeters of users who retweeted Musk (shown in grey) are a sampled portion of all users who retweeted Musk and, as such, scale with the number of Musk retweets.
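A minimal sketch of this construction with NetworkX is given below; the account ids and the `ideology_of` mapping are hypothetical placeholders.

```python
import networkx as nx

def build_retweet_network(retweet_edges, ideology_of):
    """Build the directed, multi-edged retweet network.

    retweet_edges: iterable of (tweeter_id, retweeter_id) pairs, one per retweet.
    ideology_of: dict mapping each of the 29 hate-account ids to one of the
    three ideology labels, so those accounts collapse into three nodes.
    """
    G = nx.MultiDiGraph()  # parallel edges preserve repeated retweets
    for tweeter, retweeter in retweet_edges:
        source = ideology_of.get(tweeter, tweeter)  # collapse hate accounts
        G.add_edge(source, retweeter)  # direction: original poster -> retweeter
    return G

# Hypothetical usage with placeholder ids:
ideology_of = {"hate_acct_1": "anti-LGBTQ", "hate_acct_2": "white-nationalist"}
edges = [("elonmusk", "user_a"), ("hate_acct_1", "user_a"), ("elonmusk", "user_a")]
G = build_retweet_network(edges, ideology_of)
print(G.out_degree("elonmusk"))  # 2: parallel edges are counted
```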
3.5 Configuration Retweet Network Model
The configuration model is an algorithm for generating uncorrelated random networks with a given degree distribution. More specifically, the configuration model can generate networks which maintain the in-degree and out-degree of each node in the network, removing any correlation between the degrees of connected vertices.
Fig. 1. 2022 Sample Network
The configuration model algorithm works as follows. A degree distribution P_k is specified such that P_k is the fraction of vertices in the network having degree k. From this distribution, we choose a degree sequence, i.e., assigning a degree k_i to each vertex i = 1, ..., n. This can be thought of as converting each edge of the network into two 'half' edges, or stubs, with each half existing in the two original vertices. For directed graphs, each vertex has an in-degree j and an out-degree k, causing the degree distribution P_k to become a double distribution P_jk [13], with the generating function for the distribution being

$$G(x, y) = \sum_{j} \sum_{k} p_{jk} \, x^{j} y^{k} \tag{1}$$

where x and y represent two different nodes, one with an in-degree half edge and one with an out-degree half edge. The set of in-degree half edges is then randomized, and the two sets of half edges are connected back together. The degrees k_i and k_j for each vertex are taken from the corresponding vertices in the original retweet networks, thus preserving the degree structure of the original networks. Using this model on the original retweet networks generates networks which maintain the number of retweets and tweets of each user while removing any
preference or bias for who is retweeting whom. These generated networks therefore present an apathetic, non-preferential retweet network.
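NetworkX ships a directed configuration model, which makes the rewiring step a few lines; the sketch below is one plausible way to generate an apathetic baseline from a retweet network G, not the authors' exact code.

```python
import networkx as nx

def apathetic_network(G, seed=None):
    """Rewire G with the directed configuration model: every node keeps its
    in-degree (retweets it made) and out-degree (times it was retweeted),
    but who retweets whom becomes random."""
    nodes = list(G.nodes())
    din = [G.in_degree(n) for n in nodes]
    dout = [G.out_degree(n) for n in nodes]
    R = nx.directed_configuration_model(din, dout, seed=seed)
    # The generator labels nodes 0..n-1; map labels back to the user ids.
    return nx.relabel_nodes(R, dict(enumerate(nodes)))

# e.g., the 100 apathetic baselines used later for the t-test:
# baselines = [apathetic_network(G, seed=s) for s in range(100)]
```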
3.6 T-Test and P-Values
To reject the null hypothesis, which states that Twitter users non-preferentially retweet Elon Musk and hate groups, it must be shown that the frequency of retweeters who retweet both Musk and a hate group decreases between the original networks and the generated apathetic retweet networks. To do this, we look at the number of Twitter users who retweeted both Elon Musk and a hate group 2 or more times. As mentioned previously, retweeting expresses interest in and endorsement of the original tweet. This endorsement is further strengthened by active retweeting, i.e., users who retweet a user more than once in a short period of time, in this case two or more times in one week. We refrain from looking at cases where users exclusively retweeted both Musk and a hate group three or more times, as the relatively small time frame of the dataset leaves less room for repeated retweets to be detected.

To find a statistically significant change between the compared networks, we perform a one-sample t-test, which compares a single standard value (i.e., the mean number of retweeters for a single hate ideology in one of the original retweet networks) to a group of values (i.e., the mean number of retweeters for a single hate ideology in 100 generated apathetic retweet networks). A one-sample t-test is calculated as

$$t = \frac{\bar{x} - \mu}{s / \sqrt{n}} \tag{2}$$

where x̄ is the sample mean, μ is the hypothesized population mean, s is the sample standard deviation, and n is the sample size. The resulting t-value is a non-zero number which represents the variation between the two means. We would expect the resulting t-values to be negative, meaning the mean number of retweets for a single hate ideology is smaller in the apathetic retweet networks than in the original networks. Using the t-value and the degrees of freedom of the sample, a p-value, or probability value, can be calculated using a t-distribution table or automatically by a statistical program. We utilized the SciPy Python library to determine the p-values. A p-value represents how likely it is that the given t-value would occur by chance, and thus can show whether or not the null hypothesis is supported.
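A minimal SciPy sketch of this test is shown below; the counts are randomly generated placeholders standing in for the 100 apathetic-network counts and the observed count from the original network.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder data: counts of users who retweeted both Musk and one hate
# ideology 2+ times in 100 apathetic configuration networks, versus the
# count observed in the original network.
apathetic_counts = rng.poisson(40, size=100)
observed_count = 120

# Sample = the 100 apathetic counts; popmean = the observed count.
t, p = stats.ttest_1samp(apathetic_counts, popmean=observed_count)
print(f"t = {t:.3f}, p = {p:.3g}")  # negative t: apathetic mean < observed
```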
4 Results

4.1 Comparing the 2021 and 2022 Networks
For the final retweet networks, the 2021 network consists of 214,823 nodes and 277,872 edges, and the 2022 network consists of 756,926 nodes and 1,765,607
edges. The increase in the size of the network after a year is expected due to the heterogeneous nature of the networks. A majority of nodes in both networks are retweeters of Elon Musk, who gained a large number of Twitter followers around the time of his acquisition of the platform. The disparity in the number of edges between the two networks shows that tweets by Musk were retweeted more frequently per user in 2022, likely because Musk was more active on Twitter. This increased retweet frequency is further seen in the degree centrality of Musk in each network (i.e., the proportion of nodes in the network connected to Musk), which is 1.2 in 2021 and increases to 1.99 in 2022, showing an increase in the extent to which Musk is connected to the rest of the network.

4.2 Retweeters of Elon Musk and Hate Groups
In the 2021 network, a total of 248 users retweeted both Elon Musk and a hate group, which makes up approximately 10.5% of the total users who retweeted any hate group, regardless of who else they retweeted. In the 2022 network, the total number of users who retweeted both Musk and a hate group jumps to 5,434 users, approximately 48% of the users who retweeted a hate group. Interestingly, the randomly generated configuration networks had an increase in the number of users who retweeted both Musk and a hate group at least once, for both the 2021 and 2022 networks. It has been shown that having a low threshold for inclusion, in this case node degree, can introduce noise and skew results [4,9,20], so the t-testing will be conducted on a threshold of 2+ retweets to account for this. The number of unique users who retweeted both Musk and a hate group 2+ times in both the original networks and the apathetic generated networks can be seen in Fig. 2. The resulting t-values (i.e., t-test statistics) for users who retweeted both Elon Musk and a hate group 2+ times are shown in Table 2. A confidence interval of 99.9% will be used, corresponding to an alpha of 0.001. This alpha denotes that the given t-value is likely to occur less than 0.1% of the time under the null hypothesis.

Table 2. Final t-values for 100 randomly sampled apathetic retweet networks.

Node               | 2021 t-value | 2022 t-value
White-Nationalists | −32.342      | −258.522
Anti-Semitism      | 2.514        | −102.188
Anti-LGBTQ         | 6.586        | −282.141
For the 2022 networks, the p-values, i.e., the probabilities of seeing the corresponding t-values by random chance, are well below the alpha of 0.001 for all three hate ideologies. Thus we reject the null hypothesis of non-preferential retweeting in favor of the alternative hypothesis Ha, which states that Twitter users
preferentially retweeted Elon Musk and hate groups. For the 2021 networks, the p-value for users who retweeted both Elon Musk and a white-nationalist hate group 2+ times is well below the alpha of 0.001, but the p-values for the other two hate ideologies are not.
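The apathetic null networks behind these tests are configuration-model rewirings that preserve each user's degree; a minimal sketch of generating one such network with NetworkX, under a toy stand-in graph, is:

```python
import networkx as nx

# Toy stand-in for a retweet network; the real networks are built from
# the collected Twitter data.
original = nx.gnm_random_graph(1000, 3000, seed=1)

# Configuration model: preserves the degree sequence while randomizing
# who retweets whom ("apathetic" retweeting).
degrees = [d for _, d in original.degree()]
apathetic = nx.configuration_model(degrees, seed=2)
apathetic = nx.Graph(apathetic)                      # drop parallel edges
apathetic.remove_edges_from(nx.selfloop_edges(apathetic))
```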
Fig. 2. Total number of unique users who retweeted Elon Musk and a hate group 2 or more times each. The 2021 and 2022 Original columns represent the number of users in the original networks. The 2021 and 2022 Apathetic columns represent the mean number of users from the 100 generated configuration model networks. The y-axis uses a base-10 logarithmic scale.
5 Discussion
The presented t-values provide evidence that Elon Musk’s Tweets and rhetoric appeal to individuals who follow specific hate ideologies. In 2022, hate groups and hateful individuals who post anti-LGBTQ tweets were shown to have the furthest variation from the expected mean, i.e., the number of users who retweeted both an anti-LGBTQ hate group or individual and Elon Musk 2+ times was higher than the expected number given in the apathetic model, even when compared to the other two hate ideologies. This correlation is expected given Musk’s numerous
anti-pronoun tweets and also explains why there was no strong correlation in the 2021 network, as he was not vocal about the subject in prior years. The t-values for white-nationalist and alt-right groups and individuals were close behind anti-LGBTQ in 2022, showing Musk attracts those with far-right views. The correlation between Musk and white-nationalist hate groups and individuals can also be seen in 2021, though to a lesser degree, indicating that Musk became more vocal in 2022 about views he already held. The increase in anti-Semitic tweets following Musk’s acquisition of Twitter is more likely due to the decrease in site regulation and firing of Twitter staff, which led to a general increase in hateful speech, as opposed to Musk himself endorsing anti-Semitic speech.
6 Conclusion
Using Twitter data collected from one-week periods in 2021 and 2022, before and after Elon Musk's acquisition of Twitter, we examined the connection between Musk and three hate ideologies: white-nationalist/alt-right, anti-Semitic, and anti-LGBTQ. Utilizing a t-test on randomly generated apathetic retweet networks, we found that Twitter users who retweeted Musk were also more likely to retweet white-nationalist hate groups and hateful individuals in 2021, and all hate groups and hateful individuals in 2022. The list of hate groups and hateful individuals used in this study is by no means exhaustive or comprehensive and merely represents a sample of the respective groups. With the developing ability to detect users who spread hate online [11], future studies of the Twitter network surrounding Elon Musk could give a deeper understanding of his influence and narrative. We are still in the early stages of Musk's acquisition of Twitter, with changes to the site such as the Twitter Blue subscription and paid API access still being implemented. Only time will tell what the future holds for the platform.
References

1. The Southern Poverty Law Center (2023). https://www.splcenter.org/
2. Barberá, P., Jost, J.T., Nagler, J., Tucker, J.A., Bonneau, R.: Tweeting from left to right: is online political communication more than an echo chamber? Psychol. Sci. 26(10), 1531–1542 (2015)
3. Benton, B., Choi, J.A., Luo, Y., Green, K.: Hate speech spikes on Twitter after Elon Musk acquires the platform. Montclair State University, School of Communication and Media (2022)
4. Carrington, P.J., Scott, J., Wasserman, S.: Models and Methods in Social Network Analysis, vol. 28. Cambridge University Press (2005)
5. Chris Hale, W.: Extremism on the world wide web: a research review. Crim. Justice Stud. 25(4), 343–356 (2012)
6. Czachor, E.M.: Elon Musk targets Dr. Anthony Fauci in viral tweet, drawing backlash (2022). https://www.cbsnews.com/news/elon-musk-anthony-fauci-viral-tweet-backlash-health-experts/
7. Gallacher, J., Bright, J.: Hate contagion: measuring the spread and trajectory of hate on social media (2021)
8. Garrett, R.K.: Echo chambers online?: politically motivated selective exposure among internet news users. J. Comput.-Mediat. Commun. 14(2), 265–285 (2009)
9. Hanneman, R.A., Riddle, M.: Introduction to social network methods (2005)
10. Maireder, A., Ausserhofer, J.: Political discourses on twitter: networking topics, objects, and people. Twitter Soc. 89, 305–318 (2014)
11. Mathew, B., Dutt, R., Goyal, P., Mukherjee, A.: Spread of hate speech in online social media. In: Proceedings of the 10th ACM Conference on Web Science, pp. 173–182 (2019)
12. Metaxas, P., Mustafaraj, E., Wong, K., Zeng, L., O'Keefe, M., Finn, S.: What do retweets indicate? Results from user survey and meta-review of research. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 9, pp. 658–661 (2015)
13. Newman, M.E.: The structure and function of complex networks. SIAM Rev. 45(2), 167–256 (2003)
14. Press, T.A.: Racist slurs have skyrocketed on twitter since Elon Musk's takeover: report (2022). https://www.oregonlive.com/business/2022/11/racist-slurs-have-skyrocketed-on-twitter-since-elon-musks-takeover-report.html
15. Sainudiin, R., Yogeeswaran, K., Nash, K., Sahioun, R.: Rejecting the null hypothesis of apathetic retweeting of US politicians and SPLC-defined hate groups in the 2016 US presidential election. In: 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pp. 250–253. IEEE (2018)
16. Sainudiin, R., Yogeeswaran, K., Nash, K., Sahioun, R.: Characterizing the twitter network of prominent politicians and SPLC-defined hate groups in the 2016 US presidential election. Soc. Netw. Anal. Min. 9, 1–15 (2019)
17. Smith, J.P.: The southern poverty law center is a hate-based scam that nearly caused me to be murdered (2019). https://www.usatoday.com/story/opinion/2019/08/17/southern-poverty-law-center-hate-groups-scam-column/2022301001/
18. Sparby, E.M.: Digital social media and aggression: memetic rhetoric in 4chan's collective identity. Comput. Compos. 45, 85–97 (2017)
19. Vaccari, C., et al.: Political expression and action on social media: exploring the relationship between lower- and higher-threshold political activities among twitter users in Italy. J. Comput.-Mediat. Commun. 20(2), 221–239 (2015)
20. Wasserman, S., Faust, K.: Social network analysis: methods and applications 2, 1–22 (1994)
21. Yaqub, U., Chun, S.A., Atluri, V., Vaidya, J.: Analysis of political discourse on twitter in the context of the 2016 US presidential elections. Gov. Inf. Q. 34(4), 613–626 (2017)
22. Yogeeswaran, K., Nash, K., Sahioun, R., Sainudiin, R.: Seeded by hate? Characterizing the twitter networks of prominent politicians and hate groups in the 2016 US election. Preprint (2017). http://lamastex.org/preprints/2017HateIn2016USAElection.pdf
23. Zannettou, S., et al.: What is Gab: a bastion of free speech or an alt-right echo chamber. In: Companion Proceedings of The Web Conference 2018, pp. 1007–1014 (2018)
Unveiling the Privacy Risk: A Trade-Off Between User Behavior and Information Propagation in Social Media

Giovanni Livraga1, Artjoms Olzojevs2, and Marco Viviani2(B)

1 University of Milan, Computer Science Department, Via Celoria, 18, 20133 Milan, Italy
[email protected]
2 University of Milano-Bicocca, Department of Informatics, Systems, and Communication, Edificio U14 (ABACUS), Viale Sarca, 336, 20126 Milan, Italy
[email protected]
Abstract. This study delves into the privacy risks associated with user interactions in complex networks such as those generated on social media platforms. In such networks, potentially sensitive information can be extracted and/or inferred from explicitly user-generated content and its (often uncontrolled) dissemination. Hence, this preliminary work first studies an unsupervised model generating a privacy risk score for a given user, which considers both sensitive information released directly by the user and content propagation in the complex network. In addition, a supervised model is studied, which identifies and incorporates features related to privacy risk. The results of both multi-class and binary privacy risk classification for both models are presented, using the Twitter platform as a scenario, and a publicly accessible purpose-built dataset. Keywords: Complex networks · user privacy · user behavior · user-generated content · information propagation · social media
1 Introduction
Social media platforms have become a part of everyday life, enabling users to share various types of content and engage in diverse interactions with friends, acquaintances, and even strangers, in the complex networks that are generated on such platforms. The motivations driving this extensive content generation and sharing range from socio-psychological reasons, e.g., expanding social connections to feeling a sense of community [17] and boosting social capital [21], to "practical" and commercial purposes for using digital services and apps [24]. However, this widespread sharing exposes users to potential privacy risks as they leave behind a wealth of personal and sensitive information, such as birth dates, relationship status, political and religious beliefs, sexual preferences, health data, and family details. Additionally, the traces of their social interactions can further
contribute to this information disclosure [5]. Unfortunately, many users remain unaware of how their data is precisely utilized, often due to negligence or difficulty understanding privacy disclaimers [22], and monitoring its diffusion on the network [23]. With the aim of increasing the user’s awareness regarding the privacy risk connected to the extent of sensitive information they disclose, the proposed study tackles several challenges. These include the identification of sensitive information from user-generated content, the consideration of sensitive information propagation through social network connections, and the extraction and/or generation of suitable features that can be interpreted in terms of privacy risk. Taking these aspects into account, the study aims first to develop an unsupervised model to generate a privacy risk score related to the disclosure and dissemination of sensitive information on social media complex networks. In particular, this score is obtained by combining two distinct scores for each user. A first score considers the information released directly by the user and that involving the user released by other members of the social network, while a second score accounts for information propagated in the user’s social circle. Additionally, a supervised model employing distinct privacy-risk features is proposed. These features are constructed to take into account the same privacy aspects as the unsupervised model. Finally, a comparison between the results obtained by the two models is conducted. Twitter is considered the target social media platform, analyzing users’ tweets, the connections between their tweets, and the propagation level of users’ tweets on the network. A dataset is built and made accessible to the wider research community, enabling further studies and advancements in the field of privacy risk assessment on social media platforms. The remainder of this paper is organized as follows. Section 2 discusses related work. Section 3 presents the proposed unsupervised and supervised models, along with details on the construction of the labeled dataset employed to instantiate them. Section 4 illustrates the results of our experimental evaluation. Finally, Sect. 5 concludes the paper and highlights some potential further research.
2 Related Work
Our work is closely related to a research line investigating approaches evaluating and quantifying the potential privacy risk for users caused by their participation in online social media communities [11]. Among the first to address this problem, Liu and Terzi [19] assign a privacy score to users considering, in combination, the sensitivity of user data and their visibility on the social platform. In our work, we build on a similar idea (higher risks for users derive from more visibly releasing more sensitive information), but we explicitly consider unstructured textual content generated by the user (while [19] considers just user profile information such as name, email, and hometown), as well as the impact on privacy risk that content released by other users can have. The consideration of both sensitivity and visibility of released content is also pursued in [2], which, however, does not propose approaches for obtaining the specific data/content
to be used for the privacy assessment, while we analyze the textual content to identify and extract sensitive information. NLP is performed to extract sensitive information from tweets in [4], but that work only considers a supervised model (we also consider a non-supervised one) to assess privacy risks. Our work is also partly inspired, concerning the definition of sensitive information categories and the general problem of assessing privacy risks in social media, by another of our previous works [20]. In that work, however, we focused on user profile information and did not consider the textual content shared by users, nor the potential scope that such content may undergo, both of which are instead a key contribution of the present article. Other studies have investigated related yet orthogonal issues, which we therefore only list here. They include studies on privacy policies (e.g., [25]), privacy risks entailed by establishing new relationships such as friendships (e.g., [3]), data breaches and privacy violations (e.g., [12,18]), and privacy metrics (e.g., [8,9]).
3 Privacy Risk Assessment of Users
In this section, we present the two models proposed in this study for user privacy risk assessment. Firstly, we introduce the unsupervised model, which aims to identify patterns in the data without relying on pre-existing labels. Next, we delve into the supervised model, which utilizes the labels provided by human assessors to train three distinct classifiers. For this reason, we begin by introducing the data on which we instantiated both models, sourced from Twitter (prior to its rebranding to X),1 along with details about the data labeling process.

1 https://www.nytimes.com/2023/08/03/technology/twitter-x-tweets-elon-musk.html, accessed on September 1, 2023.

3.1 The Twitter Dataset and the Labeling Process
The Dataset. The dataset construction started with the identification of some trending topics from Twitter in December 2022 (the period in which this study was carried out). These trending topics encompassed various subjects, including the 2022 World Cup (#fifa, #argentina), technology (#musk, #iphone), online communities (#socialnetwork), entertainment (#netflix, #amazon, #disney), public health (#covid), job opportunities (#job), political debates (#politics), conflicts (#war), religious themes (#religion), environmental sustainability (#sustainability), distinct aspects related to Sundays (#sunday), motivational content for Mondays (#mondaymotivation), the month itself (#december), festive season (#christmas), and general well-being (#happiness). Additionally, the approaching year was also considered among the trending topics (#2023). From those users discussing such trending topics, we randomly selected 100 real users (no spam profiles, no private profiles, no company profiles), 5 users for each topic. Subsequently, for each target user (referred to as user u for convenience),
we downloaded up to 80 tweets directly from u, and 20 from other users who mentioned u using the @ symbol (e.g., hello @u!) in their tweets. This is to take into account the potential disclosure of user u's personal information by other users. In the end, a total of 9,210 tweets were collected for the 100 considered users (for some users it was not possible to download 80 tweets). In addition to the textual content of the user's tweets, other related data and metadata, illustrated in Table 1, have been considered.

Table 1. Attributes and related data/metadata downloaded for each user.
Attribute     Type      Description
User          string    User u's username.
Tweet         string    The content of u's tweet. One user can post several tweets; each tweet can contain up to 280 characters.
Biography     string    The designated section where u can provide a brief textual biography. This section is optional, and for certain users it may remain empty, resulting in a null value.
Geolocation   string    The designated section where u can input their geolocation, such as a city, region, or State, representing their presumed place of birth or residence. This section is optional, and some users may leave it blank, resulting in a null value.
Followees     integer   The count of users followed by user u.
Followers     integer   The count of users who follow user u.
Likes         integer   The count of likes (or favorites) received by a given tweet.
Replies       integer   The total count of replies (or comments) on a given tweet.
Retweets      integer   The count of retweets on a given tweet.
The Labeling Process. Twelve human assessors were tasked with assessing the privacy risk associated with the considered users in the dataset thus constructed. Each assessor was well-informed about the potential risks arising from sharing sensitive information on social media platforms and was familiar with the types of information considered sensitive (more details about this information are provided in Sect. 3.2). The assessors were carefully chosen to represent various professional fields, ensuring a balanced representation of gender (seven men and five women), and encompassed a wide age range from 20 to 70 years. Each assessor was assigned randomly selected Twitter user profiles to analyze. Assessors were required to gauge the risk for each user based on reading at least the user’s 50 most recent tweets and considering the interactions with those
tweets and possibly the other attributes related to the user. For each of the 100 Twitter users considered, the goal was to obtain five distinct privacy risk assessments. The privacy risk assessment was initially conducted using a multi-graded 1–3 scale, where 1 denotes "Not at Risk", 2 "Partially at Risk", and 3 "At Risk". In cases where there was no majority agreement among the five assessments, extra evaluations were required from assessors who had not participated in the initial assessment for that user. Subsequently, the same assessors for each user were required to perform a binary privacy risk assessment (i.e., on a binary 1–2 scale) to assign a final score, again based on the majority of assessments, with 1 denoting a "Not at Risk" user and 2 an "At Risk" user.

3.2 Unsupervised Privacy Risk Assessment
The unsupervised privacy risk assessment model is designed to create two privacy risk scores that consider two essential factors: (i) sensitive information release and (ii) its dissemination scope. The first score, namely the Sensitive Information Release Risk Score (SIRRS), is derived through the assessment of the release of sensitive information in the user-related content, while the second score, namely the Potential Scope Risk Score (PSRS), involves the number of interactions (detailed in the following) for each user across all their generated content. The two scores are then aggregated to yield the final Global Privacy Risk Score (GPRS). This score plays a critical role in determining a potential risk class for each user, based on the selection of a given privacy threshold.

Sensitive Information Release Risk Score. This score aims to assess the tendency of user u, and of other users who have mentioned u, to release sensitive information within textual content. It is composed of four distinct sub-scores: (i) utw, which considers the release of sensitive information in u's tweets; (ii) otw, which considers the release of sensitive information in tweets mentioning u; (iii) ub, which considers the release of sensitive information in the biography of u; and (iv) ul, which considers the release of geolocation information in the profile of u. Concerning (i)–(iii), the presence of sensitive information was detected using lists of sensitive terms associated – as proposed in [20], taking inspiration from the definition of sensitive data in the EU GDPR – with ten sensitive information categories: (i) health status, (ii) ethnicity, (iii) religion, (iv) political affiliation, (v) sexual orientation, (vi) geolocation, (vii) profession, (viii) marital status, (ix) interests/passions, and (x) age. We note that the first five categories represent special category personal data according to Art. 9 of the EU GDPR, and are therefore deemed highly sensitive and in need of specific protection, unlike the remaining categories, which we denote as less sensitive. Lists of sensitive terms for each category have been taken from [6] (health status), [26] (ethnicity), [28] (religion), [16] (profession), [27] (political affiliation), [1] (sexual orientation), [13] (geolocation), [10] (marital status), and [20] (interests/passions). From the point of view of calculating the SIRRS, we first specify how its sub-constituents are computed. Concerning utw, for each
highly sensitive information category i present at least once in u's tweet t, a score αti is assigned. Similarly, the presence of each less sensitive information category j yields another score βtj. The maximum overall score attainable, obtained by summing up all 10 scores for (both highly and less) sensitive information, equals 1 per tweet. The overall utw value for u is given by the average of these values over the total number N of tweets. The same holds for otw, but in this case the considered tweets are those mentioning u. Formally:

utw = otw = \frac{\sum_{t=1}^{N} \left( \sum_{i=1}^{5} \alpha_{ti} + \sum_{j=1}^{5} \beta_{tj} \right)}{N}    (1)

Concerning the αti and βtj values, it is possible to assign them in different ways. However, for simplicity, the ones already assigned and tested in [20] were used, i.e., αti = 0.15 and βtj = 0.05 in the presence of the release of sensitive information with respect to the categories considered. As utw and otw are defined, their values may vary in the range [0, 1]. As for ub, this value is calculated in the same way as the two previous scores, but limited to the biography of user u. From a formal point of view:

ub = \sum_{i=1}^{5} \alpha_{bi} + \sum_{j=1}^{5} \beta_{bj}    (2)
where αbi and βbj are the scores obtained for each sensitive information category released in u's biography. Also in this case, the value of ub may vary in the range [0, 1]. Finally, as regards the calculation of ul, this score takes on a value of 1 if there is a match between a geolocation value released by u in the user profile and a geolocation value among those in [13]; otherwise, it takes the value 0. The overall SIRRS value for user u is obtained as a linear combination of the previous values, which allows us to weigh some components more heavily at the expense of others (as we shall see in the experimental evaluations). Formally:

SIRRS = \omega_{utw} \cdot utw + \omega_{otw} \cdot otw + \omega_{ub} \cdot ub + \omega_{ul} \cdot ul    (3)

where \forall x \in \{utw, otw, ub, ul\}: \omega_x \geq 0 and \sum_x \omega_x = 1. Hence, the final value of SIRRS may vary in the range [0, 1].
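A minimal sketch of the SIRRS term-matching logic of Eqs. (1)–(3), with the α = 0.15 and β = 0.05 scores from the text; the category term lists here are illustrative toy stand-ins, not the cited ones.

```python
# Toy category lists; the real ones come from the sources cited above.
HIGHLY_SENSITIVE = {"health": {"diabetes"}, "religion": {"muslim"}}   # alpha
LESS_SENSITIVE = {"profession": {"nurse"}, "age": {"birthday"}}       # beta
ALPHA, BETA = 0.15, 0.05

def tweet_score(text):
    words = set(text.lower().split())
    score = sum(ALPHA for terms in HIGHLY_SENSITIVE.values() if words & terms)
    return score + sum(BETA for terms in LESS_SENSITIVE.values() if words & terms)

def utw(tweets):
    # Eq. (1): average per-tweet score over u's N tweets (otw is analogous
    # over tweets mentioning u; ub applies tweet_score to the biography).
    return sum(tweet_score(t) for t in tweets) / len(tweets) if tweets else 0.0

def sirrs(utw_v, otw_v, ub_v, ul_v, w=(0.5, 0.2, 0.2, 0.1)):
    # Eq. (3): linear combination with non-negative weights summing to 1.
    return w[0] * utw_v + w[1] * otw_v + w[2] * ub_v + w[3] * ul_v
```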
Potential Scope Risk Score. In this preliminary study, a simple strategy was used to consider the potential privacy risk associated with the propagation of information in one's social network. One must first take into consideration that each user has different perceptions and purposes with respect to the dissemination of their information online. Some believe they have control over the level of its dissemination; others are not affected by this concern. Between these two extremes, there are many users who do not have a clear idea of the actual audience to which their content may be exposed. Our goal in this case was to identify tweets from selected users that achieve a high level of interaction compared to the average number of interactions of tweets from those same users. Interactions include the number of likes, retweets, and comments a tweet receives. In practice, the Potential Scope Risk Score (PSRS) allows the identification of tweets that have a high potential to be viewed by a large audience, exceeding the normal reach of the tweets of the users who posted them.2 Specifically, the PSRS score for u is derived by first calculating the average interaction degree of u. If the interactions for a given tweet t surpass this average, a value of γt = 1 is returned; otherwise, a value of γt = 0 is returned. The overall PSRS value is again given by the average of the values obtained from each tweet related to u. Formally:

PSRS = \frac{\sum_{t=1}^{N} \gamma_t}{N}    (4)

where N is the total number of tweets considered for the user u. To compute the average interaction degree, it was necessary to remove outliers. They were identified by considering the Interquartile Range (IQR) [30], with k = 1.5 · (Q3 − Q1), where Q3 and Q1 represent the third and first quartiles. Values greater than Q3 + k or less than Q1 − k are considered outliers.

Global Privacy Risk Score. This overall score aims to identify the privacy risk of each user u by combining the riskiness of the published content, captured by the SIRRS, and its propagation, captured by the PSRS. The Global Privacy Risk Score (GPRS) is hence obtained by linearly aggregating, through different combinations of importance weights, the SIRRS and the PSRS. Formally:

GPRS = \omega_s \cdot SIRRS + \omega_p \cdot PSRS    (5)
where ωs and ωp represent the importance weights, and ωs + ωp = 1. In this work, different values for these weights were tested, as illustrated in the experimental evaluations. By definition, the GPRS assumes values in the [0, 1] range.

2 This can happen, for example, when a tweet is retweeted or mentioned by users with a large following, thus amplifying its reach.
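A sketch of the PSRS and GPRS computations of Eqs. (4)–(5), including the IQR-based outlier removal; the interaction counts below are hypothetical.

```python
import numpy as np

def psrs(interactions):
    # interactions[t]: likes + retweets + comments of tweet t of user u.
    x = np.asarray(interactions, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    k = 1.5 * (q3 - q1)
    avg = x[(x >= q1 - k) & (x <= q3 + k)].mean()   # outlier-free average
    return float((x > avg).mean())                  # Eq. (4): mean of gammas

def gprs(sirrs_value, psrs_value, w_s=0.5):
    # Eq. (5): linear aggregation with w_s + w_p = 1.
    return w_s * sirrs_value + (1 - w_s) * psrs_value

print(gprs(0.3, psrs([2, 4, 3, 50, 5, 1]), w_s=0.6))
```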
3.3 Supervised Privacy Risk Assessment
This section involves the supervised privacy risk assessment model, which is contrasted with the previously discussed unsupervised model. The foundation of this supervised model lies in the utilization of labeled data, as elaborated in Sect. 3.1. This data was coupled with standard Machine Learning models and the extraction of pertinent features that pertain to the disclosure of sensitive information. The supervised models employed encompass Logistic Regression, K-Nearest Neighbors, and Random Forests. Twenty distinct metadata privacy-risk features were considered, some derived from the unsupervised model as well as additional new features: (i) number of characters in the user's biography, (ii) presence of geolocation information, (iii) number of followees, (iv) number of followers, (v) average number of likes, (vi) average number of comments, (vii) average number of retweets, (viii) average
character count of all tweets associated with u, (ix) the utw score, (x) the ub score, and (xi)–(xx) the average score of the sensitive information released by the user for each of the ten sensitive information categories. This means calculating the average of the αti and βtj values of each category with respect to the number of u's tweets. In addition, textual privacy-risk features extracted from the user biography text, user tweets, and tweets mentioning target users were also considered. Specifically, these are unigram, bi-gram, and tri-gram features, i.e., single terms, pairs of terms, and triples of terms with their associated Term Frequency–Inverse Document Frequency (TF-IDF) values; for each n-gram category, only the top 500 terms by TF-IDF value are retained.
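A sketch of extracting such n-gram features with scikit-learn; note that max_features ranks terms by corpus frequency, which only approximates selecting the top-500 TF-IDF terms, and the corpus here is an illustrative stand-in.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative corpus; the real one contains biographies, tweets, and
# tweets mentioning the target users.
corpus = ["short bio of a nurse", "tweet about a birthday in milan"]

# Roughly the BI-TRI-GRAM BEST-500 configuration: bi- and tri-grams,
# capped at 500 features.
vectorizer = TfidfVectorizer(ngram_range=(2, 3), max_features=500)
X_text = vectorizer.fit_transform(corpus)   # sparse matrix (n_users, <=500)
```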
4 Experimental Evaluation
This section discusses the experimental results obtained with respect to the unsupervised and supervised models presented in this work. Before detailing them, some technical details about the development of these models and the evaluation metrics used are presented.

4.1 Technical Details and Evaluation Metrics
Technical Details. The proposed models, both unsupervised and supervised, were implemented in Python. In particular, for the classifiers used in the supervised model, the implementations provided by the scikit-learn library were used, with default parameters.3 The evaluation metrics (illustrated in detail below) were also computed using their scikit-learn implementations.4 The snscrape library was used to crawl the tweets of the selected users and their additional data and metadata.5 To address the problem of class imbalance in both multi-class and binary classification, the Synthetic Minority Oversampling Technique (SMOTE) [7] was used. SMOTE oversamples a minority class by generating, via K-Nearest Neighbours, synthetic observations until its size matches that of the class with the most observations. This way, the dataset for multi-class classification grew from 100 to 141 total observations, with 47 observations for each class, while the dataset for binary classification grew from 100 to 128 observations, with 64 observations for each class. Finally, for the supervised model, k-fold cross-validation [29] with k = 5 was used, again employing the scikit-learn library.6
3 https://scikit-learn.org/stable/supervised_learning.html
4 https://scikit-learn.org/stable/modules/model_evaluation.html
5 https://github.com/JustAnotherArchivist/snscrape
6 https://scikit-learn.org/stable/modules/cross_validation.html
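A sketch of the SMOTE-plus-cross-validation pipeline under hypothetical features; the SMOTE implementation shown is the one in the imbalanced-learn package, which implements the technique of [7] (whether the authors used this exact package is not stated).

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical 20 metadata features for 100 users, binary risk labels.
rng = np.random.default_rng(0)
X = rng.random((100, 20))
y = np.array([1] * 64 + [2] * 36)

# Oversample the minority class to 64 observations (128 total).
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

# 5-fold cross-validation of one of the three classifiers.
scores = cross_val_score(RandomForestClassifier(random_state=0), X_res, y_res, cv=5)
print(scores.mean())
```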
Evaluation Metrics. To evaluate both the unsupervised and supervised models with respect to (multi-class and binary) classification effectiveness versus privacy risk, standard metrics such as accuracy (Acc.) and F1-score (F1) [14] were used. For the unsupervised model, since a real-valued privacy risk score (i.e., the Global Privacy Risk Score) is obtained for each user, it was also possible to assess ranking effectiveness with respect to privacy risk, using the normalized Discounted Cumulative Gain (nDCG) [15].
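These metrics are available directly in scikit-learn; a short sketch with hypothetical labels and GPRS scores:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, ndcg_score

y_true = [1, 2, 2, 1, 2]                  # hypothetical binary risk labels
y_pred = [1, 2, 1, 1, 2]
print(accuracy_score(y_true, y_pred), f1_score(y_true, y_pred))

# nDCG compares the ranking induced by the GPRS against assessor labels;
# ndcg_score expects 2D arrays (one row per ranking).
labels = np.array([[3, 1, 2, 1, 3]])      # hypothetical 1-3 risk labels
gprs_values = np.array([[0.9, 0.2, 0.5, 0.1, 0.8]])
print(ndcg_score(labels, gprs_values))
```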
4.2 Results: Unsupervised Privacy Risk Assessment
In the evaluation of the unsupervised model, the contribution of SIRRS and PSRS to the improvement of evaluation results was addressed. Specifically, we first considered several ways in which the construction of the SIRRS can contribute to increasing both classification and ranking effectiveness. As shown in Sect. 3.2, the SIRRS consists of a linear combination of distinct sub-components (see Eq. 3). Hence, we tested different weight combinations associated with them. Specifically, the three combinations are: (i) ωutw = 0.4 and ωotw = ωub = ωul = 0.2; (ii) ωutw = 0.5, ωotw = ωub = 0.2, and ωul = 0.1; and (iii) ωutw = 0.6, ωotw = 0.2, and ωub = ωul = 0.1. In addition, we evaluated effectiveness with respect to the contribution that both SIRRS and PSRS have by also combining them linearly with different weight combinations. The results of these evaluations with respect to both (multi-class and binary) classification and ranking (performed for both multi-class and binary labels) are shown in Table 2, in which the GPRS threshold was chosen by means of a grid search strategy maximizing the evaluation results.

Table 2. Results of the unsupervised model w.r.t. multi-class (MC) and binary (BC) classification (or labels for ranking) taking into consideration the components of SIRRS via combinations (i)–(iii) and the contribution of SIRRS (S) and PSRS (P) to GPRS.
SIRRS (i) Acc. F1
nDCG
SIRRS (ii) Acc. F1
nDCG
SIRRS (iii) Acc. F1 nDCG
MC: MC: MC: MC: MC:
S ∗ 0.7 + P S ∗ 0.6 + P S ∗ 0.5 + P S ∗ 0.4 + P S ∗ 0.3 + P
∗ 0.3 ∗ 0.4 ∗ 0.5 ∗ 0.6 ∗ 0.7
0.48 0.49 0.52 0.53 0.51
0.48 0.49 0.52 0.54 0.52
0.96 0.96 0.95 0.95 0.94
0.50 0.54 0.57 0.52 0.49
0.50 0.54 0.58 0.53 0.50
0.96 0.96 0.96 0.95 0.94
0.53 0.54 0.54 0.52 0.49
0.53 0.55 0.55 0.53 0.50
0.95 0.96 0.96 0.95 0.94
BC: BC: BC: BC: BC:
S ∗ 0.7 + P S ∗ 0.6 + P S ∗ 0.5 + P S ∗ 0.4 + P S ∗ 0.3 + P
∗ 0.3 ∗ 0.4 ∗ 0.5 ∗ 0.6 ∗ 0.7
0.78 0.78 0.78 0.74 0.68
0.78 0.78 0.78 0.75 0.69
0.95 0.95 0.95 0.95 0.94
0.78 0.78 0.78 0.76 0.70
0.78 0.78 0.78 0.76 0.71
0.95 0.95 0.95 0.95 0.94
0.78 0.76 0.76 0.72 0.68
0.78 0.76 0.76 0.72 0.69
0.95 0.95 0.95 0.94 0.94
4.3 Results: Supervised Privacy Risk Assessment
The supervised model was evaluated with respect to (multi-class and binary) classification effectiveness versus the privacy risk of users. Here, the results are illustrated with respect to the three classifiers considered, i.e., Logistic Regression (LR), K-Nearest Neighbors (K-NNs), and Random Forests (RFs), also taking into account different feature configurations, among those illustrated in Sect. 3.3. Specifically, they are referred to as:
i. TF-IDF: it includes, in addition to the 20 basic features, the TF-IDF values of the individual terms in the corpus. In this case, 22,721 unigrams and their TF-IDF values are considered;
ii. TF-IDF BEST-500: in this case, the 20 basic features and the 500 highest TF-IDF values of the unigrams extracted from the texts are taken into account;
iii. BI-GRAM: as case (i.), but considering bi-grams instead of unigrams. In this case, we have 95,254 textual features;
iv. BI-GRAM BEST-500: as case (ii.), but considering bi-grams;
v. TRI-GRAM: as case (i.), but considering tri-grams. In this case, we have 106,384 textual features;
vi. TRI-GRAM BEST-500: as case (ii.), but considering tri-grams;
vii. BI-TRI-GRAM BEST-500: the 500 bi-grams or tri-grams with the highest TF-IDF values.
The results of the multi-class and binary classifications with respect to the three classifiers and the different feature configurations are shown in Table 3, in terms of classification accuracy.

Table 3. Results of supervised multi-class and binary classification with each of the seven proposed feature configurations.
Features                Multi-class           Binary
                        LR    K-NNs  RFs      LR    K-NNs  RFs
TF-IDF                  0.42  0.53   0.64     0.59  0.75   0.82
TF-IDF BEST-500         0.41  0.53   0.64     0.59  0.75   0.83
BI-GRAM                 0.43  0.54   0.72     0.59  0.76   0.78
BI-GRAM BEST-500        0.41  0.53   0.74     0.59  0.75   0.84
TRI-GRAM                0.45  0.54   0.66     0.59  0.76   0.78
TRI-GRAM BEST-500       0.41  0.53   0.68     0.59  0.75   0.85
BI-TRI-GRAM BEST-500    0.44  0.53   0.75     0.59  0.75   0.87
4.4 Results: Discussion
Unsupervised Model. From the results illustrated in Sect. 4.2, we can observe, without great surprise, that from a macro point of view the effectiveness of binary classification is far more satisfactory than that of multi-class classification. This can be due to the difficulty of correctly classifying a "Partially at Risk" user, a label whose semantics are easily affected by the subjectivity of the human assessors' evaluations. We can in fact observe that for the nDCG measure, which is based on the evaluation of a ranking rather than a classification, the values are more than satisfactory in both cases. If we delve deeper into the factors underlying user privacy risk, there are interesting observations that apply to the classification tasks. First of all, the best results in binary classification are given by GPRS compositions in which greater importance is given to the SIRRS, up to the point where the two scores have equal importance. In multi-class classification, results depend more on the SIRRS composition; the best ones are obtained with the (ii) SIRRS configuration, and when SIRRS and PSRS contribute equally to the GPRS. As regards binary classification, the composition of the SIRRS does not seem to have any major impact on the final results, even if they are slightly better with the (ii) configuration.

Supervised Model. Also for the supervised model results illustrated in Sect. 4.3, we can observe that multi-class classification performs less satisfactorily than binary classification, but nevertheless better than the multi-class classification of the unsupervised model. This observation is also not surprising. In this case the classifiers, in particular the one based on Random Forests, are able to produce a model that makes the most of the privacy-risk features identified in this work. Globally, RF results improve with the selection of the 500 best textual features, particularly for the model that uses the BI-TRI-GRAM BEST-500 features. This could suggest a significant impact on privacy risk of the information released in the texts, some of which could be sensitive (as in the case of the unsupervised model); this, however, would require further in-depth analysis of the impact of individual features through explainable AI methods.
5 Conclusions and Further Research
In this study, we delved into the landscape of privacy risks that can affect social media users. In particular, we performed a preliminary investigation focused on analyzing the risk related to the release of sensitive information in user-generated content and its diffusion within the social network, by developing both unsupervised and supervised models. The unsupervised model, capable of generating privacy risk scores, took into account not only the direct release of sensitive information by users but also the cascading effects of content propagation. Simultaneously, the supervised model harnessed distinct privacy-risk features to
pinpoint and incorporate potential vulnerabilities. Our evaluation encompassed multi-class and binary classification scenarios, using data extracted from the Twitter platform. The insights gleaned from our preliminary study, especially from the unsupervised model, suggest a positive interplay between individual sensitive information release and its far-reaching influence across social circles. The proposed models and the obtained results would benefit from further refinement and analysis. In fact, the interplay mentioned earlier should be quantified in greater detail, both in the unsupervised and the supervised models. Concerning the unsupervised model, it will be necessary to carry out further analyses on the impact of the importance of the SIRRS components. Furthermore, concerning the supervised model, there would be a need to introduce an element of explainability concerning the effectiveness of individual features in relation to the classification process. Based on this further research, the results of the two models may also be related. Acknowledgements. This work was supported in part by project SERICS (PE00000014) under the NRRP MUR program funded by the EU - NGEU, by project KURAMi (20225WTRFN) under the PRIN 2022 MUR program, and by the EC under grants MARSAL (101017171) and GLACIATION (101070141).
Data Availability. The labeled dataset generated and used in this work is available on request from the corresponding author.
References

1. Abrams, M., Weiss, H., Giusti, S., Litner, J.: 47 terms that describe sexual attraction, behavior, and orientation (2023). https://www.healthline.com/health/different-types-of-sexuality. Accessed 1 April 2023
2. Aghasian, E., Garg, S., Gao, L., Yu, S., Montgomery, J.: Scoring users' privacy disclosure across multiple online social networks. IEEE Access 5, 13118–13130 (2017)
3. Akcora, C., Carminati, B., Ferrari, E.: Privacy in social networks: how risky is your social graph? In: Proceedings of ICDE 2012, pp. 9–19 (2012)
4. Caliskan Islam, A., Walsh, J., Greenstadt, R.: Privacy detective: detecting private information and collective privacy behavior in a large social network. In: Proceedings of WPES 2014, pp. 35–46 (2014)
5. Carminati, B., Ferrari, E., Viviani, M.: Online social networks and security issues. In: Security and Trust in Online Social Networks. Synthesis Lectures on Information Security, Privacy, and Trust, pp. 1–18. Springer, Cham (2014). https://doi.org/10.1007/978-3-031-02339-2_1
6. Centers for Disease Control and Prevention: List of all diseases (2022). https://www.cdc.gov/health-topics.html. Accessed 2 March 2022
7. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: synthetic minority over-sampling technique. JAIR 16, 321–357 (2002)
8. De Capitani di Vimercati, S., Foresti, S., Livraga, G., Samarati, P.: Data privacy: definitions and techniques. IJUFKBS 20(6), 793–817 (2012)
9. De Capitani di Vimercati, S., Foresti, S., Livraga, G., Samarati, P.: k-Anonymity: from theory to applications. Trans. Data Priv. 16(1), 25–49 (2023)
10. Eurostat: Glossary: marital status (2019). https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Glossary:Marital_status. Accessed 2 Mar 2022
11. Ferrari, E., Viviani, M.: Privacy in social collaboration. In: Michelucci, P. (ed.) Handbook of Human Computation, pp. 857–878. Springer, New York (2013). https://doi.org/10.1007/978-1-4614-8806-4_70
12. Foreman, Z., Bekman, T., Augustine, T., Jafarian, H.: PAVSS: privacy assessment vulnerability scoring system. In: Proceedings of CSCI 2019, pp. 160–165 (2019)
13. Geonames – All cities with a population > 1000 (2023). https://public.opendatasoft.com/explore/dataset/geonames-all-cities-with-a-population-1000. Accessed 1 Apr 2023
14. Hossin, M., Sulaiman, M.N.: A review on evaluation metrics for data classification evaluations. IJDKP 5(2), 1 (2015)
15. Järvelin, K., Kekäläinen, J.: Cumulated gain-based evaluation of IR techniques. ACM TOIS 20(4), 422–446 (2002)
16. joblist.com: List of all jobs (2022). https://www.joblist.com/b/all-jobs. Accessed 2 Mar 2022
17. Kircaburun, K., Alhabash, S., Tosuntaş, Ş., Griffiths, M.D.: Uses and gratifications of problematic social media use among university students: a simultaneous examination of the big five of personality traits, social media platforms, and social media use motives. IJMHA 18, 525–547 (2020)
18. Kökciyan, N., Yolum, P.: PriGuard: a semantic approach to detect privacy violations in online social networks. IEEE TKDE 28(10), 2724–2737 (2016)
19. Liu, K., Terzi, E.: A framework for computing the privacy scores of users in online social networks. ACM TKDD 5(1), 1–30 (2010)
20. Livraga, G., Motta, A., Viviani, M.: Assessing user privacy on social media: the Twitter case study. In: Proceedings of OASIS 2022, Barcelona, Spain, June (2022)
21. Matthews, P.: Social media, community development and social capital. CDJ 51(3), 419–435 (2016)
22. McDonald, A.M., Cranor, L.F.: The cost of reading privacy policies. ISJLP 4, 543 (2008)
23. Pei, S., Muchnik, L., Tang, S., Zheng, Z., Makse, H.A.: Exploring the complex pattern of information spreading in online blog communities. PLoS ONE 10(5), e0126894 (2015)
24. Shibchurn, J., Yan, X.: Information disclosure on social networking sites: an intrinsic-extrinsic motivation perspective. CHB 44, 103–117 (2015)
25. Watson, J., Lipford, H.R., Besmer, A.: Mapping user preference to privacy default settings. ACM TOCHI 22(6), 1–20 (2015)
26. Wikipedia contributors: List of contemporary ethnic groups – Wikipedia, the free encyclopedia (2022). https://en.wikipedia.org/wiki/List_of_contemporary_ethnic_groups. Accessed 2 Mar 2022
27. Wikipedia contributors: List of generic names of political parties – Wikipedia, the free encyclopedia (2022). https://en.wikipedia.org/wiki/List_of_generic_names_of_political_parties. Accessed 2 Mar 2022
28. Wikipedia contributors: List of religions and spiritual traditions – Wikipedia, the free encyclopedia (2022). https://en.wikipedia.org/wiki/List_of_religions_and_spiritual_traditions. Accessed 2 Mar 2022
29. Wong, T.-T., Yeh, P.-Y.: Reliable accuracy estimates from k-fold cross validation. IEEE TKDE 32(8), 1586–1594 (2019)
30. Yang, J., Rahardja, S., Fränti, P.: Outlier detection: how to threshold outlier scores? In: Proceedings of AIIPCC 2019, pp. 1–6 (2019)
An Extended Uniform Placement of Alters on Spherical Surface (U-PASS) Method for Visualizing General Networks

Emily Chao-Hui Huang1 and Frederick Kin Hing Phoa2(B)

1 National Tsing Hua University, Hsinchu, Taiwan
2 Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
[email protected]
Abstract. Network visualization plays an important role in exploring information about network nodes and structure. Recently, a new network visualization method, namely the Uniform Placement of Alters on Spherical Surface (U-PASS), was proposed for ego-centric networks. In this work, we extend the U-PASS method to visualize a general undirected, unweighted network, which consists of the edges between nodes, network clusters, and some nodes that are not neighbors of the ego. We develop a new criterion that quantifies the degree of uniformity of the network with a target node as the ego. The performance comparison with several state-of-the-art methods shows that our extended method performs better in terms of the uniformity of node scattering on the spherical surface.

Keywords: Space-filling · Uniformity · Spherical Discrepancy · U-PASS

1 Introduction
Network has become a common tool to describe the relationships among individual elements in a system, and it has been widely applied in biological systems, metabolic pathways, anthropology, communication studies, the social sciences, and many other fields. The networks being investigated were once small due to limited computational capacity and memory, but large-scale networks are common nowadays in the era of big data, as computational facilities have advanced. When a large-scale network with thousands to millions of nodes is plotted on a two-dimensional flat surface using naive drawing methods, it is not trivial to extract useful information about the network nodes and network structure. Thus, informative visualization becomes a big challenge for large-scale networks. Allocating network nodes on a spherical surface has been a popular research topic in recent years because of its wide applicability. Several state-of-the-art algorithms have been proposed to solve this problem, and they can be further divided into two types. The self-organizing map (SOM) algorithm is an
unsupervised artificial neural network training approach that obtains the optimal solution by competitive learning. It projects N-dimensional data onto a two-dimensional space while preserving the topological structure of the original data. [4] extended the SOM algorithm to an email distribution network, a small-world network, on a spherical surface, but the interpretation of this method's results is hindered by the crowding and overlapping of nodes and edges. An improvement was proposed in [3], which splits the SOM into two stages, adding a step that separates overlapping nodes and edges, but algorithm efficiency was the tradeoff of this improvement. Other than SOM and its variants, force-based algorithms, which apply an attractive force to neighboring nodes and a repulsive force to disconnected ones, are also commonly used in network visualization. Among the many algorithms of this kind, the Doubly Stochastic Neighbor Embedding on Spheres (DOSNES) is one of the newest with good visual presentation. Based on stochastic neighbor embedding methods, its novel normalization overcomes the problem of nodes crowding together in the presence of highly imbalanced data.

Recently, a new method called the Uniform Placement of Alters on Spherical Surface (U-PASS) [11] was proposed for the visualization of a specific class of networks, the ego-centric networks, which consist of an ego node and remaining nodes that are the ego's neighbors, or alters. The U-PASS is a three-step method that uniformly scatters network nodes on a spherical surface with the ego located at the center of the sphere. Uniformity is achieved by maximizing the minimum distance among all scattered nodes. To solve such a complex optimization problem, the particle swarm optimization (PSO) algorithm is employed.

In this work, we aim to extend the U-PASS to a general network. Such a network does not necessarily have an ego, in the sense that not all nodes are at distance 1 when we pick a node of interest as the center of the sphere. This extension greatly broadens the real-world applicability of the U-PASS to the visualization of arbitrary networks, because most real-world networks are seldom as simple as the ego-centric form. To achieve our goal, our method adopts a straightforward approach that distributes the remaining non-neighboring nodes on the outer layer of a concentric sphere, at a longer distance from the ego. In fact, the depiction of networks across multi-layer planes or concentric spheres has been explored in previous research. For example, [2] assigned nodes to planes corresponding to distinct layers based on their node degrees, with applications to a protein-protein interaction network and the IEEE InfoVis citation and collaboration networks. [10] developed a Python package capable of drawing graph layouts for multi-layer networks, which was further compared with pre-existing solutions implemented in other programming languages. Nevertheless, our work focuses on a network visualization that helps to observe the relationship between the surrounding points (alters and non-neighboring nodes) and the ego node, and the clustering phenomena exhibited by these surrounding points. This implies that conventional multi-layer graph layout techniques are not entirely suitable for our current investigation.
A key feature of network presentation via the U-PASS is the uniform allocation of network nodes on the spherical surface, which enhances the visibility of the relationships among nodes. [11] established a connection between the U-PASS method and the minimum energy design (MED) to ensure the uniformity of graphical layouts but, to the best of our knowledge, there is currently no standard criterion for objectively comparing the uniformity of network node allocations produced by different methods. When uniformity on a flat plane is considered, a common practice is to measure the discrepancy of the point allocations, which quantifies the deviation between the empirical and theoretical distributions of design points within the experimental domain. Various criteria for discrepancy, such as the star discrepancy, centered L2-discrepancy, wrap-around L2-discrepancy, mixture discrepancy, discrete discrepancy, and Lee discrepancy, have been developed, and their lower bounds have been well investigated [8]. These criteria are defined on a hypercube domain. When we consider a domain on a spherical surface S^3 = \{z \in \mathbb{R}^3 : \|z\| = 1\}, a natural measure of uniformity is the spherical cap discrepancy [5], defined as

SD_N^C(D) = \sup_{C(w,h)} \left| \sum_{k=1}^{N} 1_{C(w,h)}(x_k) - N \sigma^*(C(w,h)) \right|,

where x_k \in D, D is an experimental design of N points on the sphere, C(w, h) is a spherical cap with center w and height h, and \sigma^*(C(w,h)) = \sigma(C(w,h)) / \sigma(S^3) is the normalized surface area of C(w, h). The best lower bound of SD_N^C(D), established in [1], is

SD_N^C(D) > c(d) \, N^{\frac{1}{2} - \frac{1}{2d}}

for any N-element set D \subset S^d, where c(d) is a constant depending only on d. It is important to note that the spherical cap discrepancy is suitable for measuring the allocation of a set of independent points. However, due to the dependencies among the positions of the nodes in our graphical layouts, the spherical cap discrepancy is inadequate as a measure of uniformity; we therefore aim to develop in this work an appropriate criterion to measure the uniformity of network point allocations on the spherical surface.

In this work, our contribution is not only to introduce an algorithm, extended from the U-PASS, that uniformly allocates the nodes of any general network on a multi-layer concentric sphere, but also to propose a novel criterion to evaluate the uniformity of network node allocations. We assume no weights and no directions on the network edges in this work. Section 2 presents the notations and definitions used in this work. Section 3 describes our extended method and the new criterion. Section 4 provides some simulation studies and method comparisons, and some discussions are provided in the last section.
2 Notations and Definitions
We follow similar notations in [11] and rephrase them here. We consider an undirected network G(V, E) with a set of nodes V and a set of edges E between pairs
of nodes. From this network, we designate a node of interest as the ego node, denoted v_0^0. We assume that there are k clusters in the network, denoted {C_1, · · · , C_k}, with sizes |C_i| = c_i for i = 1, . . . , k. All nodes in cluster i with distance d from the ego are denoted v_{ij}^d for i = 1, . . . , k, j = 1, . . . , c_i, and d = 1 (neighboring nodes) or 2 (non-neighboring nodes); all remaining neighboring nodes outside all clusters are denoted v_{0j}^1 for j = 1, . . . , N, where N = |V| − Σ_i c_i − 1. For each cluster C_i, we denote by E_i the set of edges in C_i, with size |E_i| = e_i for i = 1, . . . , k. We have |V| = Σ_{d=1}^{2} |V^d| + 1, where V^d is the set of nodes that have distance d from the ego. Let A be the (|V| − 1) × (|V| − 1) adjacency matrix of V \ {v_0^0}, with each element a_{v_{is}^d v_{jt}^l} = 1 if (v_{is}^d, v_{jt}^l) ∈ E and 0 otherwise, for i, j = 0, · · · , k and s, t = 1, · · · , (|V| − 1).

In practice, we recast all nodes v_{ij}^d and all clusters C_i in the form of spherical caps. Every node v_{ij}^d is located on the apex of a spherical cap characterized by a cone angle θ_{ij} and a solid angle Ω_{ij} = 2π(1 − cos θ_{ij}). Every cluster C_i is also characterized in a similar fashion with a different solid angle Ω_i. To define optimality of a uniform node allocation, we first define the distance between two points v_{ij}^d and v_{ij'}^d on a unit sphere, with the ego v_0^0 at the origin of coordinates, as the angle φ_{jj'} = cos^{-1}( v_{ij}^d · v_{ij'}^d / (|v_{ij}^d| |v_{ij'}^d|) ), where v_{ij}^d is the coordinate vector of node v_{ij}^d for i = 0, · · · , k. The uniform allocation is optimal if it maximizes the minimum of φ_{jj'} over all nodes v_{ij}^d and clusters C_i in G(V, E). Note that the distance between a pair of nodes in a cluster needs to be adjusted by the cluster's edge density. Similar to [11], we define an adjusted Beta index β_i = (e_i + c_i)/c_i to represent the degree of connectivity of C_i.
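Under this notation, the angular distance and the maximin criterion can be evaluated directly; a minimal numpy sketch, where random points stand in for an actual layout:

```python
import numpy as np

def pairwise_angles(points):
    # points: (n, 3) node coordinates with the ego at the origin.
    # phi_{jj'} = arccos of the normalized dot product of two coordinates.
    unit = points / np.linalg.norm(points, axis=1, keepdims=True)
    return np.arccos(np.clip(unit @ unit.T, -1.0, 1.0))

# Maximin uniformity: a better allocation has a larger smallest angle.
pts = np.random.default_rng(0).normal(size=(50, 3))
phi = pairwise_angles(pts)
min_angle = phi[np.triu_indices(len(pts), k=1)].min()
print(min_angle)
```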
3 Method

3.1 Three-Stage Optimization
We consider an arbitrary network. By choosing a node of interest as an ego, we display this network as a two-layer ego-centric network with distance-1 neighboring nodes and distance-2 non-neighboring nodes. The method of node allocation is an extension of the U-PASS for an ego-centric network in [11], so we modify the algorithm to fit our current set-up. Specifically, given a two-layer concentric sphere, we place the neighboring nodes on the inner layer and the non-neighboring nodes on the outer one. To measure the uniformity of this set-up, we consider the uniformity of node allocation on an imaginary combined layer, onto which we project the nodes of the outer layer one-to-one to their corresponding positions on the inner layer. Simultaneously, we hope to maintain the uniformity of the neighboring nodes and the distance-1 nodes on the inner layer as much as possible, and likewise for the non-neighboring nodes within the spherical caps on the outer layer. The details of the algorithm are described below.

Step 1: Allocation of Cluster Centers. This step is the same as Step 1 in [11]. Every cluster is viewed as a node with a different weight; then all of
these clusters/nodes are recast as spherical caps with different polar angles. We claim that the edge density and the number of nodes in a cluster affect its weight. Therefore, the polar angle of each spherical cap can be expressed as $\theta_i = \cos^{-1}(1 - 2r_i)$, where $r_i$ is the proportion of the weight of the $i$-th cluster, and the weight function is defined by $w(c_i, e_i) = c_i/\beta_i = c_i^2/(e_i + c_i)$. Finally, we optimize the objective function
$$f(p_i, p_{i'}) = \min_{i \neq i',\ i, i' \in \{1, \cdots, k\}} \frac{\phi_{ii'}}{\theta_i + \theta_{i'}}$$
by maximizing the angle between spherical caps $\phi_{ii'}$, which is the angle between the two cluster centers $p_i$ and $p_{i'}$.
Step 2: Node Allocation within Each Cluster. Next we allocate nodes within the regions of their clusters. Although a cluster contains both alters and their neighbors, which belong to two different layers, these nodes are first considered together to preserve uniformity. Suppose there are $M$ communities in cluster $C_i$; the angle of circular section $m$ is defined as $\alpha_m = 2\pi w_m \big(\sum_{m=1}^{M} w_m\big)^{-1}$, where $w_m$ is the weight function mentioned in Step 1. We start by allocating the non-neighboring nodes of the outer layer to the unit spherical cap, and then the neighboring nodes on the inner layer are allocated by maximizing the minimum angle over all node pairs, $\min_{j \neq j'} \phi_{jj'}$, conditioned on the fixed positions of the non-neighboring nodes. Finally, we project the non-neighboring nodes back to the outer layer.
Step 3: Allocation of Remaining Degree-1 Alters. This step is identical to Step 3 in [11]. All remaining nodes are allocated on the part of the spherical surface (the inner layer) not covered by any spherical cap of a cluster, by maximizing the polar angle between any pair of nodes.
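To make Step 1 concrete, the following sketch (our illustration, not the authors' released code) computes the weights $w(c_i, e_i)$ and the resulting polar angles $\theta_i$ for a set of clusters; we assume $r_i$ is cluster $i$'s weight as a proportion of the total weight, which the text leaves implicit.

```python
import numpy as np

def cluster_polar_angles(c, e):
    """Polar angles of cluster caps from node counts c_i and edge counts e_i.

    Weight: w_i = c_i / beta_i = c_i^2 / (e_i + c_i); r_i is w_i as a
    proportion of the total weight (our assumption); theta_i = arccos(1 - 2 r_i).
    """
    c = np.asarray(c, dtype=float)
    e = np.asarray(e, dtype=float)
    w = c**2 / (e + c)          # weight of each cluster (Step 1)
    r = w / w.sum()             # proportion of total weight
    return np.arccos(1.0 - 2.0 * r)

# Example: three clusters with (c_i, e_i) = (5, 4), (8, 10), (4, 6)
print(cluster_polar_angles([5, 8, 4], [4, 10, 6]))
```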
3.2 Spherical Discrepancy
We introduce a new criterion for quantifying the uniformity of the node allocation of a network on a spherical surface. Owing to the inherent inter-dependency among nodes in a network, nodes with closer relationships should be distributed with shorter inter-node distances, but traditional discrepancy criteria fail to capture this phenomenon. Since the polar angle of each cluster $C_i$ is proportional to its corresponding beta index $\beta_i$ (see Step 1 in Sect. 3.1), we may use the beta index as a weight factor and introduce the concept of the generalized spherical cap discrepancy (GSD). Given $G_j = N \frac{1/\beta_j}{\sum_{i=1}^{N} 1/\beta_i}$, so that $\sum_{i=1}^{N} G_i = N$, we define GSD as follows:
$$\mathrm{GSD}_N^C(D) = \sup_{C(w,h)} \left| \sum_{k=1}^{N} G_k \mathbf{1}_{C(w,h)}(x_k) - N\,\sigma^*(C(w,h)) \right|,$$
where $x_k$, $D$, $C(w,h)$, and $\sigma^*(C(w,h))$ are defined exactly as in [1]. $\mathrm{GSD}_N^C(D)$ reduces to its traditional counterpart $\mathrm{SD}_N^C(D)$ when all nodes are entirely independent of one another, which brings all beta indexes to 1.
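Since the supremum in the GSD runs over all spherical caps $C(w,h)$, it can be approximated numerically. The sketch below is our own Monte Carlo illustration, not the authors' procedure; it assumes the cap $C(w,h) = \{x : x \cdot w \ge h\}$ on the unit sphere with normalized area $\sigma^*(C(w,h)) = (1-h)/2$, and, because it searches over randomly sampled caps, it yields a lower bound on the true supremum.

```python
import numpy as np

def gsd_estimate(x, beta, n_caps=100_000, rng=None):
    """Monte Carlo lower bound for the generalized spherical cap discrepancy.

    x    : (N, 3) array of unit vectors (node positions on the sphere)
    beta : (N,) array of beta indexes; G_j = N * (1/beta_j) / sum_i (1/beta_i)
    """
    rng = np.random.default_rng(rng)
    N = len(x)
    inv = 1.0 / np.asarray(beta, dtype=float)
    G = N * inv / inv.sum()                    # weights summing to N
    # Random cap centers w (uniform on the sphere) and heights h in [-1, 1].
    w = rng.normal(size=(n_caps, 3))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    h = rng.uniform(-1.0, 1.0, size=n_caps)
    inside = (w @ x.T) >= h[:, None]           # indicator 1_{C(w,h)}(x_k)
    weighted_counts = inside @ G               # sum_k G_k 1_{C(w,h)}(x_k)
    area = (1.0 - h) / 2.0                     # sigma*(C(w,h))
    return np.abs(weighted_counts - N * area).max()
```

Note that setting all beta indexes to 1 makes every $G_k = 1$ and recovers the traditional spherical cap discrepancy, matching the reduction stated above.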
4 Performance Comparison
In this section, we use an artificially generated 60-node network for demonstration and method comparison. The structure of this network is given in the appendix. Among these 60 nodes, 46 are connected directly to the ego, so we visualize this network as a two-layer ego-centric network with node 01 as the ego, the 46 distance-1 nodes as the neighboring nodes on the inner layer, and the 13 distance-2 nodes as the non-neighboring nodes on the outer layer. Figure 1(a) is the resulting network representation created by our three-step extended U-PASS method. Upon projecting all nodes onto an imaginary combined layer, the distances between nodes are maximized under the consideration of distinct cluster weights, shown in Fig. 1(b), which results in the minimum, mean, and variance of the solid angle $\Omega_0$ being 0.1958, 0.1122, and 0.1705, respectively.
Fig. 1. Visualization of 60-node Networks
The GSD of the node allocation in Fig. 1(b) is 0.2285, but since it is not trivial to modify other methods like SOM or DOSNES for multi-layer ego-centric networks, we have no GSD values for other methods on the same network. In order to have a fair comparison demonstrating the usefulness of GSD as a measure of uniformity, we consider the same one-layer ego-centric network with 50 nodes as in [11] as a demonstration; its structure is also included in the appendix. Table 1 shows the values of four criteria, including three earlier criteria on
solid angles and the GSD, of five methods on the allocations of 49 neighboring nodes. Recall from [11] that U-PASS outperforms the other four methods with a larger minimum and mean $\Omega_0$ and a smaller variance of $\Omega_0$, indicating a more even spread of nodes over the whole surface and smaller variation of distances among pairs of nodes. Rather than looking at three separate measures, GSD considers these characteristics within one measure. Table 1 shows the GSD values of the five methods, which echo the findings of the three solid-angle measures. U-PASS has the lowest GSD at 0.2084, indicating that it outperforms the other four methods in terms of uniformity of node allocation.

Table 1. Five Method Comparisons on Four Criteria
| Method | Min(Ω0)  | Mean(Ω0) | Var(Ω0) | GSD    |
| U-PASS | 0.0986   | 0.1898   | 0.1261  | 0.2084 |
| SOM    | 8.73e−5  | 0.0746   | 0.3912  | 0.3089 |
| SOM2   | 0.0015   | 0.0911   | 0.6462  | 0.2867 |
| CS     | 9.41e−5  | 0.0991   | 0.2939  | 0.2909 |
| DOSNES | 0.0001   | 0.0874   | 0.2469  | 0.3262 |

5 Real Data Example
We apply our extended U-PASS method to real social network data that is commonly investigated in the literature. The bottlenose dolphin data [7] consists of 62 nodes and 159 edges between nodes. We randomly designate some nodes from the network to be the ego nodes; in this case, we select nodes 11, 14, 33, and 60. For each ego node, we allocate the remaining nodes on different layers of a concentric sphere according to the smallest distance between the ego and the alters. The graph layouts are visualized in Fig. 2, from which it can be clearly observed that the social behavior differs across dolphins. For example, some dolphins have only one direct neighbor, such as node 11 in Fig. 2(a), implying less interaction with other dolphins. In contrast, dolphins like node 14 in Fig. 2(b) have a higher degree of connectivity, indicating potential leadership roles within the group and highlighting their significance as central social nodes.
Fig. 2. Dolphin Network
6 Conclusion
In this paper, we extend the U-PASS method to a broader scope beyond ego-centric networks. By differentiating all nodes according to whether or not they neighbor the ego node, we allocate the network nodes on the two spherical surfaces of a concentric sphere. This visualization method helps to investigate the relationship between the ego node and the remaining nodes in the network, and it makes it easier to observe the cluster structure from the perspective of different egos. In addition, we developed a criterion called GSD to evaluate the uniformity of node allocation on a spherical surface, and we use this measure to compare our method to other traditional methods, showing that our method provides a network with higher uniformity of node allocation. One potential future work from this paper is to derive the lower bound for GSD, which is a common practice for most discrepancy criteria. Furthermore, we may consider the
weights on the edges between the ego nodes and other remaining nodes by some modifications in our three-step approach.
Appendix Simulation Data in Section 4 01-(all nodes except from 41 to 53),04-06,04-11,05-07,05-10,05-12,05-14,05-46,0609,06-44,07-10,07-12,07-14,08-09,08-11,08-41,09-41,10-12,10-14,10-45,10-47,1142,11-43,12-14,14-45,21-22,21-23,21-24,21-29,22-23,23-51,24-31,24-48,24-52,2829,28-30,29-30,30-48,30-49,31-32,31-49,32-33,33-50,41-42,41-43,42-43,43-44,4546,45-47,46-47,47-48,47-49,48-51,48-52,49-50,49-52,49-53,50-51 Network in [11] 01-All,04-06,04-11,05-07,05-10,05-12,05-14,06-09,07-10,07-12,07-14,08-09,0811,10-12,10-14,12-14,21-22,21-23,21-24,21-29,22-23,24-31,28-29,28-30,29-30,3132,32-33
References
1. Beck, J.: Sums of distances between points on a sphere - an application of the theory of irregularities of distribution to discrete geometry. Mathematika 31(1), 33–41 (1984)
2. Ahmed, A., Dwyer, T., Hong, S.-H., Murray, C., Song, L., Wu, Y.X.: Visualisation and analysis of large and complex scale-free networks. In: EuroVis, pp. 239–246 (2005)
3. Wu, Y., Takasuka, M.: Visualizing multivariate network on the surface of a sphere. In: Proceedings of the 2006 Asia-Pacific Symposium on Information Visualisation, vol. 60, pp. 77–83 (2006)
4. Fu, X., Hong, S., Nikolov, N.S., Shen, X., Wu, Y., Xu, K.: Visualization and analysis of email networks. In: 2007 6th International Asia-Pacific Symposium on Visualization, pp. 1–8 (2007)
5. Aistleitner, C., Brauchart, J.S., Dick, J.: Point sets on the sphere S2 with small spherical cap discrepancy. Discrete Comput. Geom. 48(4), 990–1024 (2012). https://doi.org/10.1007/s00454-012-9451-3
6. Shelley, D.S., Gunes, M.H.: GerbilSphere: inner sphere network visualization. Comput. Netw. 56, 1016–1028 (2012)
7. Ryan, A.R., Nesreen, K.A.: The network data repository with interactive graph analytics and visualization. In: AAAI 2015: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pp. 4292–4293 (2015). https://networkrepository.com
8. Fang, K.-T., Liu, M.-Q., Qin, H., Zhou, Y.-D.: Theory and Application of Uniform Experimental Designs. LNS, vol. 221. Springer, Singapore (2018). https://doi.org/10.1007/978-981-13-2041-5
9. Lu, Y., Corander, J., Yang, Z.: Doubly stochastic neighbor embedding on spheres. Pattern Recogn. Lett. 128, 100–106 (2019)
10. Škrlj, B., Kralj, J., Lavrač, N.: Py3plex toolkit for visualization and analysis of multilayer networks. Appl. Netw. Sci. 4(1), 1–24 (2019). https://doi.org/10.1007/s41109-019-0203-7
11. Huang, E.C.H., Phoa, F.K.H.: Uniformly scattering neighboring nodes of an ego-centric network on a spherical surface for better network visualization. In: Cherifi, H., Mantegna, R.N., Rocha, L.M., Cherifi, C., Micciche, S. (eds.) COMPLEX NETWORKS 2022. Studies in Computational Intelligence, vol. 1078. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-21131-7_8
The Friendship Paradox and Social Network Participation

Ahmed Medhat(B) and Shankar Iyer

Meta, Computational Social Science, Menlo Park, CA 94025, USA
[email protected], [email protected]

(This research was completed as part of the author's employment at Meta. The author is currently employed at Cold Spring Harbor Laboratory's NeuroAI Group.)
Abstract. The friendship paradox implies that, on average, a person will have fewer friends than their friends do. Prior work has shown how the friendship paradox can lead to perception biases regarding behaviors that correlate with the number of friends: for example, people tend to perceive their friends as being more socially engaged than they are. Here, we investigate the consequences of this type of social comparison in the conceptual setting of content creation (“sharing”) in an online social network. Suppose people compare the amount of feedback that their content receives to the amount of feedback that their friends’ content receives, and suppose they modify their sharing behavior as a result of that comparison. How does that impact overall sharing on the social network over time? We run simulations over model-generated synthetic networks, assuming initially uniform sharing and feedback rates. Thus, people’s initial modifications of their sharing behavior in response to social comparisons are entirely driven by the friendship paradox. These modifications induce inhomogeneities in sharing rates that can further alter perception biases. If people’s responses to social comparisons are monotonic (i.e., the larger the disparity, the larger the modification in sharing behavior), our simulations suggest that overall sharing in the network gradually declines. Meanwhile, convex responses can sustain or grow overall sharing in the network. We focus entirely on synthetic graphs in the present work and have not yet extended our simulations to real-world network topologies. Nevertheless, we do discuss practical implications, such as how interventions can be tailored to sustain long-term sharing, even in the presence of adverse social-comparison effects.

Keywords: social networks · friendship paradox · bootstrap percolation · structural network measures

1 Introduction
In a network that represents friendship relations between a group of people, if we choose a person and then choose one of that person's friends, the friend so chosen
will be relatively likely to be someone who has many friends. This oversampling is the origin of the “friendship paradox”: the phenomenon that individuals in networks will often find that their friends have (on average) more friends than they do. In 1991, Feld demonstrated that a particular version of the paradox will occur in any network with non-zero variance in the degree distribution [1]. The paradox has since been observed in both social and non-social contexts [2], with implications in both offline [3–5] and online networks [6,7]. Recent work has generalized the paradox to other network properties [6–10]. Eom and Jo showed that a “paradox” occurs for academic productivity in scientific-collaboration networks: a researcher's collaborators are, on average, more productive than they are [9], and Hodas et al. showed that an individual's friends are on average more active and have access to more information [6]. Kooti et al. showed that generalized paradoxes can originate in both degree-attribute correlations and assortativity of links with respect to the attribute value [8]. Researchers have also shown how friendship-paradox-based perception biases can impact social comparison and behavior in networks. For example, while Scissors et al. observed a “like paradox” on Facebook, they found evidence that people care more about who likes their content than they do about like volume [11], while Bollen et al. observed a “happiness paradox” as a correlate of the friendship paradox [10]. Further, Jackson showed that in a network with incomplete information, friendship-paradox-induced perception biases lead to amplified average engagement in the network [12]. Following these lines of research, this paper investigates how the interplay of the friendship paradox and social comparison affects content contribution in online social networks. We specifically consider an online social network where contributions are “posts” or “shares” and where social comparison originates from people receiving feedback and seeing their friends receive feedback. Through simulations over model-generated synthetic networks, we study how, in the absence of any baseline sharing rate differences, the local friendship paradox and people's responses to feedback-related social comparisons combine to shape individual and overall contribution rates in a network.
2 Model Formulation
We propose a simplified model of sharing in an online social network. In our model, nodes of the network represent participants in the social network, and links of the network represent friendship ties between participants. The adjacency matrix $A_{u,v} = 1$ if individuals $u$ and $v$ are friends and $0$ otherwise. The degree of individual $u$ is simply the sum of the adjacency matrix over all potential friends $v$:
$$d_u = \sum_v A_{u,v} \qquad (1)$$
In a sequence of time steps $t = 0, 1, 2, \ldots$, each individual $u$ shares content at a rate $r_u(t)$, which $u$ adjusts over time in response to the feedback that their content receives and the feedback that $u$'s friends' content receives. We
will specify the details of this adjustment procedure below. For the moment, it suffices to note that $r_u(t)$ can be interpreted as the number of pieces of content that $u$ shares in simulated week $t$. For convenience, we define two local averaging procedures in the network. First, we define an average of a quantity $f_v(t)$ over the neighbors of $u$ at time $t$:
$$\langle f_v(t) \rangle_{N,u} = \frac{\sum_v A_{u,v} f_v(t)}{d_u} = \frac{\sum_v A_{u,v} f_v(t)}{\sum_v A_{u,v}} \qquad (2)$$
We also define an average of $f_v(t)$ over the pieces of content produced by neighbors of $u$ at time $t$:
$$\langle f_v(t) \rangle_{NC,u} = \frac{\sum_v A_{u,v} r_v(t) f_v(t)}{\sum_v A_{u,v} r_v(t)} \qquad (3)$$
With these averaging procedures defined, we can now express the local structural friendship paradox $\sigma_u$ as the ratio of the average degree of $u$'s friends to $u$'s degree:
$$\sigma_u = \frac{\langle d_v \rangle_{N,u}}{d_u} \qquad (4)$$
Meanwhile, we can define the sharing bias $\beta_u(t)$ as the ratio of the average degree over pieces of content produced by $u$'s friends to the average degree of $u$'s friends:
$$\beta_u(t) = \frac{\langle d_v \rangle_{NC,u}(t)}{\langle d_v \rangle_{N,u}} \qquad (5)$$
The product of the local structural friendship paradox $\sigma_u$ and the sharing bias $\beta_u(t)$ gives the effective local paradox, which measures the ratio of the average degree over the pieces of content produced by $u$'s friends to $u$'s own degree:
$$\sigma_u^{\mathrm{eff}}(t) = \frac{\langle d_v \rangle_{NC,u}(t)}{d_u} = \frac{\langle d_v \rangle_{N,u}}{d_u} \times \frac{\langle d_v \rangle_{NC,u}(t)}{\langle d_v \rangle_{N,u}} = \sigma_u \times \beta_u(t) \qquad (6\text{--}8)$$
In our model, for simplicity, we assume that all people receive the same amount of feedback per friend: there is no engagement bias over individuals in the network. With this simplification, $\sigma_u^{\mathrm{eff}}(t)$ also measures the ratio of the amount of feedback on content produced by $u$'s friends to the amount of feedback on $u$'s own content. In other words, the feedback disparity is equal to the effective local paradox. We assume that each participant $u$ updates their sharing rate in response to the social comparison implied by the feedback disparity $\sigma_u^{\mathrm{eff}}(t)$:
$$r_u(t+1) = \begin{cases} 0, & r_u(t) = 0 \\ r_u(0), & r_u(t) > 0,\ \sum_v A_{u,v} r_v(t) = 0 \\ \mathrm{drf}(\sigma_u^{\mathrm{eff}}(t))\, r_u(0), & r_u(t) > 0,\ \sum_v A_{u,v} r_v(t) > 0 \end{cases} \qquad (9)$$
Outcome 1 means that, if a person's sharing rate hits 0, it will stay at 0. Outcome 2 means that, if all of a sharer's friends have stopped sharing, the sharer will continue sharing at their baseline rate. Outcome 3 means that, if the prior two conditions do not hold, a person will modify their sharing rate by multiplying their baseline sharing rate $r_u(0)$ by a disparity response function $\mathrm{drf}(x)$ that depends upon the feedback disparity (or effective local paradox). We initialize all $r_u(0) = 1$, meaning that all individuals in our simulations begin sharing at the same rate. In this setting, there is no sharing bias: $\beta_u(0) = 1$ for all $u$, because the average degree over content produced by $u$'s friends is identical to the average degree of $u$'s friends. This means that the initial feedback disparity for all individuals $u$ is entirely driven by the local structural paradox $\sigma_u$. However, when people update their sharing rates in light of these feedback disparities, that can induce sharing biases. Thus, over time, the feedback disparity experienced by $u$ will be driven by a combination of the local structural paradox and inhomogeneous sharing patterns amongst $u$'s friends. Since $u$ will update his or her sharing rate in light of this feedback disparity, this can result in a continuous feedback loop between the sharing bias and the feedback disparity. Thus, in the absence of any initial sharing or engagement bias, the sharing ecosystem as a whole can enter spirals of declining sharing. Our goal in this paper will be to clarify, through simulations, when this is likely to occur and, conversely, when sharing can be sustained long term. Note that, since each step in our model is deterministic, the overall sharing trajectory is uniquely determined by the structure of the network and the choice of the disparity response function $\mathrm{drf}(x)$. Hereafter, we abbreviate our model as FIT, short for Friendship Paradox Induced Sharing Trajectories.
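As a minimal sketch of how Eqs. (1)-(9) compose into one synchronous update, consider the following; this is our own illustration under the paper's stated assumptions (dense 0/1 adjacency matrix, no engagement bias), not the authors' released code.

```python
import numpy as np

def fit_step(A, r, r0, drf):
    """One synchronous FIT update (Eq. 9).

    A   : (n, n) symmetric 0/1 adjacency matrix (numpy array)
    r   : (n,) current sharing rates r_u(t)
    r0  : (n,) baseline sharing rates r_u(0)
    drf : disparity response function, vectorized over numpy arrays
    """
    d = A.sum(axis=1)                       # degrees d_u (Eq. 1)
    friends_rate = A @ r                    # sum_v A_uv r_v(t)
    # Content-weighted mean friend degree (Eq. 3 with f_v = d_v); the
    # epsilon guard is harmless: rows with friends_rate == 0 are
    # overwritten by outcome 2 below.
    mean_deg_content = (A @ (r * d)) / np.maximum(friends_rate, 1e-12)
    sigma_eff = mean_deg_content / np.maximum(d, 1)   # Eqs. 6-8
    r_next = drf(sigma_eff) * r0                      # outcome 3
    r_next = np.where(friends_rate == 0, r0, r_next)  # outcome 2
    r_next = np.where(r == 0, 0.0, r_next)            # outcome 1
    return r_next

# Example usage with a negative unit step DRF (Eq. 10), threshold K = 1.5:
# drf = lambda x: np.where(x <= 1.5, 1.0, 0.0)
```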
2.1 Florentine Families Network
We will now illustrate our model on a commonly studied small-scale network: the “Florentine Families Network” (FFN). In this network, nodes represent prominent families in 15th-century Florence and links represent business or marital ties between them [14]. Although the structural properties of the FFN are distinct from online social networks, it is commonly used to get a qualitative understanding of the behavior of various centrality measures. Along the same lines, it can be used to gain a qualitative understanding of who shares over time in our model. In the diagrams in Fig. 1, we show results of running the FIT simulation on the FFN with $\sigma^* = 1.5$, meaning that people whose friends receive more than 1.5 times the feedback that they receive will stop sharing in the next step. Green nodes represent those who are sharing, and black nodes represent those who have stopped sharing. Numerical values inside each green node signify the feedback disparity value at that step. The approach converges after 2 steps, with 6 nodes ceasing to share by step 1 and one more ceasing to share by step 2.
Fig. 1. Illustrating the FIT model using the Florentine Families Network.
3 Simulation Setup
We run the FIT model on synthetically generated Erdős–Rényi [15] and Barabási–Albert networks [16]. Power-law degree distributions, such as those observed in Barabási–Albert networks, amplify local paradoxes relative to random networks. Thus, we simulate on these two classes of networks in order to investigate the impact of the stronger paradoxes in the Barabási–Albert case. In future work, we plan to apply our model to real-world networks.
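For reference, both network families can be generated with networkx; this is our own sketch, mapping a target average degree $\langle k \rangle$ to $p = \langle k \rangle/(n-1)$ for Erdős–Rényi and to $m \approx \langle k \rangle/2$ for Barabási–Albert.

```python
import networkx as nx

n = 3000
for avg_deg in (20, 60, 100, 140, 200):
    er = nx.gnp_random_graph(n, p=avg_deg / (n - 1), seed=42)
    ba = nx.barabasi_albert_graph(n, m=avg_deg // 2, seed=42)
    # Sanity check: realized average degrees should be close to avg_deg.
    print(avg_deg,
          sum(dict(er.degree()).values()) / n,
          sum(dict(ba.degree()).values()) / n)
```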
Fig. 2. Types of disparity response functions simulated.
We set up our simulations as follows:
1. We generate 10 Erdős–Rényi and 10 Barabási–Albert networks, each with 3000 nodes and network densities going from 0.7% to 7% (average degrees going from 20 to 200).
2. We generate a series of different disparity response functions (DRFs), as shown in Fig. 2. All functions guarantee that for a feedback disparity of 1, the sharing rate adjusts to 1. We model 3 monotonically decreasing functions of feedback disparity, and a convex function (see the code sketch following the simulation steps below):
(a) A negative unit step function, where the sharing rate adjusts to 0 if a threshold $K$ is exceeded. We generate 21 different function instances with $K$ taking values from 0.5 to 2.6.
$$\mathrm{drf}(x) = \begin{cases} 1, & x \le K \\ 0, & x > K \end{cases} \qquad (10)$$
(b) A negative sloped linear function with intercept $K$. People take on sharing rates going from $K$ for a feedback disparity of 0 down to 0 for a disparity of $K/(K-1)$. We generate 3 different function instances, with the intercept $K$ taking values from 1.05 to 2.05.
$$\mathrm{drf}(x) = K - (K-1)x \qquad (11)$$
(c) A multiplicative inverse function.
$$\mathrm{drf}(x) = \frac{1}{x} \qquad (12)$$
(d) A convex exponential sum function, where people experience a drop in sharing rate as feedback disparity goes from 0 to 2, then start to experience an increase in sharing rate when the disparity becomes greater than 2; we cap the sharing rate at 10 per step.
$$\mathrm{drf}(x) = \min\left( \frac{e^{1 - 1.5x} + e^{0.35x - 1}}{e^{1} + e^{-1}},\ 10 \right) \qquad (13)$$

Each simulation follows these steps:
1. Initiate each person in the network with $r_u(0) = 1$. We assume no engagement bias in terms of engagement per friend per post. This leads to the initial feedback disparity being equal to the local paradox.
2. At each step $t+1$:
(a) For all people, we use the current feedback disparity value $\sigma_u^{\mathrm{eff}}(t)$ as input to the disparity response function $\mathrm{drf}(\sigma_u^{\mathrm{eff}}(t))$ to adjust each person's sharing rate $r_u(t+1)$, as detailed in the model formulation section.
(b) Using the network's revised sharing rates, we adjust the feedback disparity scores of all people, $\sigma_u^{\mathrm{eff}}(t+1)$.
3. We run the simulation until it converges to no people altering their sharing rates, or until the simulation has run for 52 steps. If each step signifies sharing rates over one week, 52 steps capture sharing trajectories over a year's time.
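The DRF families referenced above translate directly into code; this is our own rendering of Eqs. (10)-(13), with the linear DRF clipped at 0 as the text implies.

```python
import numpy as np

def drf_step(x, K):                 # Eq. (10): negative unit step
    return np.where(x <= K, 1.0, 0.0)

def drf_linear(x, K):               # Eq. (11): negative sloped linear,
    return np.maximum(K - (K - 1.0) * x, 0.0)   # clipped at 0 (our reading)

def drf_inverse(x):                 # Eq. (12): multiplicative inverse
    return 1.0 / x                  # (undefined at x = 0)

def drf_convex(x):                  # Eq. (13): convex exponential sum, capped at 10
    val = (np.exp(1.0 - 1.5 * x) + np.exp(0.35 * x - 1.0)) / (np.e + np.exp(-1.0))
    return np.minimum(val, 10.0)
```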
Fig. 3. On the left, negative unit step function simulations run over 5 Erdős–Rényi networks of different average degrees. On the right, local friendship paradox distributions for simulated networks of different average degrees.
4 Results

4.1 Negative Unit Step Function Simulations
The left plot in Fig. 3 reports results for negative unit step disparity response functions on Erdős–Rényi networks. We observe that:

1. Almost all simulations show a gradual, monotonically decreasing sharing rate trending towards an equilibrium point.
2. A threshold of 1, where people stop sharing if they experience any feedback disparity, leads to almost no one sharing in the network for all 10 simulated Erdős–Rényi networks.
3. Simulations with thresholds greater than 1 have meaningfully higher sharing rates in equilibrium.
4. When the threshold exceeds 1, the equilibrium rate of sharing grows with the average degree in the network.
Fig. 4. Negative unit step function simulations run over 9 Barabási–Albert networks of different average degrees.
As shown in the right plot in Fig. 3, point 3 is explained by the narrow distribution of local paradoxes in Erdős–Rényi networks, and point 4 is explained by that distribution being wider the smaller the average degree in the network. As with Erdős–Rényi networks, simulations with step-function thresholds on Barabási–Albert networks show monotonic declines in sharing rates, as shown in Fig. 4. However, the decline is more pronounced and generalizes across a wider band of thresholds, with a threshold of 2.5 showing a decline rate nearing 40% within 15 simulation steps. As shown in the right plot in Fig. 3, this wider effect is explained by Barabási–Albert networks having both a larger average and a larger standard deviation in their local paradox distributions. To further understand why we observe a gradual asymptotic decline trend, we generate the panel of plots in Fig. 5. Two points to note for these plots:

1. The average of the weighted local paradox $\sigma_u^{\mathrm{eff}}(t)$ is computed for all people $u$ who have friends who share content, to capture the extent to which sharing bias has magnified the local paradox.
2. While feedback disparity is equivalent formulaically to the weighted local paradox, feedback disparity is only computed for a person $u$ who shares content at time $t$ and who also has friends who share content. So its average only includes weighted local paradox scores of a subset of the people included in the average weighted local paradox score.

We observe the following in Fig. 5:

1. As sharing rates adjust between steps, we see a fraction of people for whom feedback disparity increases due to sharing-rate adjustments shifting content creation towards higher-degree connections. These are the people who will drive a decline in the next step. As such, the rate of decline of sharing depends upon the percentage of people for whom disparity increased in the preceding step.
2. Network-level average feedback disparity declines in tandem with sharing rates. This does not equate to it declining for the group of people who are still sharing; rather, the decline is due to survivorship bias, with remaining sharers having lower disparities than those who have churned. Thereby, while their disparities go up as shown in the middle plots, the average disparity in the network conversely goes down.
3. The weighted paradox grows beyond the initial local paradox value in step 0, showing how the local paradox, in combination with the disparity response function, drives an increase in sharing bias towards high-degree nodes.
4.2 Continuous Monotonically Decreasing DRFs
We have observed that all negative step function simulations showed monotonically decreasing sharing rates that trend towards a non-zero or zero asymptote. We now explore whether, as a class of functions, monotonically decreasing DRFs
Fig. 5. Sharing and disparity patterns of a Barabási–Albert graph for negative step function simulation thresholds of 1 (top) and 2 (bottom).
generally lead to monotonically decreasing sharing rates over time. We do this by looking at other types of monotonically decreasing continuous disparity response functions, such as the multiplicative inverse function and negative sloped linear functions (illustrated in Fig. 2). It is straightforward to see that, in the first step of simulating a monotonically decreasing DRF, there can be an increase in sharing rates in cases where people with favorable disparities have sufficiently high drf values. For example, if someone with an effective local paradox less than 1 shares a thousand posts instead of 1, then in the first step, it is feasible for overall sharing to go up. Therefore, when making statements about monotonic declines, we are focused on the long-term trend, beyond the first-step adjustment.
Fig. 6. Simulation results of continuous DRFs over 4 Barabási–Albert networks, for drf(x) = 1/x (blue), drf(x) = 1.05 − 0.55x (green) and drf(x) = 1.55 − 0.55x (orange).
A continuous-DRF simulation assumption: fractional sharing rates are not discretized in the simulation. This is a more accurate reflection of continuous functions, capturing scale-free effects on sharing trends that would be disrupted by probabilistic churn. In other words, if we were to set the initial sharing rate to 100, then a 90% decline leads to an average rate of 10, rather than a sharing rate of 0 with 90% chance if sampling were applied. We observe the following key results for continuous DRFs in Fig. 6:

1. Monotonically decreasing DRFs do not uniformly lead to a monotonically decreasing sharing rate. While continuous monotonically decreasing DRFs do overall end up in a declining state, the declines are actually not monotonic. This is evident in the network with an average degree of 20 in Fig. 6, where the multiplicative inverse DRF drives a sharing rate increase between steps 2 and 3. This can happen through the occasional churn of a friend v with higher feedback than friend u, due to v's friends having even higher feedback, without u churning, leading to a decrease in feedback disparities, which can bring sharing rates up.
2. Continuous monotonically decreasing DRFs can reach non-zero equilibrium states, rather than declining towards zero.
Fig. 7. Sharing and feedback disparity patterns of a Barabási–Albert network for drf(x) = 1/x (top) and drf(x) = 1.55 − 0.55x (bottom).
To better understand why continuous DRFs converge around non-zero equilibria, we plot sharing rates, average weighted local paradox, average disparity, and how these values change over a simulation run for a Barabási–Albert network with 1000 nodes and an average degree of 200 per node, in Fig. 7. As shown in Fig. 7, we observe that convergence does not necessarily happen via a gradually slowing decline in sharing rate to an asymptote, but can also occur through a narrowing oscillatory convergence around an equilibrium point, as is the case for drf(x) = 1/x.
4.3 Convex DRFs
In this section, we look at overall sharing rates over time in the case of a convex function expressed by the convex DRF in Eq. 13, whose curve is shown in the bottom left quadrant of Fig. 2.
Fig. 8. Simulation results over 9 Barabási–Albert networks of different average degrees of the convex function drf(x) = min((e^{1−1.5x} + e^{0.35x−1})/(e^1 + e^{−1}), 10).
We observe in Fig. 8 that:

1. Convex DRFs can lead to sustained or increased sharing rates.
2. The direction of impact of convex DRFs is network dependent. The same convex DRF can trigger an increase in sharing rates for one network and a decrease for another. This is a particularly stark conclusion that can prove valuable for real-world use cases.
3. Whether it causes an increase or decrease, there is always an equilibrium point for any of the networks simulated, rather than sharing trending upward indefinitely or declining to 0.

4.4 Positive Unit Step Function DRFs
So far we have shown that monotonically decreasing DRFs drive reductions in overall sharing rates, while convex functions can lead to growth in sharing rates over time depending on the network they act on. In this section, we alternatively simulate a positive unit step function where people only share if their feedback disparity exceeds a threshold. In other words, people are discouraged from sharing if they receive more feedback than their friends do. We observe a somewhat counterintuitive simulation result, with the positive step function also leading to a gradual decline in sharing rates over time. This is because when people with a lot of feedback stop sharing, feedback disparities start to decrease in the network, and some people who were encouraged to share are now discouraged from doing so. We show this effect in Fig. 9. Note how, opposite to negative step functions, the positive step function threshold leads to nodes experiencing a decrease in their disparity between steps.
Fig. 9. Sharing and feedback disparity patterns of Barabási–Albert networks for a threshold-1 positive step function (top) and a threshold-1 negative step function (bottom).
5 Concluding Remarks

5.1 Summary
In this paper, we proposed and analyzed a model for studying the role that the friendship paradox can play in a social network where people can contribute content, their friends can provide feedback on that content, and everyone can see that feedback. By not factoring engagement bias into our model, we focused entirely on how two variables can drive long-term sharing trajectories:

1. Network structure, which results in local structural friendship paradoxes.
2. How people respond to social comparison, encoded in a disparity response function.

We studied Erdős–Rényi and Barabási–Albert networks, and different types of DRFs, including negative and positive step functions, continuous monotonically decreasing DRFs, and continuous convex DRFs. Our key findings are:

1. All DRFs lead to a sharing equilibrium point where sharing rates do not increase or decrease further.
2. Monotonicity of DRFs was overall predictive of a gradual decline in sharing rates, for both monotonically increasing and decreasing DRFs.
3. Convex DRFs can sustain or grow sharing rates.
4. The direction of impact of convex DRFs is network dependent. The same convex disparity response function can lead to a growth in sharing rates in one network and a decline in another.

These findings suggest that the opportunity for sustainable sharing trajectories lies in complementary types of sharing incentives, with some incentives adapting to unfavorable social comparisons and others benefiting from favorable ones, and with some degree of tailoring of such incentives to the network they apply to.
5.2 Practical Implications for Real-World Networks
To put forward an initial foundation for a more complete, network-structure-informed theory of social sharing, we have made multiple simplifications in this research. We have chosen to focus on synthetic networks, and did not attempt to alter engagement rate biases. Also, we have not factored in the multitude of factors that can modulate social sharing propensities in the real world, including the effort involved in sharing, activity thresholds, viewer preferences, and other factors that play a role in social sharing [17–20]. With that in mind, more work is needed to understand what our model can say about real-world online networks. Despite that, due to the universality of the friendship paradox as a structural bias that can have an incremental impact on engagement biases and other sharing effects, our methods can be useful as a tool for sustaining sharing in online social networks, even in the presence of adverse social comparison effects. There are 3 key opportunity areas for online networks:

1. Engagement: Networks can run causal analyses to infer their disparity response functions, and use that to up-level exposure for people with less favorable responses to their perceived feedback disparities.
2. Network Structure: Given that different networks respond differently to the same disparity response function, networks can target growing connections that work best with their respective disparity response functions.
3. Product Design: The choice of making feedback on content widely visible creates a trade-off between providing a useful signal for content creators and consumers, and triggering adverse social comparison effects impacting sharing rates. Adopting systems where feedback visibility is removed or constrained to balance these trade-offs can provide value. Prior research has made this point in relation to social comparisons on Facebook [21].

Further, our model can be useful for studying other types of friendship-paradox-induced social comparisons in real-world offline networks, in cases where such comparisons are known to mediate behavioral responses.
5.3 Next Steps and Research Opportunities
There are multiple opportunities for future research. Of note are:

1. Understanding how the model behaves in real online networks.
2. Given the convex DRF result, what happens when we fix degree distributions and alter assortativity? Or fix both and alter transsortativity [22]?
3. How does the model behave when we incorporate other sharing drivers and incentives?
References
1. Feld, S.L.: Why your friends have more friends than you do. Am. J. Sociol. 96(6), 1464–1477 (1991)
2. Pires, M.M., Marquitti, F.M.D., Guimarães, P.R., Jr.: The friendship paradox in species-rich ecological networks: implications for conservation and monitoring. Biol. Conserv. 209, 245–252 (2017)
3. Nettasinghe, B., Krishnamurthy, V., Lerman, K.: Diffusion in social networks: effects of monophilic contagion, friendship paradox, and reactive networks. IEEE Trans. Netw. Sci. Eng. 7(3), 1121–1132 (2019)
4. Nettasinghe, B., Krishnamurthy, V.: “What do your friends think?”: efficient polling methods for networks using friendship paradox. IEEE Trans. Knowl. Data Eng. 33(3), 1291–1305 (2019)
5. Geng, J., Li, Y., Zhang, Z., Tao, L.: Sentinel nodes identification for infectious disease surveillance on temporal social networks. In: IEEE/WIC/ACM International Conference on Web Intelligence, pp. 493–499 (2019)
6. Hodas, N., Kooti, F., Lerman, K.: Friendship paradox redux: your friends are more interesting than you. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 7(1), pp. 225–233 (2013)
7. Higham, D.J.: Centrality-friendship paradoxes: when our friends are more important than us. J. Complex Networks (2019)
8. Kooti, F., Hodas, N.O., Lerman, K.: Network weirdness: exploring the origins of network paradoxes. In: Eighth International AAAI Conference on Weblogs and Social Media (2014)
9. Eom, Y.H., Jo, H.H.: Generalized friendship paradox in complex networks: the case of scientific collaboration. Sci. Rep. 4(1), 1–6 (2014)
10. Bollen, J., Gonçalves, B., van de Leemput, I., Ruan, G.: The happiness paradox: your friends are happier than you. EPJ Data Sci. 6(1), 4 (2017)
11. Scissors, L., Burke, M., Wengrovitz, S.: What's in a Like? Attitudes and behaviors around receiving Likes on Facebook. In: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work and Social Computing, pp. 1501–1510 (2016)
12. Jackson, M.O.: The friendship paradox and systematic biases in perceptions and social norms. J. Polit. Econ. 127(2), 777–818 (2019)
13. Macy, M.W., Evtushenko, A.: Threshold models of collective behavior II: the predictability paradox and spontaneous instigation. Sociol. Sci. 7, 628–648 (2020)
14. Breiger, R.L., Pattison, P.E.: Cumulated social roles: the duality of persons and their algebras. Soc. Netw. 8(3), 215–256 (1986)
15. Erdős, P., Rényi, A., et al.: On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci. 5(1), 17–60 (1960)
16. Barabási, A.L., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509–512 (1999)
17. Oliveira, T., Araujo, B., Tam, C.: Why do people share their travel experiences on social media? Tour. Manage. 78, 104041 (2020)
18. Shu, W., Chuang, Y.-H.: Why people share knowledge in virtual communities. Soc. Behav. Personal. Int. J. 39(5), 671–690 (2011)
19. Lee, H., Park, H., Kim, J.: Why do people share their context information on Social Network Services? A qualitative study and an experimental study on users' behavior of balancing perceived benefit and risk. Int. J. Hum Comput Stud. 71(9), 862–877 (2013)
20. Berger, J.A., Buechel, E.: Facebook therapy? Why do people share self-relevant content online? (2012)
21. Burke, M., Cheng, J., de Gant, B.: Social comparison and Facebook: feedback, positivity, and opportunities for comparison. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–13 (2020)
22. Ngo, S.-C., Percus, A.G., Burghardt, K., Lerman, K.: The transsortative structure of networks. Proc. R. Soc. A 476(2237), 20190772 (2020)
Dynamics of Toxic Behavior in the Covid-19 Vaccination Debate

Azza Bouleimen1,2(B), Nicolò Pagan2, Stefano Cresci3, Aleksandra Urman2, and Silvia Giordano1

1 University of Applied Science and Arts of Southern Switzerland (SUPSI), Lugano, Switzerland
[email protected]
2 University of Zürich (UZH), Zurich, Switzerland
3 Institute of Informatics and Telematics, National Research Council (IIT-CNR), Pisa, Italy
Abstract. In this paper, we study the behavior of users on Online Social Networks in the context of Covid-19 vaccines in Italy. We identify two main polarized communities: Provax and Novax. We find that Novax users are more active, more clustered in the network, and share less reliable information compared to the Provax users. On average, Novax are more toxic than Provax. However, starting from June 2021, the Provax became more toxic than the Novax. We show that the change in trend is explained by the aggregation of some contagion effects and the change in the activity level within communities. In fact, we establish that Provax users who increase their intensity of activity after May 2021 are significantly more toxic than the other users, shifting the toxicity up within the Provax community. Our study suggests that users presenting a spiky activity pattern tend to be more toxic.
Keywords: Online Social Networks · Communities · Toxicity · Covid-19

1 Introduction
Nowadays, Online Social Networks (OSNs) are a space of broad discussions and exchange of ideas capable of influencing public opinion [2] and actions in several domains [27]. Often, the structure of the network reproduces the partisanship of users in real life: they tend to aggregate in groups with similar stances on a specific topic [8]. However, OSNs, by the design of their algorithms, are thought to favor echo chambers and political polarization [28]. The latter can have harmful consequences on the online discussion and is susceptible to translating into real-world violence [9]. To address this noxious phenomenon, designing suitable interventions, both on the platform and on the user level, is of paramount importance [6]. Consequently, a thorough understanding of the dynamics of user behavior is particularly relevant [25], especially in times of
crises such as pandemics, wars, and important political moments like national elections [1,5]. Some studies highlight differences in online behavior across users of different political spectra or personality traits [13,24]. For instance, in [13], the authors conducted a survey showing, in the case of Finland, that those farthest from the political center are more likely to leverage provocation for online interactions. Additionally, supporters of left-wing parties favor protective behavior, unlike right-wing supporters. Other studies, leveraging digital trace data, infer a set of features from users' social media accounts and build machine learning models to classify them according to, for instance, their ethnicity identification or political affiliation [18,22]. These models achieve high performance, but they do not provide insights into the user features that characterize each class, which is highly important when it comes to understanding user behavior online. In [7], the authors study online user polarization during the 2014 Soma disaster in Turkey, revealing that political polarization can hamper the effectiveness of response operations. The analysis of the existing literature reveals some studies that focus on specific contexts like accidents or disasters [7], and others that do not focus on a particular event. Still, none of them convey a complete understanding of users' behavior online and what shapes it. To shed light on this complex phenomenon, a broader and multi-focused approach is needed, so as to build a complete understanding of complex human dynamic behavior on OSNs. In this context, our present contribution consists of observing user behavior, in particular toxicity, during the Covid-19 online vaccination discussion in Italy. Toxicity is among the harmful behaviors that can be identified online and that significantly reduce the quality of the conversation. It is defined as a “rude, disrespectful, or unreasonable comment that is likely to make someone leave a discussion” [19]. Through this study, we aim to get fresh insights into the possible differences between groups of users and the kind of interventions to design in order to maintain a healthy and safe online discussion. We start from the observations made in [4], where the authors identified two polarized communities in the network. They observed the toxicity within these two communities and found that one of the communities is on average more toxic than the other over time. However, at a certain point in time, the trend is inverted. We significantly extend the research of [4] by highlighting important differences in the behavior and structure of the two main communities (Sect. 3). We then analyze the evolution of toxicity within the communities across time and provide explanations for the observed change in the trend (Sect. 4). Finally, we summarize the findings and draw insights into differences in users' behavior in the context of controversial discussions online (Sect. 5).
2 Data Processing
We base our study on the VaccinItaly dataset [20], a collection of ∼12 million tweets relative to the Covid-19 pandemic in Italy. The dataset covers the period
from Dec. 20th 2020 to Oct. 22nd 2021. The data collection was based on a set of Italian keywords related to vaccines, chosen in a way that reflects the discussion of both pro-vaccination and anti-vaccination users [20]. Around half of the tweets are retweets, and the other half is almost equally split between original tweets, replies, and quotes. The dataset involves 551,816 unique users, 86% of whom have fewer than 20 tweets in the dataset. To obtain a set of users that is representative of the discussion, we selected a subset that abides by a set of criteria adapted from [26]. Specifically, we define as core users those users that: (i) have at least 20 tweets in the dataset, and (ii) published at least one tweet per week for a consecutive duration of 3 months. This choice allows us to identify 9,278 core users (1.7%), who are responsible for nearly half of the tweets in the whole dataset. To obtain toxicity scores for the tweets, we used Detoxify [10], a neural network model that achieved top scores in multiple Kaggle competitions and that was profitably used in several studies to compute toxicity scores for social media content [15,21]. Detoxify includes a multilingual model for non-English texts; for example, it achieved an AUC of 89.18% for the Italian language [10]. The model returns a score ranging from 0 (low toxicity) to 1 (high toxicity). In addition to toxicity, we also measured the credibility of the links shared in the dataset by resorting to NewsGuard [16]. NewsGuard is an organization whose trained journalists track data points on news websites, which are then used to automatically score the credibility of a website. Scores range from 0 (low credibility) to 100 (high credibility). Thanks to this strategy we obtained credibility scores for 58% of the links shared by the core users.
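For illustration, scoring Italian tweets with Detoxify's multilingual model looks roughly like the snippet below; this is a sketch based on the library's documented interface, and the authors' exact pipeline may differ.

```python
from detoxify import Detoxify  # pip install detoxify

model = Detoxify('multilingual')            # covers Italian among other languages
tweets = ["Questo è un esempio di tweet.",
          "Un altro tweet da valutare."]
scores = model.predict(tweets)['toxicity']  # one score in [0, 1] per tweet
```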
3 Network Communities
The purpose of this study being to characterize the dynamics of user behavior on OSNs, we build the retweet network of the core users. It is a directed weighted network where nodes represent the core users and edges represent retweets; the weight of an edge represents the number of times one core user retweeted another. The obtained network has 8,975 nodes and 643,907 weighted edges, and is built from 2,214,149 retweets (303 core users exclusively retweeted non-core users and are therefore absent from the final graph).
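A sketch of how such a network can be built and partitioned follows; this is our illustration, where `retweets` (an iterable of (retweeter, retweeted) ID pairs among core users) is an assumed input, and networkx's Louvain implementation stands in for whichever one the authors used.

```python
import networkx as nx
from collections import Counter

edge_weights = Counter(retweets)          # (u, v) -> number of retweets u -> v
G = nx.DiGraph()
G.add_weighted_edges_from((u, v, w) for (u, v), w in edge_weights.items())

# Louvain with the resolution parameter reported in the paper.
communities = nx.community.louvain_communities(
    G, weight='weight', resolution=0.7, seed=0)
communities.sort(key=len, reverse=True)   # largest communities first
```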
3.1 Community Detection
We applied the Louvain community detection algorithm [3] (resolution parameter equal to 0.7) to the retweet network, obtaining two main communities which together gather 87% of all nodes in the network. The third largest community contains 384 users; the remaining nodes were partitioned into 191 additional communities of much smaller size. We qualitatively analyzed the tweets of the nodes with the highest authority score in the two main communities [12]. This score reflects the tendency of a node to be a source of information in the network. In one community, the
nodes with high authority scores tweet content in favor of the vaccines, while in the other community, the nodes with high authority scores are against the adoption of vaccines and the government's measures to contain the spread of the virus. The same observations hold when analyzing the most retweeted tweets in every community or the tweets of the most central users in the two communities. Hence, we deduce that one community is dominated by a pro-vaccination discourse and the other by an anti-vaccination discourse. In the following we will refer to these two communities as the Provax community (3,980 nodes) and the Novax community (3,831 nodes). A representation of the retweet network and the communities is shown in Fig. 1. When inspecting the third largest community, and some of the smaller ones, we noticed that they group users favorable to the vaccines or news pages that share reliable information about the vaccines. In the rest of the paper, we will refer to the 192 additional communities as Other.
Fig. 1. Retweet network of the Covid-19 Italian online debate. The Provax community is blue-colored, the Novax one is orange-colored, and the remaining users are grey-colored. (Color figure online)
3.2 Stability of Communities
In addition to analyzing the network in Fig. 1, obtained by aggregating all data over the whole data collection period, we also built a separate network for each month. For each of these networks we then obtained the main communities by means of the Louvain algorithm. Then, for every two consecutive months, we computed the Jaccard similarity between the communities found in the respective networks. The two communities with the highest similarity were matched,
assuming that they represent the same community that “evolved” from one month to the next (a sketch of this matching is shown below). This dynamic analysis, covering a time period spanning from December 2020 to October 2021, reveals that the majority of the Provax and Novax members remain in their respective communities throughout time. The few members that change community move to other marginal communities. Nevertheless, a minority of users switches between the Provax and Novax communities. Given that the communities are overall stable over time, it is reasonable to study the evolution of user behavior over time for the two communities calculated on the retweet network of Fig. 1.
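This is a minimal sketch of the month-to-month matching, assuming communities are represented as sets of user IDs; the paper does not specify tie-breaking, so a simple greedy argmax over Jaccard similarity is used here.

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def match_communities(prev, curr):
    """Match each community of month t to its most similar one in month t+1."""
    matches = []
    for i, c_prev in enumerate(prev):
        j, score = max(((j, jaccard(c_prev, c_curr))
                        for j, c_curr in enumerate(curr)),
                       key=lambda pair: pair[1])
        matches.append((i, j, score))    # (old index, new index, similarity)
    return matches
```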
3.3 Differences Between Provax and Novax
In this section, we discuss some characteristics of the Provax and Novax communities related to their activity, credibility, network structure, and toxicity. In Table 1, we compare most of these characteristics.

Table 1. Observations on the Provax and Novax communities.

|        | users | tweets    | tweets w/ URLs | credibility | edges   | density | clustering coefficient | reciprocity |
| Provax | 3,980 | 1,684,722 | 256,830        | 85.86       | 150,152 | 0.009   | 0.14                   | 0.0339      |
| Novax  | 3,831 | 3,698,329 | 574,807        | 53.43       | 397,827 | 0.0271  | 0.24                   | 0.0665      |
In terms of activity, Provax and Novax present different behavior. We notice that, even though there are slightly fewer Novax users, they post about twice as much as the Provax users (3,698,329 vs. 1,684,722 tweets). In fact, the average Novax user posted around 965 tweets, while a Provax user posted around 423 tweets. When it comes to sharing URLs, both communities have a similar rate of tweets containing URLs, around 15%. In addition, the mean credibility of the shared content, as reflected by the NewsGuard scores, is very different between the two communities: Provax have a mean score of 85.86, indicating links from high-credibility domains, while Novax have a mean score of 53.43, suggesting content from questionable sources. On the network structure level, the two communities present additional differences. For instance, the Novax community has more than twice the number of edges (retweet ties) of the Provax community. The Novax community is about three times denser than the Provax one, 1.7 times more clustered, and has twice as many reciprocal ties. Overall, we conclude that the Novax users are much more active, denser, and more clustered than the Provax users. The users against the adoption of the vaccines are grouped in one main community (Novax), while the ones in favor are split into multiple communities of different sizes, as seen when analyzing the Other communities in the partition. Moreover, the quality of the information circulating in the Novax community is significantly lower than that in the Provax one.
4 Investigating Toxic Behaviors
Next, we study the daily average toxicity of the text written by users belonging to the Provax and Novax communities. Figure 2 shows that, from the beginning of the data collection until June 2021, the toxicity level of the Provax is lower than that of the Novax. Interestingly, this trend is inverted starting from mid-June, when the Provax become noticeably more toxic than the Novax. Meanwhile, the toxicity of Other remains significantly lower than that of both main communities throughout the collection period. Overall, the toxicity of the Provax, Novax, and Other communities increases across time, as shown by the Mann-Kendall test for trends [11]. However, the Provax toxicity increases faster than the Novax one. To test the significance of the observed difference we ran a CUSUM test [17], finding that even though the difference is small, it is highly significant (p-value < 0.001). In summary, Fig. 2 shows that Provax and Novax have statistically different behavior, characterized by different trends in the evolution of toxicity within the two communities. Additionally, Fig. 2 suggests that the two polarized communities (Provax and Novax) tend to present more extreme behavior than the remaining communities. Finally, the overall increase in toxicity across time in Fig. 2 might suggest an increase in toxicity within individual users. This hypothesis is rejected by the results of the per-user Mann-Kendall test for trends (a sketch of this test is shown after the hypotheses below): we found that ∼86% of users in the network exhibit no trend, ∼10% of users increase their toxicity over time, and ∼4% decrease it. In the remainder of this section, we formulate and test the two following hypotheses to explain the change in the toxicity trends of the Provax and Novax communities observed in Fig. 2:

– Hypothesis 1 (H1): An increase in the interaction level between Provax and Novax happened around May – June 2021, which led to a contagion effect.
– Hypothesis 2 (H2): A change in activity within the communities happened around May – June 2021, such that the most active users after that period are more toxic than average.
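The per-user trend test referenced above can be run, for instance, with the pymannkendall package; the tooling choice is our assumption, since the paper only cites the test [11].

```python
import pymannkendall as mk  # pip install pymannkendall

# daily_tox: dict mapping user_id -> list of that user's daily mean toxicity
increasing = decreasing = no_trend = 0
for user, series in daily_tox.items():
    result = mk.original_test(series)   # namedtuple with trend, p, slope, ...
    if result.trend == 'increasing':
        increasing += 1
    elif result.trend == 'decreasing':
        decreasing += 1
    else:
        no_trend += 1
```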
4.1 Testing Hypothesis H1
We compute the daily interaction rate between the Novax and Provax communities. Specifically, we consider that user A interacted with user B if A retweeted, quoted, or replied to B. Figure 3 shows an increase in the rate of interactions from Provax to Novax starting from May 2021, while the interactions from Novax to Provax seem to decrease starting from that period. These observations are confirmed by the Mann-Kendall test for trends (p-value < 0.05). In addition, we calculated the same interaction rates for the most toxic users, defined as those whose toxicity falls in the top quartile of their community. We found that, overall, the most toxic users in both communities have higher interaction rates with the opposite community. Note that, after May, this rate increases for the
322
A. Bouleimen et al.
Fig. 2. Daily average toxicity in the written text for Provax and Novax communities. A moving average on a 7-day window was applied to the plot.
most toxic Provax, and decreases for the most toxic Novax. More precisely, before May 2021, the most toxic Provax users have on average a 6.5 times higher rate of interactions with Novax than the rest of the Provax. During the same period, the most toxic Novax users have a 2.8 times higher rate of interaction with Provax than the rest of the Novax. Whereas, after May 2021, the gap in interaction rates between the most toxic Provax and the rest of the Provax doubles. In fact, the most toxic Provax users become ∼13 times more likely to interact with Novax, compared to the rest of the Provax users. This observation is not valid for the Novax users after May 2021: the most toxic Novax users are on average two times more likely to interact with Provax, compared to the rest of the Novax.
Fig. 3. Daily interaction rate from the Provax to the Novax (in blue) and from the Novax to the Provax (in orange). (Color figure online)
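The interaction rates plotted in Fig. 3 can be computed directly from an interaction edge list. The sketch below is one minimal way to do so; the file and column names (interactions.csv with date, src_user, dst_user; communities.csv mapping user to community) are hypothetical, not the paper's actual data layout.

import pandas as pd

# One row per retweet, quote, or reply from src_user to dst_user
interactions = pd.read_csv("interactions.csv", parse_dates=["date"])
community = pd.read_csv("communities.csv").set_index("user")["community"]

interactions["src_comm"] = interactions["src_user"].map(community)
interactions["dst_comm"] = interactions["dst_user"].map(community)

def daily_rate(df, src, dst):
    # Share of interactions initiated by `src` users that target `dst` users, per day
    out = df[df["src_comm"] == src]
    total = out.groupby(out["date"].dt.date).size()
    cross = out[out["dst_comm"] == dst]
    hits = cross.groupby(cross["date"].dt.date).size()
    return (hits / total).fillna(0.0)

provax_to_novax = daily_rate(interactions, "Provax", "Novax")
novax_to_provax = daily_rate(interactions, "Novax", "Provax")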
Considering the aforementioned observations and the trend in Fig. 3, we can conclude that, for the Provax community, there is an increase over time in the rate of interaction with the Novax community, and that the most toxic Provax users are the ones most likely to interact with the Novax community. This likelihood increases even further after May, when we start noticing the change in the toxicity trends in Fig. 2. Therefore, a possible contagion effect could have occurred from Novax to Provax, which would explain the increase in toxicity observed within the Provax community starting from June. However, the number of Provax-Novax interactions remains limited compared to the overall interactions of each community (∼3% for Provax and ∼5% for Novax). Consequently, even if a contagion effect exists, it is unlikely that it alone could drive the change in the toxicity trend shown in Fig. 2. We thus conclude that Hypothesis H1 cannot be fully retained and other possible explanations should be explored.

4.2 Testing Hypothesis H2
To investigate the possible impact of a change in activity on the toxicity of the communities, we plot in Fig. 4 the number of tweets posted per day for the Provax, Novax, and Other communities. We notice that the activity levels of Provax and Novax are similar until the end of April, when we see an increase in the activity of Novax and a decrease in the activity of Provax. This makes the difference in tweeting volume between the two communities substantial starting in May. Throughout the whole collection period, the activity of Other is lower than that of both Provax and Novax and slowly decreases across time.

Based on these observations, we split the users within Provax and Novax into two subgroups: users that increased their activity (in terms of number of tweets) after May 2021 and users that decreased it. We plotted the toxicity evolution of these subgroups for both the Provax and Novax communities; there was no statistically significant difference between each pair of subgroups. We therefore decided to measure the activity of users differently. Instead of counting the number of tweets they posted before and after May 2021, we compute the number of tweets posted by every user divided by the number of days during which that user was active. This measure reflects the activity intensity of the user and their tendency to have spikes in their tweeting pattern (a sketch of this computation follows below). With this split, only 18% of the Provax users increase their activity intensity after May 2021, while 62% of the Novax users do so. The evolution of the toxicity of the newly defined subgroups is plotted in Fig. 5.

Fig. 4. Number of tweets posted per day for the Provax, Novax, and Other communities.

Fig. 5. Average daily toxicity of the Provax and Novax subgroups.

In Fig. 5, we can see that the Provax users who increase their activity intensity are significantly more toxic than the ones who decrease it. The difference is less clear between the Novax users who increase and decrease intensity after May 2021. All four trends in the plot are increasing (Mann-Kendall p-value < 0.001). In Table 2 we present the mean toxicity scores for the four subgroups. We can see that, on average, Provax users that increase intensity after May are more toxic than the other Provax users (0.15 vs 0.11). In addition, their toxicity increases by 0.03 after May, while the toxicity of the Provax users that decrease intensity increases by only 0.02. Meanwhile, the Novax subgroups do not show any significant difference between the two corresponding subgroups before and after May 2021. Figure 5 and Table 2 support that Provax users who increase intensity after May 2021 are more toxic than the ones who decrease the intensity of their activity. Therefore, we accept Hypothesis H2. This observation is likely to explain the substantial increase of toxicity within the Provax community that exceeds that of the Novax community in Fig. 2. In fact, given the small difference in toxicity between the Novax users who increase and decrease intensity after May 2021, the overall increase of toxicity within Novax is not as pronounced as the one observed within the Provax community. Overall, the change in the trend between the Provax and Novax communities observed around June most likely results from the aggregation of both effects: contagion (H1) and changes in activity within communities (H2). Provax and Novax had different evolutions of activity, characterized by different toxicity levels of the users that increased intensity after May 2021.
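The activity intensity measure introduced above (tweets posted divided by the number of active days) and the resulting subgroup split can be computed as in the sketch below; the file and column names are hypothetical, not taken from the paper.

import pandas as pd

# One row per tweet, with hypothetical 'user' and 'date' columns
tweets = pd.read_csv("tweets.csv", parse_dates=["date"])
cutoff = pd.Timestamp("2021-05-01")

def intensity(df):
    # Tweets per active day: total tweets / number of distinct days with >= 1 tweet
    grouped = df.groupby("user")["date"]
    return grouped.size() / grouped.apply(lambda s: s.dt.date.nunique())

before = intensity(tweets[tweets["date"] < cutoff])
after = intensity(tweets[tweets["date"] >= cutoff])

# Split users present in both windows by whether their intensity rose after May 2021
both = before.index.intersection(after.index)
increased = after[both] > before[both]
print("share of users who increased intensity:", increased.mean())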
Table 2. Mean toxicity of the Provax and Novax subgroups. * Provax/Novax users who increase/decrease activity intensity after May 2021.

Group of users               Share of users in community   Total mean toxicity   Mean toxicity before May 2021   Mean toxicity after May 2021
Provax increase intensity*   0.18                          0.15                  0.14                            0.17
Provax decrease intensity*   0.82                          0.11                  0.10                            0.12
Novax increase intensity*    0.62                          0.13                  0.12                            0.13
Novax decrease intensity*    0.38                          0.12                  0.12                            0.12
5 Discussion and Conclusions
In this work, we studied the behavior of users on social media in the context of controversial topics. From a dataset on the Covid-19 vaccine discussion in Italy, we identified 9,278 core users and built the corresponding retweet network. Leveraging the Louvain community detection algorithm, we identified two main communities: Provax and Novax. The remaining communities in the partition are much smaller but are primarily in favor of the vaccination campaign in Italy. The analysis of the communities revealed that they are stable over the whole period covered by the dataset. The Novax users are more active and share less reliable information compared to the Provax users. They form groups that are denser and more clustered than the Provax ones. Moreover, while most of the users against the adoption of the Covid-19 vaccine belong to one main community (Novax), the users in favor of the vaccines are spread across several communities in the network. This suggests that users against the Covid-19 vaccination are more engaged in the discussion, more clustered together, and have a higher potential for coordination than users in favor of the vaccination.

Measuring the toxicity within the network, we found that the overall toxicity increases over time. At the community level, Provax and Novax are significantly more toxic than the remaining smaller communities. This suggests, in line with other research [14, 23], that more polarized communities tend to get more extreme. In addition, we found that, on average, Novax users are more toxic than Provax users. However, starting from June 2021, the Provax community becomes more toxic than the Novax one, suggesting a possible increase in the toxicity of the Provax users. Going more in depth, we rejected this hypothesis, as most of the users do not present any trend in the evolution of their toxicity over time. Alternatively, we found that the change in the observed trend is mainly caused by the fact that the overall activity of Provax decreases after May 2021, yet the Provax users who increase the intensity of their activity after that date are on average more toxic, driving the community's average toxicity up. This phenomenon is exacerbated by a possible small contagion effect from Novax to Provax: the interaction rate from Provax to Novax increases starting from May 2021.
Our work has several implications. First, the differences in the observed behavior between Provax and Novax highlight the complex interplay between users' opinions and their collective behavior. We suggest it is necessary to further explore whether similar observations can be made in communities divided by other opinion cleavages and, if so, to examine what determines the observed differences. Second, even if the toxicity within the communities increases over time, this does not translate into an increase in the toxicity of individuals. The drivers of the change in a user's toxicity remain unknown. However, measuring the activity intensity of the users revealed that an increase in intensity is correlated with a higher toxicity level. Users that present higher spikes in their activity patterns are potentially more toxic members of the network. This spiky activity pattern might indicate a behavior triggered by particular external events rather than the expression of a steady, continuous involvement in the online discussion. Possible future research directions include studying the reaction of users to specific events and understanding the reasons behind the decrease in activity of Provax after May 2021.

Acknowledgments. This work is partially funded by the Swiss National Science Foundation (SNSF) via the SINERGIA project CARISMA (grant CRSII5 209250), https://carisma-project.org.
References

1. Alyukov, M., Kunilovskaya, M., Semenov, A.: Wartime media monitor (warmm2022): a study of information manipulation on Russian social media during the Russia-Ukraine war. In: Proceedings of the 7th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pp. 152–161 (2023)
2. Anstead, N., O'Loughlin, B.: Social media analysis and public opinion: the 2010 UK general election. J. Comput.-Mediat. Commun. 20(2), 204–220 (2015)
3. Blondel, V.D., Guillaume, J.L., Lambiotte, R., Lefebvre, E.: Fast unfolding of communities in large networks. J. Stat. Mech.: Theory Exp. 2008(10), P10008 (2008). https://doi.org/10.1088/1742-5468/2008/10/P10008
4. Bouleimen, A., Pagan, N., Cresci, S., Urman, A., Nogara, G., Giordano, S.: User's reaction patterns in online social network communities. In: The 2023 NetSci Satellite on Communities in Networks (ComNets'23) (2023)
5. Budak, C.: What happened? The spread of fake news publisher content during the 2016 US presidential election. In: The World Wide Web Conference, pp. 139–150 (2019)
6. Cresci, S., Trujillo, A., Fagni, T.: Personalized interventions for online moderation. In: Proceedings of the 33rd ACM Conference on Hypertext and Social Media, pp. 248–251 (2022)
7. Ertan, G., Comfort, L., Martin, Ö.: Political polarization during extreme events. Nat. Hazards Rev. 24(1), 06022001 (2023)
8. Gaines, B.J., Mondak, J.J.: Typing together? Clustering of ideological types in online social networks. J. Inf. Technol. Politics 6(3–4), 216–231 (2009)
9. Gallacher, J.D., Heerdink, M.W., Hewstone, M.: Online engagement between opposing political protest groups via social media is linked to physical violence of offline encounters. Soc. Media + Soc. 7(1), 2056305120984445 (2021)
10. Hanu, L., Unitary team: Detoxify. GitHub. https://github.com/unitaryai/detoxify (2020)
11. Kendall, M.: Rank Correlation Methods (1955)
12. Kleinberg, J.M., et al.: Authoritative sources in a hyperlinked environment. In: SODA, vol. 98, pp. 668–677 (1998)
13. Koiranen, I., Koivula, A., Malinen, S., Keipi, T.: Undercurrents of echo chambers and flame wars: party political correlates of social media behavior. J. Inf. Technol. Politics 19(2), 197–213 (2022)
14. Lee, J., Choi, Y.: Effects of network heterogeneity on social media on opinion polarization among South Koreans: focusing on fear and political orientation. Int. Commun. Gaz. 82(2), 119–139 (2020)
15. Maleki, M., Arani, M., Buchholz, E., Mead, E., Agarwal, N.: Applying an epidemiological model to evaluate the propagation of misinformation and legitimate COVID-19-related information on Twitter. In: Thomson, R., Hussain, M.N., Dancy, C., Pyke, A. (eds.) SBP-BRiMS 2021. LNCS, vol. 12720, pp. 23–34. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-80387-2_3
16. NewsGuard: NewsGuard homepage. https://www.newsguardtech.com/
17. Page, E.S.: Continuous inspection schemes. Biometrika 41(1–2), 100–115 (1954)
18. Pennacchiotti, M., Popescu, A.M.: A machine learning approach to Twitter user classification. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 5, pp. 281–288 (2011)
19. Perspective API: Using machine learning to reduce toxicity online. https://perspectiveapi.com/how-it-works/
20. Pierri, F., et al.: VaccinItaly: monitoring Italian conversations around vaccines on Twitter and Facebook. arXiv preprint arXiv:2101.03757 (2021)
21. Rossetti, M., Zaman, T.: Bots, disinformation, and the first impeachment of US President Donald Trump. PLoS ONE 18(5), e0283971 (2023)
22. Singh, M., Bansal, D., Sofat, S.: Behavioral analysis and classification of spammers distributing pornographic content in social media. Soc. Netw. Anal. Min. 6, 1–18 (2016)
23. Strandberg, K., Himmelroos, S., Grönlund, K.: Do discussions in like-minded groups necessarily lead to more extreme opinions? Deliberative democracy and group polarization. Int. Polit. Sci. Rev. 40(1), 41–57 (2019)
24. Tadesse, M.M., Lin, H., Xu, B., Yang, L.: Personality predictions based on user behavior on the Facebook social media platform. IEEE Access 6, 61959–61969 (2018)
25. Tardelli, S., Nizzoli, L., Tesconi, M., Conti, M., Nakov, P., Martino, G.D.S., Cresci, S.: Temporal dynamics of coordinated online behavior: stability, archetypes, and influence. arXiv preprint arXiv:2301.06774 (2023)
26. Trujillo, A., Cresci, S.: Make Reddit great again: assessing community effects of moderation interventions on r/The_Donald. Proc. ACM Hum.-Comput. Interact. 6(CSCW2), 1–28 (2022)
27. Tufekci, Z.: Twitter and Tear Gas: The Power and Fragility of Networked Protest. Yale University Press (2017)
28. Van Bavel, J.J., Rathje, S., Harris, E., Robertson, C., Sternisko, A.: How social media shapes polarization. Trends Cogn. Sci. 25(11), 913–916 (2021)
Improved Change Detection in Longitudinal Social Network Measures Subject to Pattern-of-Life Variations

L. Richard Carley(B) and Kathleen M. Carley
Carnegie Mellon University, Pittsburgh, PA 15213, USA [email protected]
Abstract. This paper describes the challenges posed by pattern-of-life variations when carrying out automated detection of abnormal events (change detection) in longitudinal (over-time) social network data sets using standard social network measures. In this paper we present a new scheme for substantially removing pattern-of-life variations from longitudinal social network measures. This new approach is based on a model in which pattern-of-life variations are modeled as time-dependent periodic multiplicative weights on the likelihood of initiating a new post in a social network. Unfortunately, analysis of real-world social network data reveals that the time-dependent weights themselves change over time. Therefore, an approach for adaptively determining the time-dependent periodic multiplicative weights has been developed. A complete methodology for Adaptive Multiplicative Compensation for Pattern-of-Life variations is described, and the methodology is tested on a suitable social media data set. The impact of pattern-of-life variations on the test over-time data set is reduced by up to a factor of 4X by the algorithm presented. The impact on the occurrences of false positive events (labeling a time point as a "change" when it is not) and on the occurrences of false negative events (labeling a time point as "normal" when it really represented a change) is clearly visible in the test data set.

Keywords: Over-Time Social Networks · Change Detection · Social Network Measures · Pattern-of-Life Variations · Fourier Analysis
1 Introduction

Change Detection, in the case of groups of human beings, is the process of detecting when there is a relatively sudden break from the ongoing normal behavior [2]. Note, historically, change detection was first developed in the context of controlling the quality of manufactured goods (e.g., [1, 2]); but in this paper we only consider its application to observations of groups of humans and their interactions. The availability of data about the behavior of groups of humans has increased exponentially in the last 50 years as more and more of the interactions between humans are mediated through electronic devices that can record information about the social network structure and, in some cases, the information being communicated. For example, the use of
e-mail as a significant method for communications within some groups of humans created data about which human sent what message to which other humans (the social network structure), when (over-time information), and the text of the message and any attachments (the content of the message) [3]. The development of cell phones and later smart phones for communication between people created data about which human talked to which other human (the social network structure) and when (over-time information), and which human sent text messages to which other humans [4]. As a final example, consider the social media platforms of today. Again, they create data about which human sent what message to which other humans (the social network structure), when the information was posted (over-time information), and the text of the message and any links to other messages, videos, or other websites (the content of the message) [5]. Because many social media sites employ relatively short message formats, the social network structure of the group is particularly important in human groups connected through social media platforms.

Change Detection is important because changes in the observed social network structures may indicate changes within the underlying community of people and may even predict or indicate significant events or behaviors. The ability to systematically automate and rapidly detect such changes has the potential to enable the anticipation of, early warning of, and/or faster response to significant events in society. "Change detection" has emerged as an important operation that can be performed automatically on many types of human social media data. Thus, there are huge economic and societal security benefits from being able to improve the accuracy of automated change detection algorithms, which is the focus of this paper.

In the Background section, this paper begins by describing an example of the current state of the art for automating change detection, roughly following [3]. It then discusses the difficulties posed by pattern-of-life variations when carrying out this automated change detection on over-time social network data sets using standard social network measures, and approaches that have been used to overcome these challenges. In the Approach section, the motivation for adopting a periodic multiplicative weighting model for the impact of pattern-of-life variations on measures of over-time social network data is discussed. The need for a method for adaptively determining the periodic multiplicative weights is explained. Finally, the complete Adaptive Multiplicative Compensation for Pattern-of-Life Variations methodology is described. Then, an example longitudinal social network data set is analyzed with and without the multiplicative compensation to illustrate the advantage of this approach. In addition, the prior-art approach for compensating for pattern-of-life variations is applied to this same data set and the improvement is compared. In the Conclusions, the results are summarized and the limitations of the approach are reviewed.
2 Background

2.1 Automated Flow for Change Detection

All of the social network data sets described in Sect. 1 are longitudinal data sets because they record the time at which each communication occurred. At any instant in time, there are either unconnected trivial social networks (one human connected to one other human)
or no social network (no one is connected). It is well known that, in order to estimate the actual social network from time-stamped data sets such as the ones mentioned above, it is necessary to aggregate over a time period in order to form a more complete and more accurate estimate of the true human social network [3, 6]. The most common approach is to aggregate over uniform (equal-duration) time windows, and then to start the next time window where the current one ended (non-overlapped sampling); e.g., [12, 13]. This requires only that the duration of the window and the precise instant at which the aggregation starts be specified. The result will be a human social network associated with each time window (a.k.a. time step).

Once the aggregated social network structure in a given time window has been formed, various social network analytic measures can be calculated and associated with that time step [10]. For example, one of the simplest network measures that is widely used for change detection is the message traffic (# of messages posted in a time window). But, because the social network structure of the human group is so clearly exhibited in social media data sets, change detection should also consider changes in more complex network measures, e.g., diameter, betweenness, etc. Change detection can be carried out on any weighted combination of the calculated measures. Or, change detection can be carried out on each measure separately, and the resulting decision can be a logical function of the decisions of each of the individual change detection calculations on separate measures. Note, selection of the measure or set of measures is typically performed by having a subject matter expert review historical data to see which measure or combination of measures are the most reliable indicators of the type of "change" that is to be detected [8].

Once a measure or a set of measures has been selected, a method for carrying out automated change detection must be selected. One of the most common is the Shewhart X-bar method [1, 2], which is based on a model in which "normal" variations of the process that generates the over-time measure data are modeled as a Gaussian white noise signal, characterized only by its mean and standard deviation. Then, by specifying the nominal probability of false-positive errors, any individual observation of the selected measure can be classified as either normal or as a change from normal. The key to Shewhart's X-bar approach is that typically neither the mean nor the standard deviation of the measure is known. Hence, Shewhart's X-bar approach is typically applied to social media measures by using a finite number of prior time samples of the measure to calculate the sample mean and sample standard deviation, which is straightforward since the signal is assumed to be Gaussian white noise. Thus, another factor that must be specified is how many trailing samples should be used to calculate the sample mean and the sample standard deviation. One important note is that the Shewhart X-bar method is often applied even when the measure in question does not exactly exhibit a Gaussian probability distribution. The primary deviation in the real distributions is that often the tails of the actual distributions are truncated relative to the tails of a Gaussian probability distribution, resulting in the actual probability of false positive decisions being higher or lower than the one specified.
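As a concrete illustration, the following minimal sketch applies the trailing-window Shewhart X-bar rule just described; the window length, threshold, and example series are illustrative choices, not values from the paper.

import numpy as np

def shewhart_changes(x, window=9, k=2.5):
    # Flag time points that deviate from the trailing sample mean by more
    # than k trailing sample standard deviations (Shewhart X-bar rule)
    x = np.asarray(x, dtype=float)
    flags = []
    for t in range(window, len(x)):
        prior = x[t - window:t]                    # trailing window, excluding x[t]
        mu, sigma = prior.mean(), prior.std(ddof=1)
        if sigma > 0 and abs(x[t] - mu) > k * sigma:
            flags.append(t)
    return flags

# A level shift at index 12 should be flagged as a change
series = [5, 6, 5, 7, 6, 5, 6, 7, 6, 5, 6, 6, 20, 19, 21]
print(shewhart_changes(series))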
Note, the choice of how many prior points to use in estimating the sample mean and sample standard deviation requires some experimentation. The larger the number of prior time points used, the more accurate the estimates are (assuming the Gaussian white noise assumption is correct). But choosing a very large number of points can cause false positive errors in cases where the distribution has a mean that changes over time. The operation of Shewhart's method for Change Detection is illustrated in Fig. 1 below.

Fig. 1. Illustration of the Shewhart X-bar method. Estimate the mean and standard deviation of the underlying process for the measure using the value of the measure at time points 2–10. Then, time point 10 is identified as a "change" if it is more than X sample standard deviations away from the sample mean in either direction. X is chosen based on the specified acceptable probability of false positive errors. For time point 11, time points 3–11 would be used to estimate the sample mean and sample standard deviation.

Other common methods for change detection are variations on Shewhart's method. For example, Fisher's Scan Statistic [7] focuses on using likelihoods of future observations rather than assuming a Gaussian distribution. The Exponentially Weighted Moving Average (EWMA) is similar to Shewhart, but it weights prior samples with a decaying exponential in terms of their contribution to calculating the sample mean and sample standard deviation [8]. EWMA is able to provide more accurate estimates of the mean and standard deviation, with less likelihood of a false positive when there is a slowly changing mean. Cumulative Sum (CUSUM) control charts [9] are able to detect gradual changes in the mean by integration; however, since human social networks nearly always have some change in the mean over time, CUSUM is not as appropriate as Shewhart or EWMA for automated change detection on human social network data. In this paper, Shewhart's method as described above will be used for determining when a "change" occurs.

2.2 Pattern-of-Life Variations

While the method described above will do a good job performing automated change detection when the "normal" variations of the chosen measure are approximately statistically independent of time, it can be severely challenged when the "normal" variations
of the mean of the measure are periodically varying in time. For example, human social network data collected using the measure "# of posts/time window" often exhibit a strong cyclic variation with time of day because of the normal human diurnal rhythm. In general, such periodic variations in human social networks are called pattern-of-life variations. In nearly all human social network data, we find strong periodic variations in most measures with time of day, day of week, season, etc. And, when carrying out change detection, pattern-of-life variations can be much larger than the changes to be detected. In Shewhart's X-bar algorithm, this results in a very large sample standard deviation relative to the desired change detection threshold, which results in a high false negative probability.

One possible approach is to simply choose a time window for aggregation that is an integer multiple of the period of any pattern-of-life variations. For many social networks this would certainly require time windows for aggregation that were 1 or more days wide. And, because there are also changes in behavior on weekends, it might be necessary to choose time windows that were an integer multiple of 7 days long. Finally, there are seasonal variations in many social media network data sets, which would require that time windows be an integer multiple of years! Remember, the motivation for developing the ability to systematically automate and rapidly detect changes in human social network data was that it offers the potential to enable the anticipation of, early warning of, and/or faster response to significant events in society. That cannot be achieved with long time aggregation periods, because a change event that occurs in the middle of a time window cannot possibly be detected until the end of that time window. This means that, at a minimum, the average latency to detect a change event is half of the duration of the time window chosen for aggregation. For this reason, analysts are highly motivated to choose time windows that are shorter than the periods of typical pattern-of-life variations. Hence, the development of algorithms that can diminish the impact of pattern-of-life variations without requiring an increase in the time window duration is extremely important.

Considering the measure "# of posts/time window," it seems reasonable to assume that time could periodically modulate the likelihood of communicating, which would result in periodic variations in the # of posts. Note, other measures are also likely modulated by the overall change in traffic with time; however, measures that are normalized to the total traffic might not show such a pattern. The multiplicative model assumes that the propensity of humans to post a message on a social network is modulated by the time of day wherever they are currently located. If this propensity as a function of time of day were constant for all humans and across time, then this could be corrected by normalizing the # of posts by the propensity as a function of time of day or of the day of the week or of the season. This has the advantage that, even if the mean # of posts/day were changing during the period of analysis, the correction would continue to hold as long as the propensity vs. time did not change.

Unfortunately, we carried out an experiment on a representative longitudinal Twitter dataset with strong daily variations in # of posts vs. time of day, described in [6]. The measure chosen was # of posts/time window and the time window chosen was 4 h. This measure vs. time is shown over a period of 30 days in Fig. 2, below. In this figure, it is clear that the distribution of # of posts vs. time of day did change significantly during the
data record. In particular, the limited # of posts in the early part of the record means that the time-of-day effect was quite small, while in the period in late October of 2020 there were a large # of posts in all 4 h time periods, which means that the relative propensity in the early record is different from the relative propensity in the late October period. This is a clear signal that we cannot remove the time-of-day variations with a fixed multiplicative weighting as a function of time of day.

2.3 Existing Approaches to Remove Pattern-of-Life Variations Using Fourier Transforms
Fig. 2. Over-time plot of the # of posts in each 4 h window for a Twitter data set over a 30 day period. Labels on the horizontal axis are the date and time stamps. The x axis begins at midnight on 14Oct2020 and the data ends at 1600 on 9Nov2020.
Fourier transforms have been used to identify dominant periodic behavior in over-time social network measures [3, 12, 13]. Figure 3 below shows the discrete Fourier transform (DFT) of the over-time data shown in Fig. 2. We can clearly see that there are some strong frequency components showing up in the DFT plot. There are two very large peaks in the DFT shown in Fig. 3. The one at a frequency of 0.1667 is the periodicity due to the time-of-day impact on the # of posts. The very large peak at 0 frequency is
because the original data has a positive mean, since it consists of only positive values, and the broadening of that peak at extremely low frequencies is an indication that the measure is relatively small at the beginning and end of the period and much larger in the middle. Applying the methodology described in [3, 12, 13], one would select all DFT components with power above 400 and apply an inverse DFT to them to generate a nominal pattern-of-life offset signal vs. time [13], which would be subtracted from the data in Fig. 2 in order to create the DFT-"filtered" over-time data in Fig. 4.

During the late October period in Fig. 2, when the activity in this data set is high, the pattern-of-life impact causes the signal to jump from values between 35 and 65 at its maxima down to values between 3 and 11 at its minima. The swing of the pattern-of-life variation in late October is 25–35, and the ratio of max to min ranges from 4–6X. We can compare this to the "filtered" over-time measure in Fig. 4. Here, the pattern-of-life variation at the beginning and end of the period has actually increased to a swing of about 5 from min to max, and has dropped to a swing of about 15–25 from min to max in the late October period, roughly a 2X improvement over the swing in Fig. 2 in late October.
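A minimal sketch of this DFT-based correction is given below. The power normalization and threshold here are illustrative (so the threshold value is not directly comparable to the 400 quoted above), the input file name is hypothetical, and the zero-frequency component is deliberately excluded from the offset so that subtracting it does not remove the mean of the series.

import numpy as np

def dft_periodic_offset(x, power_threshold):
    # Keep only DFT components whose power exceeds the threshold, then
    # invert back to the time domain to obtain a pattern-of-life offset
    x = np.asarray(x, dtype=float)
    spectrum = np.fft.rfft(x)
    power = np.abs(spectrum) ** 2 / len(x)
    kept = np.where(power > power_threshold, spectrum, 0)
    kept[0] = 0  # leave the mean in the data rather than in the offset
    return np.fft.irfft(kept, n=len(x))

posts = np.loadtxt("posts_per_4h_window.txt")  # hypothetical input series
filtered = posts - dft_periodic_offset(posts, power_threshold=400.0)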
Fig. 3. Discrete Fourier Transform (DFT) of the over-time data shown in Fig. 2. Horizontal axis is radian frequency and vertical axis is power at each frequency point. Note, the period of the spike at a frequency of 0.166 corresponds to once/day and 0.024 is once/week.
Fig. 4. Applying the method from [3, 13] to reduce the pattern-of-life variations in the over-time values of the measure. Axes are the same as Fig. 2.
3 Approach

In this section, the motivation for adopting an adaptive periodic multiplicative weighting model for the impact of pattern-of-life variations on measures of over-time social network data is discussed. The proposed methodology for Adaptive Multiplicative Compensation for Pattern-of-Life Variations is described. Then, the same longitudinal social network data set described in Sect. 2 is analyzed with the adaptive multiplicative compensation, and the reduction in the pattern-of-life variations is compared with that described in Sect. 2.

As was discussed in [3], applying a periodic multiplicative weighting seems like the most natural way to represent pattern-of-life variations in social network data sets. One can imagine that individuals have a generation rate for social network posts that is a constant based on their personality and on what events have happened that would stimulate them to post messages. A model for the rate at which individuals post messages would then be the product of their natural propensity to post (plus the effect of recent events) with a periodic time-of-day function, i.e., the periodic multiplicative time-of-day variation.

3.1 Adaptive Multiplicative Compensation for Pattern-of-Life Variations

As we saw in Sect. 2, the periodic multiplicative variation representing time of day is not constant throughout time. Therefore, we propose to estimate the periodic multiplicative variations using the data itself, much as was done when carrying out the Shewhart X-bar test. In that case, the mean and variance were estimated by using N previous time
samples of the measure. For implementing the proposed method, one starts by identifying the important pattern-of-life variations that one wishes to remove; for example, using the Fourier techniques above. Let us assume for this example that time-of-day variations are the dominant pattern-of-life variations and that all others can be ignored. Note, in general, one can treat any number of pattern-of-life variations by applying this method to one at a time, starting from the most significant and moving to the least significant.

Assume that the pattern-of-life variation to be removed has a period of M time steps. Then one takes the K periods backward from the current sample, which will start with the current sample and will go backward in time to collect K×M samples. For the example social network data set, the measure is the # of posts in a time period, and since the time aggregation window was 4 h, for daily periods M = 6. For example, let us pick K = 3. In this case, one analyzes the current data point and the 17 prior data points. Note, the larger K is, the more accurate the estimate for the periodic multiplicative values will be, but the more sensitive the change detection will be to a gradually changing mean in the measure being analyzed. From our experience on multiple data sets, K = 3 is a minimum and K can be as large as 10. The "corrected" over-time waveform value is given by

y(i) = x(i) · ( Σ_{n=0}^{KM−1} x(i−n) ) / ( Σ_{n=0}^{K−1} x(i−Mn) )

where the y values are the corrected waveform and the x values are the original waveform. Note, if x(i) = 0, then y(i) = 0 even if the denominator also equals 0. The numerator of the fraction represents the sum of all values from the current value backward in time for K periods, and the denominator represents the sum of the values from the same offset time within a period as the current sample across the K most recent periods. This ratio gives us the multiplicative factor by which to weight data from this offset time in order to "correct" for the predicted pattern-of-life variations. Applying this multiplicative correction to the test data set resulted in a dramatically better reduction in pattern-of-life variations in Twitter data, even when the mean # of Tweets changed from day to day. We also explored applying this adaptive multiplicative correction strategy with both time-of-day and day-of-week variations being corrected. While the day-of-week variations were much smaller in this data set, it was clear that this technique can be applied to many periodic components of the pattern-of-life variations.

3.2 Results on Example Social Network Data Set

The proposed adaptive multiplicative compensation algorithm was applied to the data set used in Sect. 2. To make the comparison clearer, the original data has been plotted with Shewhart X-bar thresholds of ±2.5 standard deviations in Fig. 5. As can be seen, Fig. 5 does pick up the initial rapid increase in activity. However, the large pattern-of-life variations dramatically increase the standard deviation after that, such that subsequent events in the data set are missed.

In Fig. 6, the same plot is shown for the adaptive multiplicative corrected data. First, there are still large swings in the data during the rapid uptick of activity, but the pattern-of-life correction catches up quickly and does an excellent job of removing most of
the pattern-of-life variation. The max-to-min swing has been reduced to the range of 5–10 from its original range of 25–35. This is significantly better than the reduction accomplished by the DFT correction described in Sect. 2, which reduced the swing to a range of 15–25.
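The adaptive multiplicative compensation of Sect. 3.1 can be sketched as below. This follows the corrected-waveform formula as reconstructed above (with M the period in time steps and K the number of trailing periods); the scaling convention and input file name are assumptions, not the authors' exact implementation.

import numpy as np

def adaptive_multiplicative_correction(x, M=6, K=3):
    # y(i) = x(i) * sum_{n=0}^{KM-1} x(i-n) / sum_{n=0}^{K-1} x(i-Mn),
    # with y(i) = 0 whenever x(i) = 0 (Sect. 3.1)
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for i in range(K * M - 1, len(x)):
        if x[i] == 0:
            continue
        overall = x[i - K * M + 1 : i + 1].sum()           # K full periods of samples
        same_phase = sum(x[i - M * n] for n in range(K))   # same offset across K periods
        if same_phase > 0:
            y[i] = x[i] * overall / same_phase
    return y

# 4-hour windows give M = 6 per day; adapt over K = 3 trailing days
posts = np.loadtxt("posts_per_4h_window.txt")  # hypothetical input series
corrected = adaptive_multiplicative_correction(posts)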
4 Conclusions

This paper presented a novel approach to canceling out pattern-of-life induced variations in over-time social network measure data sets. A real-world data set has been presented and analyzed using the measure "number of posts in a time window." The amplitude of the pattern-of-life variations was reduced by a factor of roughly 4X, which is much better than the prior-art Fourier approach, which only achieved a 1.5X reduction. The significance of the reduction for change detection can be seen when comparing Fig. 5 and Fig. 6. The high residual pattern-of-life variations in Fig. 5 cause the 2.5 standard deviation threshold for the Shewhart X-bar detection to be much wider than the equivalent lines in Fig. 6. That means that the probability of a false negative (missing a change) is much higher. In fact, around time point 65 we can see a second event that happened in this data set [6] that is detected correctly in the adaptive multiplicative corrected data but that is missed when applying Shewhart to the original data. The paper demonstrates that, on the selected Twitter data set, the proposed Adaptive Multiplicative Compensation for Pattern-of-Life variations works better than prior-art methods, suggesting that additional research on it is warranted.
Fig. 5. Original measure data values (green) with the positive (red) and negative (blue) thresholds specified by Shewhart X-bar based on the trailing 18 points with a cutoff of ±2.5 standard deviations. Horizontal axis: time period 1 corresponds to Oct 23, 2020 starting at midnight ET, and time period 105 corresponds to Nov 9, 2020 starting at 4 pm ET.
Fig. 6. Adaptive Multiplicative Corrected data values with the positive and negative thresholds specified by Shewhart X-bar as in Fig. 5. Sample times as in Fig. 5.
Acknowledgements. This work was supported in part by the Office of Naval Research (ONR) Awards N00014-21-1-2765 & N00014-21-1-2229. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the ONR or U.S. government.
References

1. Shewhart, W.A.: Quality control. Bell Syst. Tech. J. 6(4), 722–735 (1927)
2. Shewhart, W.A.: Basis for analysis of test results of die-casting alloy investigation. Proc. Am. Soc. Test. Mater. 29, 200–210, app. 1 (1929)
3. McCulloh, I., Carley, K.M.: Detecting change in longitudinal social networks. J. Soc. Struct. 12(3), 1–37 (2011). http://www.cmu.edu/joss/content/articles/volindex.html
4. Striegel, A., Liu, S., Meng, L., Poellabauer, C., Hachen, D., Lizardo, O.: Lessons learned from the NetSense smartphone study. In: Proceedings of HotPlanet 2013, Hong Kong, China (2013)
5. McCulloh, I.A., Johnson, A.N., Carley, K.M.: Spectral analysis of social networks to identify periodicity. J. Math. Sociol. 36(2), 80–96 (2012). https://doi.org/10.1080/0022250X.2011.556767
6. Chee, S.J., Khoo, B.L.Z., Muthunatarajan, S., Carley, K.M.: Vulnerable, threat and influencer characterisation for radicalisation risk assessment. Behav. Sci. Terrorism Political Aggression 1–19 (2023)
7. Fisher, R.A.: Probability likelihood and quantity of information in the logic of uncertain inference. Proc. R. Soc. London 146, 1–8 (1934). https://doi.org/10.1098/rspa.1934.0134
8. Roberts, S.W.: Control chart tests based on geometric moving averages. Technometrics 1, 239–250 (1959)
9. Page, E.S.: Cumulative sum control charts. Technometrics 3, 1–9 (1961)
10. Wasserman, S., Faust, K.: Social Network Analysis: Methods and Applications. Cambridge University Press, New York (1994)
11. McCulloh, I., Carley, K.M.: The link probability model: an alternative to the exponential random graph model for longitudinal data. Carnegie Mellon University Technical Report ISR 10-130, Pittsburgh, PA (2010)
12. Carley, K.M., Reminga, J., Storrick, J., Pfeffer, J., Columbus, D.: ORA User's Guide 2013. Carnegie Mellon University, School of Computer Science, Institute for Software Research, Technical Report CMU-ISR-13-108 (2013)
13. Altman, N., Carley, K.M., Reminga, J.: ORA User's Guide 2018. Carnegie Mellon University, School of Computer Science, Institute for Software Research, Pittsburgh, Pennsylvania, Technical Report CMU-ISR-18-103 (2018)
Uncovering Latent Influential Patterns and Interests on Twitter Using Contextual Focal Structure Analysis Design

Mustafa Alassad(B), Nitin Agarwal, and Lotenna Nwana

COSMOS Research Center, UA-Little Rock, Little Rock, AR, USA {mmalassad,nxagarwal,ltnwana}@ualr.edu
Abstract. The Contextual Focal Structure Analysis (CFSA) model is a sophisticated approach enhancing the discovery and interpretability of focal structure spreaders on social networks, such as the users' dynamic interactions on Twitter. Leveraging the power of the multiplex networks approach, the CFSA model organizes data into multiple layers, allowing for a comprehensive examination of various user activities and their interests within social networks. The core of the CFSA model uses the users-users network layer to capture the complex interactions between users and obtain a deeper understanding of users' engagements on the platform. The CFSA model incorporates hashtag co-occurrence networks as the second layer, which helps unveil the associations and relationships between hashtags mentioned on Twitter. To evaluate the effectiveness of the CFSA model, the study focused on the Cheng Ho disinformation narrative within the Indo-Pacific region. This analysis utilized a substantial dataset comprising over 64,519 tweets and 20,000 hashtags collected between January 2019 and October 2022. The findings revealed users' activities and the supplementary contexts established through their engagement with different hashtags. These insights shed light on the intricate interplay between users, communities, and the content that shapes the discourse within the Indo-Pacific region. Impactful contextual focal structure sets emerged as key elements driving the conversation in the examined disinformation narrative within the dataset. The CFSA model exposes significant patterns of popular hashtags such as "#SouthChinaSea," "#NavyPartnerships," and "#United_States". Some of these hashtags were linked to accounts disseminating information concerning oil and gas exploration and drilling operations, mainly undertaken by the NATO alliances and China.

Keywords: Indo-Pacific region · Cheng Ho · Multiplex Networks · Complex Network · Entropy · Contextual Focal Structures · Oil Fields Drilling · Natural Gas Operations · NATO
1 Introduction

The Indo-Pacific region is currently experiencing extraordinary economic growth, positioning it as the fastest-growing economic region worldwide. In addition, its significance is expected to expand further in the future, where this region will hold substantial importance for major global powers such as the United States of America, the European Union,
and China [1]. Spanning a vast area, the Indo-Pacific region encompasses a multitude of countries with diverse cultural, economic, and geopolitical backgrounds. Notable countries within this region include Australia, Bangladesh, China, India, Indonesia, Japan, Malaysia, Pakistan, the Philippines, Singapore, Taiwan, Thailand, and Vietnam. In terms of economic activity, the Indo-Pacific region already contributes more than one-third of the global economy, a share that is projected to surpass half of the global economy by the year 2040 [1]. Furthermore, this region comprises 65% of the world's oceans and 25% of its land area [2]. The population growth expected in the Indo-Pacific region over the next two decades will generate significant demands in various sectors, including healthcare, education, agriculture and fishery, natural resources, energy, advanced manufacturing, and green infrastructure [1]. These demands will be driven by the growing population and the accompanying need for improved services and infrastructure to support sustainable development.

In contrast, China, boasting the world's largest trading volumes and economy [3], has adopted a stance characterized by contention and confrontation. China's ascent as a dominant and influential power has been met with skepticism and has posed challenges for nations within the Indo-Pacific region and beyond [4]. The intentions and actions of China seek to assert its authority and expand its influence using various tools like Navy fleets, national oil companies' operations, and social network platforms such as YouTube, Twitter, and Facebook [5, 6]. In this context, the narrative of Cheng Ho is a form of disinformation strategically developed to counter allegations of China's mistreatment of Muslim populations and to garner regional backing for China and its geopolitical agenda in areas such as the South China Sea and the Belt and Road Initiative [7]. The objective behind promoting this narrative is to rebuff the accusations of China's oppression of Uyghur Muslims and to generate increased support for China's strategic pursuits, including its assertive actions in the South China Sea.

This research focuses on the spread of misinformation and disinformation on social networks. The main contribution of this study is the advanced identification of the focal sets of users on Twitter who exhibit unique structures and actively seek to influence individuals and communities in order to maximize the dissemination of information among nations in the Indo-Pacific region. The state-of-the-art research by Şen et al. (2016) proposed the Focal Structure Analysis (FSA) model [8], which aims to identify the smallest influential groups of users capable of maximizing information diffusion in social networks. Correspondingly, Alassad et al. (2019) introduced the FSA 2.0 model to enhance the discovery of focal structure sets and overcome limitations in the activities of influential users [9–12]. Their approach involved developing a bi-level decomposition optimization model that identified groups maximizing individual influence in the first level and measured network stability in the second level. However, both FSA and FSA 2.0 solely consider the activities of users in relation to other users. In other words, these two models do not provide information about other activities between users and actions taking place within the network. This paper introduces the CFSA model as an improvement to the existing FSA 2.0 model.
The CFSA model seeks to enhance analysis by uncovering the context, interests, and activities of online users on social networks, such as Twitter, and it aims to provide a
deeper understanding of the dynamics and influences present within any social network. The CFSA model surpasses previous approaches by offering a more comprehensive and interpretable exploration of focal structure sets as presented in this research. The remaining sections of the paper are organized as follows: Sect. 2 presents the research problem statement, outlining the specific issue that the CFSA model aims to address. Section 3 provides details about the Twitter dataset that was utilized to evaluate the performance of the model. It also describes the overall structure of the methodology employed in this study. Section 4 presents the results obtained from the analysis. It concludes that the CFSA model produces interpretable and informative results. Additionally, this section highlights the findings and discusses the benefits of the proposed CFSA model vs. the FSA model in terms of enhancing the quality and interpretability of the FSA sets. Finally, in Sect. 5, the paper concludes by summarizing the key findings and contributions. It also acknowledges the limitations of the study and suggests potential directions for future research.
2 Research Problem Statement

The primary objective of this proposed research is to demonstrate the CFSA model within Twitter networks. By utilizing raw datasets obtained from the Twitter platform, the research aims to address the following problem: developing a systematic model that effectively incorporates the FSA 2.0 model [12] and the multiplex network approach. This model will uncover hidden patterns of activities, interests, and behavior within specific cultural contexts [13]. The approach involves several layers, such as user-mentions, hashtags-hashtags co-occurrence, and hashtags-users, collectively forming participation layers. Furthermore, this research introduces crucial concepts and analysis to explore inquiries such as the following:

• Can conventional methods for community detection effectively identify influential individuals who engage in online discussions and unveil the distinctive contexts of various Twitter communities?
• How does the CFSA model leverage the multiplex network to overcome limitations in available information?
• Lastly, can the CFSA model organize information to facilitate the identification of user activities and provide insights into the motivations and interests of influential users?

The subsequent section provides a detailed examination of the methodology and overall framework employed by the CFSA model.
3 Methodology

In this model, three main components are presented. Initially, the data collection level gathered relevant context from Twitter, encompassing trending hashtags associated with events on the Twitter network. Moving on to the multiplex information matrix structure and CFS sets discovery level, we began with various layers within the multiplex
network, such as users-mentions, hashtags-hashtags co-occurrence, and hashtags-users, represented in an adjacency matrix layer and the participation layer network. Finally, the third level involves the validation and analysis of the CFS sets. This level involves three Ground Truth (GT) and Information Gain (IG) measures to calculate the degree of influence any given CFS set might exert across the entire network structure.

3.1 Multiplex Matrix Structure

In general, the adjacency matrix of an unweighted and undirected graph G with N nodes is an N × N symmetric matrix A = (aij), with aij = 1 if there is an edge between i and j in G and aij = 0 otherwise. The adjacency matrix of a layer graph Gα is an nα × nα symmetric matrix Aα = (aijα), with aijα = 1 only if there is an edge between (i, α) and (j, α) in Gα. Likewise, the adjacency matrix of Gβ is an n × m matrix ρ = (piα), with piα = 1 only if there is an edge between node i and layer α in the participation graph, i.e., only if node i participates in layer α. We call it the participation matrix. The adjacency matrix of the coupling graph GF is an N × N matrix L = (cij), with cij = 1 only if there is an edge between node-layer pairs i and j in GF, i.e., if they are representatives of the same node in different layers. We can arrange the rows and columns of L such that node-layer pairs of the same layer are contiguous and layers are ordered, with the result that L is a block matrix with zero diagonal blocks. Thus, cij = 1, with i, j = 1, …, N, represents an edge between a node-layer pair in layer 1 (Users-Users) and a node-layer pair in layer 2 (Hashtag-Hashtag) if i < n1 and n1 < j < n2. The supra-adjacency matrix Ā is the adjacency matrix of the supra-graph GM. Just as GM, Ā is a synthetic representation of the whole multiplex M. It can be obtained from the intra-layer adjacency matrices and the coupling layer in the following way:

Ā = (⊕α Aα) + L    (1)
where the same consideration as in L applies for the indices; we also define A = ⊕α Aα, which we call the intra-layer adjacency matrix. The CFSA model, built on a bi-level decomposition optimization problem, aims to maximize both the centrality values at the user level and the network modularity values at the network level. Figure 1 illustrates the multiplex network structure, which encompasses the User-User layer based on users-mentions and the Hashtag-Hashtag layer, while the participation layer represents the interconnections between users, mentions, and hashtags as the context in L. At the user level, Eq. (2) defines the objective function used to maximize the centrality values in the multiplex network:

max Σ_{i=1}^{n} Σ_{j=1}^{m} ( δiUU ⊕ βijUH HjHH )    (2)
Here, n is the number of user nodes in layer UU and m is the number of nodes in the hashtag layer HH. δiUU is the sphere of influence for user i in UU, and ⊕ is the direct sum. HjHH is the number of hashtags j in HH connected by an edge to user i in UU. Finally,
βijUH represents the interconnection in L and the links between users and hashtags, where βijUH = 1 if and only if user i in UU has a link with hashtag j in HH, and 0 otherwise. Cu represents the local clustering coefficient, utilized to measure the level of transitivity and the density of any user's direct neighbors, as shown in Eq. (3):

Cu = tu / du    (3)

βijUH ≤ c∗,i  ∀ i, j    (4)
where tu is the adjacency matrix of the network A, du is the adjacency matrix of the complete graph GF with no self-edges as shown in Fig. 1. Equation (4) ensures the model considers the users to have edges in network A.
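To make the construction in Eqs. (1)-(4) concrete, the following is a minimal sketch (not the authors' code) of assembling a supra-adjacency matrix for a two-layer multiplex, under the simplifying assumption that both layers share the same n nodes, so the coupling connects each node to its own replica in the other layer:

```python
# Minimal sketch of Eq. (1): supra-adjacency = block-diagonal intra-layer
# adjacencies (the direct sum) plus the coupling matrix L. Assumes two
# layers over the same n nodes (a simplification for illustration).
import numpy as np

def supra_adjacency(A_uu, A_hh):
    """A_uu, A_hh: n x n intra-layer adjacency matrices (e.g., UU and HH)."""
    n = A_uu.shape[0]
    intra = np.block([[A_uu, np.zeros((n, n))],
                      [np.zeros((n, n)), A_hh]])   # direct sum of the layers
    L = np.block([[np.zeros((n, n)), np.eye(n)],
                  [np.eye(n), np.zeros((n, n))]])  # zero diagonal blocks
    return intra + L
```

For the users-hashtags case described above, where the layers have different node sets, the off-diagonal blocks would instead hold the participation matrix ρ rather than an identity coupling.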
Fig. 1. Multiplex network construction.
Fig. 2. Multiplex layer structure.
At the network level, the model measures every set of users' spheres of influence in the entire network A. This level is designed to measure the users' impact when they join the network A. To measure the influence of the users identified from the user level, we utilized the spectral modularity method proposed in [14]. Furthermore, we utilized a vector parameter c⃗_δi |_{m×k, k≤m} to transfer the users' information between the user level and the network level. The network level was designed based on the following set of equations:

Λ_jx = (1/2m) Tr(ξ_jx A ξ_jx^T)  ∀ j, x    (5)

ξ_jx = { c⃗_δi |_{m×k, k≤m} ∪ δ_jx : c⃗_δi |_{m×k, k≤m} ≠ δ_jx }  ∀ j, x    (6)

μ_jx = max{Λ_1x, Λ_2x, …, Λ_jx}  ∀ j, x    (7)

C_jx^Q = δ_jx(μ_jx)  ∀ j, x    (8)
The objective at the network level is to classify sets of users that maximize the spectral modularity value Λ_jx in each iteration x, as shown in Eq. (5). Similarly, the model searches for the active set of users that would maximize the network's sparsity, as indicated in Eq. (6). A is the multiplex modularity matrix, as shown in Fig. 2. In Eq. (6), ξ_jx ∈ R^{m×k} is the union between the sets of users from the user level, c⃗_δi |_{m×k, k≤m}, and the candidate sets of users δ_jx that maximize the network's modularity values. Equation (7) calculates the spectral modularity values μ_jx. In Eq. (8), C_jx^Q is utilized to transfer the results back to the user level, where δ_jx(μ_jx) is the non-dominated solution that maximized the network's spectral modularity values. C_jx^M is the set of users' maximized modularity values when they joined the network and a vector parameter to interact with the user level and transfer the optimal solution from the network level to the user level. C_jx^M selects the contextual focal structure sets that satisfy all the criteria from both levels at each iteration x.
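The network-level objective in Eq. (5) is a trace form of spectral modularity. The following is a minimal sketch (not the authors' implementation) of evaluating it for a candidate assignment of nodes to communities, with the standard modularity matrix B_ij = a_ij − k_i k_j / 2m standing in for the multiplex modularity matrix A of Eq. (5):

```python
# Minimal sketch of Eq. (5): Q = (1/2m) * Tr(S^T B S) for a one-hot
# community-assignment matrix S, where B is the modularity matrix.
import numpy as np

def spectral_modularity(A, labels):
    """A: n x n adjacency matrix; labels: integer community id per node."""
    k = A.sum(axis=1)                 # node degrees
    two_m = k.sum()                   # 2m = total degree
    B = A - np.outer(k, k) / two_m    # modularity matrix
    n_comms = int(labels.max()) + 1
    S = np.eye(n_comms)[labels]       # n x c one-hot assignment matrix
    return np.trace(S.T @ B @ S) / two_m
```

Iterating this evaluation over candidate sets δ_jx and keeping the maximizer mirrors the role of Eqs. (7) and (8).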
3.2 Data Collection
To gather relevant data on the Cheng Ho narrative, an initial set of key phrases including "BRI," "OBOR," "China," "Indonesia," "Jakarta," "Covid-19," "Covid," "Coronavirus," "Uyghur women," and "Uyghur muslim women" was collected. This data collection process took place at the COSMOS research center at UA Little Rock, USA. The collected data were stored in a MySQL database, and specific queries were executed against the database; the data were stored in different tables and segmented into columns depending on the content. To ensure that personalized tweets based on user history were avoided, the account used for the collection was not logged in, and all browser instances and cookies were cleared before each new crawl job. This data API operation was performed iteratively to obtain the most recent tweets, resulting in a substantial dataset comprising over 64,519 tweets and 20,000 hashtags collected between January 2019 and October 2022, as shown in Table 1.

Table 1. Twitter dataset. User-hashtags layer (UH), users-users layer (UU), hashtag-hashtag layer (HH), edges (E), nodes (N), multiplex network A.

Network  | UH E | UH N | UU E  | UU N  | HH E | HH N | A E   | A N
Cheng Ho | 8557 | 5327 | 15972 | 10358 | 3206 | 3311 | 23095 | 17937
3.3 CFS Sets Validation and Analysis
To evaluate the influence of the CFS sets, several validation processes were implemented. Three different Ground Truth (GT) measures of network stability, average clustering
coefficient, and modularity values [15] were employed to assess various aspects of the network, such as changes in community structures and user links, following the suspension of CFS sets. Additionally, the distance between all sets was measured, determining the uniqueness of each set and the amount of unique information each set delivered. For this, we utilized the Entropy Information Gain (IG) in the analysis. Finally, the ranking correlation coefficient (RCC) was implemented to identify semi-correlated solutions that reflect real-world scenarios (0 < ρ < 0.3).

3.4 FSA 2.0 vs. CFSA
This study aimed to improve the identification and understanding of focal structures within social networks. Previously, Şen et al. (2016) employed a greedy algorithm as part of their advanced model to identify sets of focal structures in the social network [8]. Building upon this, Alassad et al. (2019) refined the process by introducing a decomposition optimization model, FSA 2.0, to identify enhanced sets of focal structures [9]. Figure 3 shows two focal structure sets extracted from a Twitter network; the set in Fig. 3A was identified using the FSA 2.0 model, while the set in Fig. 3B was recognized using the proposed CFSA model. However, it is important to note that the focal set from FSA 2.0 merely illustrates the users' activities without providing further details about any kind of activities between users. For instance, the node "balleralert" is connected to other nodes such as "Freedom," but no additional information is available; this shortcoming confines the depth of analysis. On the contrary, the CFSA model reveals distinct sets of users and sheds light on actions and online contextual activities between users. For instance, Fig. 3B shows users "balleralert" and "Nick" as actively connected and supporting social movements, such as the Black Lives Matter hashtag "#BLM." A remarkable connection is observed between "balleralert" and "liz," where "balleralert" occupies a central position within this set. This link represents the intersection of two distinct communities; one set of users are #BLM supporters, and the other set includes COVID-19 lockdown supporters in the United Kingdom. The position of the user "balleralert" between two different movements on Twitter creates an ego focal set of users who support BLM in the United States and the UK's second COVID-19 safety-related lockdown. Moreover, we compared the Run Time (R) and the complexity of the FSA 2.0 and CFSA models, running both on a MacBook Pro with a 2.4 GHz 8-Core Intel Core i9 processor and 32 GB 2400 MHz DDR4 memory. Still, the R factor depends on the network's density values, the number of nodes (N) in the network, and the number of layers used in the multiplex network. The run time of the FSA 2.0 model to execute this experiment using a unimodular network (users-users layer) was R < 9000 s. The run time complexity of the CFSA model is O(N × z_max^l), where l denotes the order, or the number of layers, in the multiplex network and N refers to the total number of nodes. Comparison of R between the FSA 2.0 and CFSA models shows that the latter experienced a slightly higher run time (R < 16000 s). Nevertheless, the CFSA model accepts multiplex networks, and the quality of the resulting focal structure sets is better compared to FSA 2.0.
Fig. 3. FSA 2.0 versus CFSA model. A: Focal structure set using the FSA 2.0 model. B: Contextual focal structure set using the CFSA model.
4 Results and Findings
This study aims to present the benefits of the proposed CFSA model in enhancing the quality and the discovery of the focal structure sets in the South China Sea dataset from Twitter. The case study implemented in this research was related to the Cheng Ho dataset from Twitter. The CFSA model identified 38 CFS sets in the multiplex network. These sets differed in size, number of users, hashtags, and community structures. Furthermore, as a case study, this research examines one of the CFS sets to answer the question "What is going on between online users on Twitter?" with the most straightforward and smallest possible sets. For example, the CFS3 set, shown in Fig. 4, includes 32 users and 69 edges and is recognized as one of the most influential sets, mostly hosting influential US user accounts involved in the Indo-Pacific conflicts, among them USNavy, USEmbassyPH, USPacificFleet, US7thFleet, and many others. Also, we noticed that CFS3 includes other users like Mark3Ds, who shows interesting characteristics, with the highest average degree, a 0.8 CAP score, 2457 followers, and 155 friends. These users actively shared hashtags like #NavyPartnership, #China, #SouthChinaSea, #military, and #USA. Figure 4 displays the structure of the CFS3 set, where users (gray dots) and hashtags (blue dots) are drawn from the Users-Hashtags network layer. The advantage of implementing the CFSA model is that it reveals different desires among users in the identified CFS sets. In other words, the results present users' interests in similar or different contexts (hashtags) on Twitter. Moreover, to evaluate the influence of CFS sets, ground truth measures were utilized to measure the impact of each CFS set on the entire network. At this level, the model suspended each CFS set from the network and then recalculated the GT measures (described in Sect. 3) to record the rate of the changes in the structure of the network. For example, when the model suspended CFS3, this changed the structure of the network set (G-CFS3) to a completely sparse network, where this set maximized the network's
Fig. 4. CFS3 set is a contextual focal set.
GT Modularity (GTMOD) values from 0.7 to 0.957 and minimized the GT Network Stability (GTNS) of the network (maximized the number of communities) from 72 stable communities to 640 fragmented communities. Similarly, suspending the CFS3 set from the network (G-CFS3) minimized the average GT Clustering Coefficient (GTCC) values from 0.0054 to 0.0021. Furthermore, to evaluate the quality of the other identified CFS sets, the employed method suspends each CFS set and recalculates the changes in the modularity values, the number of communities, and the average clustering coefficient values, as shown in Fig. 5 (a sketch of this procedure follows below). Due to the space limit, we skip presenting other changes in the multiplex network values. Additionally, from the GT analysis, we identified the top five influential sets based on each of the GTMOD, GTNS, and GTCC values. The top five CFS sets had a greater impact on the Twitter network compared to the other sets, as shown in the following results.
• Top five CFS sets based on GTMOD: (CFS3, CFS2, CFS8, CFS9, CFS35).
• Top five CFS sets based on GTNS: (CFS3, CFS2, CFS24, CFS32, CFS36).
• Top five CFS sets based on GTCC: (CFS3, CFS2, CFS7, CFS33, CFS37).
To measure the Information Gain (IG) from each identified set, the IG method calculates the amount of information each CFS set can deliver to the overall analysis and quantifies the gain obtained by finding these sets. The process sets a target CFS_i iteratively, where the model measures the uniqueness of the information gained from each CFS set with respect to the target set CFS_i. For this process, the model changes the target set CFS_i at each iteration, where i = {1, 2, …, 5} refers to the top five selected CFS sets from each GT measure. Table 2 shows the IG values when the target was set on CFS3; CFS13, CFS14, and CFS17 provide the most information with respect to users and the shared content in each set. Table 2 also shows the other side of the solution, presenting samples of content shared between the users in CFS13, CFS14, and CFS17. Moreover, Table 2 shows the nature of the discussion between users and the information on the oil and gas field exploration and drilling operations in the South China Sea conducted by the navies of Japan, China, the USA, the UK, and the Philippines.
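The suspension-and-recalculation procedure can be sketched as follows. This is an illustration rather than the authors' code, and it assumes communities are recomputed with a standard modularity-based detector available in networkx (the paper cites community-structure work [15] for these measures):

```python
# Minimal sketch of the ground-truth validation step: suspend one CFS set,
# then recompute modularity, community count, and average clustering
# coefficient on the remaining graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def gt_measures_after_suspension(G, cfs_nodes):
    """Remove a CFS set's nodes and recompute the three GT measures."""
    H = G.copy()
    H.remove_nodes_from(cfs_nodes)                 # suspend the CFS set
    communities = greedy_modularity_communities(H)  # stand-in detector
    gt_mod = modularity(H, communities)             # GTMOD
    gt_ns = len(communities)                        # GTNS proxy: community count
    gt_cc = nx.average_clustering(H)                # GTCC
    return gt_mod, gt_ns, gt_cc
```

Running such a routine on the suspension of CFS3 would yield the kind of before/after comparison reported above (GTMOD 0.7 → 0.957, 72 → 640 communities, GTCC 0.0054 → 0.0021).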
Fig. 5. Changes in the network after suspending CFS sets and information gain values of CFS3.
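The paper does not publish its exact IG formulation; as a generic illustration, the sketch below shows the standard entropy-based information gain over a partition of items (e.g., hashtags or users shared by CFS sets), which is one plausible reading of how the uniqueness of a target set's content could be quantified:

```python
# Generic Shannon-entropy information gain over a partition of items.
# An assumed formulation, not the authors' published definition; assumes
# non-empty parts.
from collections import Counter
from math import log2

def entropy(items):
    n = len(items)
    return -sum((c / n) * log2(c / n) for c in Counter(items).values())

def information_gain(all_items, partition):
    """IG = H(all items) - weighted sum of H(each part)."""
    n = len(all_items)
    cond = sum(len(part) / n * entropy(part) for part in partition)
    return entropy(all_items) - cond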
4.1 Ranking Correlation Coefficient (RCC) Values
Three experiments were conducted to discover the correlated outcomes based on IG, GT, and IG versus GT values.
Experiment 1: This experiment assessed the RCC values among the 38 CFS sets. The evaluation was based on three key parameters: IG_MOD, IG_C.C, and IG_NS. The results are depicted in Fig. 6. Notably, there exists a correlation between the values of IG_MOD and those obtained from IG_NS, with a computed RCC value of 0.0384.
Experiment 2: This experiment assessed the RCC values for the top five sets. The results in Fig. 6 indicate an RCC value of 0.0983, showing the correlation between GT_C.C and GT_MOD for the top five CFS sets.
Experiment 3: This experiment evaluated the RCC values for the top five CFS sets using the IG values from Experiment 2 in comparison to the GT values from Experiment 1. The obtained results illustrate the correlated relationships between the IG values and the corresponding GT values, as depicted in Fig. 6. The most strongly correlated values are shown in blue in the figure.
In summary, the model identified the top five influential CFS sets based on three different experiments; for example, CFS3 showed an interesting structure, as this set impacted the multiplex network. Moreover, this effort explored using multiplex networks and focal structure analysis characteristics to detect the CFS sets from the South China conflict dataset collected from Twitter. In a methodological practice, this study extended the structure of the information matrix to reduce the complexity of the added information and to help interpret the contextual actions of the users and communities on Twitter.
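The RCC computation itself reduces to a rank correlation over per-set scores. A minimal sketch follows, assuming a Spearman rank correlation (the paper does not name the specific coefficient) applied to Experiment 1's IG_MOD versus IG_NS comparison; the score values here are placeholders:

```python
# Sketch of an RCC evaluation as a Spearman rank correlation between two
# per-CFS-set score vectors. The numbers below are illustrative only.
from scipy.stats import spearmanr

ig_mod = [0.42, 0.31, 0.55, 0.12, 0.47]   # one IG_MOD value per CFS set
ig_ns  = [0.38, 0.29, 0.21, 0.15, 0.52]   # one IG_NS value per CFS set

rho, p_value = spearmanr(ig_mod, ig_ns)
# Sets with 0 < rho < 0.3 are treated as semi-correlated in the paper.
print(f"RCC (Spearman rho) = {rho:.4f}, p = {p_value:.3f}")
```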
Table 2. Annotated users' content in CFS13, CFS14, and CFS17 sets.
UserID | Content
1652541 | "JUST IN: China's foreign ministry says Chinese navy warned U.S. ships sailing in South China Sea to leave; urges U.S. to stop such provocative acts. More here: https://t.co/XIe7Iw9NRi https://t.co/e3VmMgRXwB"
102722975 | "Via @USPacificFleet: #USNavy, @IndianNavy, Japan Maritime Self-Defense Force, Philippine Navy sail together, conduct drills in South China Sea: https://t.co/v1orQjXU1i #NavyPartnerships @US7thFleet https://t.co/NYoIfnS0p7"
1953304932 | "#SouthChinaSea geopolitical tensions. The @USNavy has stated that #USSNimitz and #USSRonaldReagan Carrier Strike Groups continue integrated exercises and operations while deployed to the South China Sea @US7thFleet #IndoPacific #USA https://t.co/FT9CuADZ1X"
3837500787 | "U.S. holds naval exercises with allies in Asia amid China tension https://t.co/d8ZhdLveeW #IndoPacific #US #PRC #CCP #Tension #SCS #Allies"
4061566936 | "U.S., Britain conduct first joint drills in contested South China Sea - Reuters https://t.co/ZEwpSkYtGT\n\n Explore China's expanding claims and tactics in the #SouthChinaSea in our podcast, featuring @bill_hayton. https://t.co/IGxbHEu49D"
2313918458 | "The UK should consider a permanent facility in the Indo-Pacific region in order to \'deter foreign powers from undertaking aggression\' in the South China Sea, according to a new report. | @hthjones for @UKDefJournal https://t.co/S76ys4EFc3"
3837500787 | "Executives of state-owned enterprises, officials of the Chinese Communist Party and military, along with oil giant CNOOC will face new restrictions for allegedly using coercion against states with rival #SouthChinaSea claims. #PRC #Sanctions #InvestmentBan https://t.co/nt7uEo1Ixn"
Fig. 6. Ranking Correlation Coefficient (RCC) values.
4.2 Theoretical Implications
This study has several theoretical implications. Initially, it explores the use of complex networks and focal structure analysis in detecting CFS sets on social networks. Secondly, it expands on the FSA 2.0 design by incorporating additional information about users' interests, tweets, and hashtags on Twitter. Also,
this research provides a more comprehensive understanding of the dissemination of content by influential focal structure sets of users on social media. Finally, the study sheds light on the positive and negative aspects of the contextual activities of focal groups on social networks.

4.3 Practical Implications
Focal structure analysis helps to curb conspiracy theories on social networks: instead of randomly selecting users or focusing on the most central users, this analysis identifies sets of accounts with significant structural influence in facilitating information flow. Also, this study establishes a positive correlation between multiplex networks, focal structure analysis models, and the disclosure of contextual activities and information dissemination by coordinating groups on social networks. Lastly, this study validates the efficacy of the CFSA model in distinguishing contextual activities beyond focal structures on social networks.
5 Conclusion
This research combines the multiplex networks approach and a bi-level decomposition optimization problem to reveal the interests of focal structure sets that are responsible for information dissemination in social networks. The novel approach reveals an interconnection layer consisting of the union of edges between two layers (the User-User layer and the Hashtag-Hashtag layer). This union generates an interconnection users-hashtags network that includes the edges of the communications between online users on the Twitter network. The study reveals that influential sets of users are connected to popular trending contexts. The contents of the utilized datasets were related to breaking events in the Indo-Pacific region between January 2019 and October 2022. The research findings suggest that the outcomes from the proposed CFSA model demonstrate the partisan behaviors exhibited by coordinating groups on social networks. This supports the hypothesis that the dissemination of information by individual influential users is limited, and that it requires the coordination of influential sets of users to spread popular contexts or restrict their spread on social networks. Future work is dedicated to applying the CFSA model to dynamic social networks and cross-platform scenarios to explore contextual focal structure sets across multiple social media platforms, including Facebook, Twitter, Instagram, and YouTube.

Acknowledgment. This research is funded in part by the U.S. National Science Foundation (OIA-1946391, OIA-1920920, IIS-1636933, ACI-1429160, and IIS-1110868), U.S. Office of the Under Secretary of Defense for Research and Engineering (FA9550-22-1-0332), U.S. Office of Naval Research (N00014-10-1-0091, N00014-14-1-0489, N00014-15-P-1187, N00014-16-1-2016, N00014-16-1-2412, N00014-17-1-2675, N00014-17-1-2605, N68335-19-C-0359, N00014-19-1-2336, N68335-20-C-0540, N00014-21-1-2121, N00014-21-1-2765, N00014-22-1-2318), U.S. Air Force Research Laboratory, U.S. Army Research Office (W911NF-20-1-0262, W911NF-16-1-0189, W911NF-23-1-0011), U.S. Defense Advanced Research Projects Agency (W31P4Q-17-C-0059), Arkansas Research Alliance, the Jerry L. Maulden/Entergy Endowment at the University of Arkansas at Little Rock, and the Australian Department of Defense Strategic Policy Grants
Program (SPGP) (award number: 2020-106-094). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding organizations. The researchers gratefully acknowledge the support.
References
1. Global Affairs Canada: Canada's Indo-Pacific Strategy (2022). https://www.international.gc.ca/transparency-transparence/indo-pacific-indo-pacifique/index.aspx?lang=eng. Accessed 24 Mar 2023
2. House, T.W.: Indo-Pacific Strategy of the United States (2022). https://www.whitehouse.gov/wp-content/uploads/2022/02/U.S.-Indo-Pacific-Strategy.pdf. Accessed Mar 2023
3. Power Shifts in the Indo-Pacific | Foreign Policy White Paper. https://www.dfat.gov.au/sites/default/files/minisite/static/4ca0813c-585e-4fe1-86eb-de665e65001a/fpwhitepaper/foreign-policy-white-paper/chapter-two-contested-world/power-shifts-indo-pacific.html. Accessed Mar 2023
4. China's Rise and the Implications for the Indo-Pacific | ORF. https://www.orfonline.org/expert-speak/chinas-rise-and-the-implications-for-the-indo-pacific/. Accessed Mar 2023
5. Shajari, S., Agarwal, N., Alassad, M.: Commenter behavior characterization on YouTube channels. arXiv preprint https://arxiv.org/abs/2304.07681v1 (2023)
6. Alassad, M., Agarwal, N.: Uncovering the dynamic interplay of YouTube co-commenter connections through contextual focal structure analysis. In: eKNOW 2023, The Fifteenth International Conference on Information, Process, and Knowledge Management, pp. 65–70 (2023)
7. China's 'Columbus' Was an Imperialist Too. Contesting the Myth of Zheng He | Small Wars Journal (2023). https://smallwarsjournal.com/jrnl/art/chinas-columbus-was-imperialist-too-contesting-myth-zheng-he. Accessed Mar 2023
8. Şen, F., Wigand, R., Agarwal, N., Tokdemir, S., Kasprzyk, R.: Focal structures analysis: identifying influential sets of individuals in a social network. Soc. Netw. Anal. Min. 6(1), 1–22 (2016)
9. Alassad, M., Agarwal, N., Hussain, M.N.: Examining intensive groups in YouTube commenter networks. In: Thomson, R., Bisgin, H., Dancy, C., Hyder, A. (eds.) SBP-BRiMS 2019. LNCS, vol. 11549, pp. 224–233. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-21741-9_23
10. Alassad, M., Hussain, M.N., Agarwal, N.: Developing graph theoretic techniques to identify amplification and coordination activities of influential sets of users. In: Thomson, R., Bisgin, H., Dancy, C., Hyder, A., Hussain, M. (eds.) SBP-BRiMS 2020. LNCS, vol. 12268, pp. 192–201. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61255-9_19
11. Alassad, M., Hussain, M.N., Agarwal, N.: Finding fake news key spreaders in complex social networks by using bi-level decomposition optimization method. Commun. Comput. Inf. Sci. 1079, 41–54 (2019)
12. Alassad, M., Spann, B., Agarwal, N.: Combining advanced computational social science and graph theoretic techniques to reveal adversarial information operations. Inf. Process. Manag. 58(1), 102385 (2021)
13. Alassad, M., Agarwal, N.: A systematic approach for contextualizing focal structure analysis in social networks. In: Thomson, R., Dancy, C., Pyke, A. (eds.) Social, Cultural, and Behavioral Modeling, pp. 46–56. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-17114-7_5
14. Chugh, R., Ruhi, U.: Social media for tertiary education. In: Tatnall, A. (ed.) Encyclopedia of Education and Information Technologies, pp. 1–6. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-60013-0_202-1
15. Girvan, M., Newman, M.E.J.: Community structure in social and biological networks. Proc. Natl. Acad. Sci. 99(12), 7821–7826 (2002)
Not My Fault: Studying the Necessity of the User Classification & Employment of Fine-Level User-Based Moderation Interventions in Social Networks

Sara Nasirian1,2(B), Gianluca Nogara1, and Silvia Giordano1

1 University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland
[email protected], {gianluca.nogara, silvia.giordano}@supsi.ch
2 Department of Information Engineering and Mathematics, University of Siena, 53100 Siena, Italy
Abstract. Various users with diverse spiritual moods and morals react differently to online content. Highly toxic submissions may have no noticeable impact on some users, while even mildly toxic content may provoke others to stop their interactive participation in social media. The same is true of the interventions of platform holders, in the sense that one punishing moderation may lead a user to respect the community rules to a certain point, while the same action may stimulate a violent and offensive reaction from another user. In that regard, the moderation interventions of the platforms should follow a fine-level nature and be based on user behavior. They should also try to protect the communities the user cares about the most, as far as the user, in turn, respects the content policies. The aim of the current study is to classify users into various behavioral groups, which can potentially provide the chance to adopt more efficient moderative measures to protect the community and give the user the feeling that he really deserves the moderative intervention that he has experienced. Thus, the behavior of the core users of an already-banned controversial subreddit was taken into consideration, and a machine learning-based classification strategy was applied to their activity level and their submission toxicity scores. Results have revealed interesting behavioral differences between users of the Reddit social media toward the adopted moderations and have indicated the necessity of adopting fine-level measures for protecting the platform, as well as the different behavioral groups.

Keywords: Human-centered computing · Social networks · User classification · Behavioral differences · Moderation interventions · Toxicity
1 Introduction
Effective interventions and various content and user moderation policies should be adopted by social networks to protect their users against hate speech, harassment, cyberbullying, misinformation, and other antisocial behaviors. Some of the already-adopted
moderating interventions have some noticeable effects; however, as is clear, there is still a long way to go to reach healthy online social communities. Obviously, the current moderation interventions of various social networks, such as quarantine, restriction, ban, suspension, and so on, get implemented only when the misbehaving users have already tortured other users, and there is no "back-key" for protecting the already tormented users. In this case, minorities and other vulnerable populations can be considered illustrative examples of online harassment victims, targeted only because they express opinions that harassers disagree with. This can increase their anxiety about their safety and make them more fearful of crime. Consequently, it can lead to a decrease in the likelihood of expressing themselves publicly or even to a systematic de-mobilization of victimized populations [1]. The other serious problem is that, mostly, instead of the guiltiest users, the whole community gets punished by a coarse-level community-based intervention instead of a fine-level user-based alternative. These rough decisions have significant side effects on both users and the platform. The effectiveness and the side effects of these coarse-level online content-moderating interventions, such as the ban, are in doubt. For instance, it has been shown that banning a community prompts highly active users to leave the platform [2]. Evidently, losing users, especially the most active ones, is the last thing that platform owners may wish for, since it negatively affects the platform's profit. Furthermore, imposing coarse-level community-based moderative actions on all the users of a community may lead them to feel that they did not deserve the moderative action they have received, or that the unfair punishment they have received is just due to the inappropriate behavior of other community members and not themselves. Thus, this may even incite them to respond in unusual negative ways, driving the platform away from the goals it really heads to. All this highlights the necessity of urgently devising and adopting fine-level user-based interventions based on the user category. In this work, in addition to all the qualitative arguments against the current mitigation actions' validity, we provide some quantitative results that reinforce the argument that the differences in the behavioral trends of the users urge the employment of fine-level user-based moderations.

In this study, Reddit, one of the most famous news aggregation, content rating, and discussion social media platforms, is selected as the study platform. The reasons behind this selection are numerous. First, Reddit is a platform designed for anonymity. Anonymity and non-identifiability might allow users to easily shrug off responsibility for their online communication [3], which in turn might provide this study with abundant instances of toxic behaviors. Second, Reddit prioritizes users' feedback when it comes to what it shows first, using its unique upvote and downvote system. Moreover, using the same system, the organic reach of user-generated content is not limited: it sifts through a myriad of posts, showing the upvoted content to many users. Third, the content-driven social credibility of Reddit, known as the Karma system, is another unique feature of Reddit that distinguishes it from other platforms, which usually put a premium on followers, engagement rates, and network size to calculate the social credibility of the users.
On Reddit, Karma is calculated by subtracting the total downvotes a user has received on their published posts and comments from the total upvotes. Fourth, Reddit has strong communities for almost every type of interest group, called subreddits, in which users have the chance to submit and discuss content about the community's topics and interests. Therefore, it is easy to find subreddits concentrated on topics that can fit our research the
best. Furthermore, the majority of subreddits are public, which paves the way to accessing their content. Fifth, Reddit has various moderating options: a subreddit may first get quarantined in case of violating the guidelines and then banned only in case of persistently unacceptable behavior from users. These interventions seem to be not as efficient as they should be [4]. In the following sections, we are going to show that the current coarse-level interventions, employed by various platforms and in particular by Reddit, should be substituted with fine-level user-based (user-class-based) interventions. Considering the limitations of platforms in employing a large number of moderating measures, users with similar attitudes and behavior should be classified into the same user classes, so that a diverse but limited number of user-based fine-level moderating interventions can be designed and appropriately imposed on each user, based on his own behavior and not the behavior of other users in a mutual community. Moreover, concerning the increasing online violence and hostility toward women [5], having in mind that violent rhetoric and misogyny are co-occurring [6], and being aware of the immense need for research addressing increasing misogynist behaviors [7], the current work is dedicated to investigating the effectiveness of coarse moderative interventions such as quarantine and ban imposed on misogynistic online communities. In this regard, r/Incels, a banned subreddit widely known for sexually explicit, violent, racist, and misogynistic content, is selected as the community under investigation. Therefore, the current study is dedicated to finding an efficient user classification method, which can enable the provision of fine user- and behavior-based moderation interventions in various social media, in particular Reddit. We strongly believe that the adoption of any rewarding or punishing moderation by the platforms should be done merely based on the user's own behavior and not the community in which he participates. The rest of the paper is organized as follows: In Sect. 2, some user classification methods already used in various social networks are summarized. Afterward, in Sect. 3, a new user classification methodology is proposed, and in Sect. 4, some numerical results are discussed. Finally, Sect. 5 concludes the paper.
2 Background
An efficient user classification is an essential tool for responding to the platform's goal of improving user experience through devising and imposing efficient and adequate fine-level user-based interventions and moderating acts. Therefore, the accuracy and validity of a user classification tool can directly impact the moderating interventions that a user may undergo and, consequently, their general experience and satisfaction level. This can even encourage users to increase their activity on a platform or, in the critical case, persuade a user to leave a platform or migrate to another one. Recently, machine learning has been widely used in the classification of social network users since it helps to efficiently categorize the data of the users into different classes. Machine learning techniques also provide the possibility to detect peculiar patterns and anomalies in the data and therefore lead to improving the classification of social network users. Below, some of the machine learning-based user classification methods are gathered. Although some of these methods were proposed with goals different from ours, studying their methodologies can be helpful in devising an efficient user classification scheme.
In a research study dedicated to Twitter, features derived from profiles, tweeting behavior, linguistic content of messages, and social network information are considered useful and informative features [8]. The set of extracted features is then used within a supervised machine-learning framework for user classification, with Gradient Boosted Decision Trees as the learning algorithm. The importance of employing age group classes as an influential parameter in sentiment intensity metric analysis and user classification studies is manifested in [9]. In the mentioned study, the authors used a DCNN as the machine learning algorithm for achieving an age group classification for social networks in general and Twitter in particular, where the exact age information is not provided by users. To do so, an extensive number of sentences were analyzed to divide the users into two groups, namely teenagers and adults, based on their writing style, users' history, and profile. The results have shown that mere consideration of the age groups, instead of the real age, can lead to almost the same sentiment metrics. Therefore, this highlights the fact that the parameters used in this research can provide highly accurate outcomes for determining the age groups of Twitter users. Tavares et al. [10] have proposed a user account classification technique that provides the possibility to recognize the behavioral patterns of humans, bots, and cyborgs. Posting-frequency-dependent statistical parameters, such as the average interval between posts in seconds, average posts per day, average posts per week, histogram of posts per hour, and histogram of posts per day of the week, are used to distinguish the user type in this study. A Twitter dataset and a set of learning-based classification algorithms have been employed. An acceptable accuracy has been achieved only through the RF and XGBoost classifiers, which promises improved accuracy in case of being used in combination with traditional methods. Tinati et al. [11] have tried to find the key players of a conversation on Twitter using a social network graph based on retweet behavior. In that way, they have applied their model, based on Edelman's topology of influence, for creating a network where the user interactions define their status and influence. The new scheme used by the authors, named topology of influence, is a model to identify different communicator roles on Twitter. In this research, users can be classified by examining the number of retweets a user has obtained, which can be considered a metric of influence. In this way, the users can be classified based on their sharing, amplifying, and curating roles. Since online users do not follow the same behavioral trend, following a coarse policy to react to their various online behaviors cannot be considered a good idea. Hence, several user clustering techniques, enabling finer moderating policies on various platforms, are reviewed in this section. In the following section, a new user classification scheme is presented, which can enable the adoption of fine-level user-class-based moderation interventions in various social media, in particular Reddit.
3 Methodology
In the previous sections, it was discussed that one-size-fits-all approaches, like coarse community-based moderations, cannot be considered the most rewarding type of interventions adopted by platforms. It was also discussed that, considering the vast variety of
user behaviors, fine-level user-based schemes can better meet the goal of improving the healthiness of online social platforms. On the other hand, since it is not possible to design separate moderative actions for each user, classifying users into clusters based on the similarity of their behavior can provide the chance to design suitable moderating actions for each user class. Therefore, in this paper, we try to classify the users based on their most significant and critical behavioral qualities. Besides, we investigate the impact of the coarse community-based intervention in a highly controversial Reddit community and some other communities. Moreover, as already discussed, Reddit is selected as our study platform, mostly due to its strong community-based nature and the possibility of studying the effect of its currently adopted coarse community-based moderation interventions. Furthermore, the multitude of controversial Reddit communities with rather high toxicity-related scores has a strong appeal to computational social science researchers. Among all the highly toxic communities of Reddit, a handful of incel-based ones can be found that are either already banned, have undergone various limitations, or are about to be banned. Many facts distinguish incel-based communities from other highly toxic ones, but one of the most extreme instances of these differences is that Inceldom can even motivate deadly attacks in the real, non-virtual world, as happened during the 2014 Isla Vista attack and the 2018 Toronto van attack, resulting in the deaths of six and ten people, respectively [12]. Generally, incels, or involuntary celibates, are individuals who are suffering from a long-term lack of romantic success. The presented definition of an incel is as follows: an individual who is at least 21 years old and has unintentionally experienced at least six months without a romantic partner [13].

3.1 Data Collection
With the aim of studying the behavior of various users facing different events, such as the intervention of the platform holders, the historical data of the selected subreddit, namely r/Incels, was collected using the Pushshift tool, a service that provides historical Reddit data [14]. In line with Reddit's policy against the incitement of violence, the subreddit was banned on November 7, 2017. At the time of its banning, r/Incels had around 40,000 subscribers. A few days before the ban, on October 27, 2017, the subreddit was quarantined. With the aim of considering the most influential users of the mentioned subreddit, an observation population, named core users, was selected and monitored. For selecting the observed users, engagement is considered the most important factor in defining the concept of core users in this work. In that way, all the users who have published at least once every four weeks (almost once a month) are considered core users only if they have also published more content (whether posts or comments) than the average number of submissions made per user (a sketch of this selection rule follows below). To find the core users, all the content published on r/Incels from 210 days (30 weeks) before the quarantine date, March 31, 2017, until the quarantine date, October 27, 2017, was collected, and the users who satisfied the above-mentioned requirements were added to the list of core users. Following this approach, a list of 99 core users was put together.
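A minimal sketch (not the authors' code) of the core-user selection rule described above, assuming the Pushshift records have been loaded into a pandas DataFrame with author and created_utc (Unix timestamp) columns:

```python
# Sketch of core-user selection: a user is a core user if (a) they posted in
# every 4-week window of the pre-quarantine period and (b) their total
# submission count exceeds the per-user average.
import pandas as pd

def select_core_users(df, start, end):
    df = df[(df["created_utc"] >= start) & (df["created_utc"] < end)].copy()
    window = 28 * 24 * 3600                      # 4-week (28-day) window in seconds
    df["win"] = (df["created_utc"] - start) // window
    n_windows = -(-(end - start) // window)      # ceiling division: window count

    counts = df.groupby("author").size()         # submissions per user
    avg = counts.mean()                          # average submissions per user
    active_all = df.groupby("author")["win"].nunique()  # windows with activity

    core = counts[(counts > avg) & (active_all == n_windows)].index
    return sorted(core)
```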
To examine the behavior of the core users in a comprehensive manner, it is important to study their behavior all around Reddit. Therefore, the full list of their
activities in all the subreddits, and not only inside the subreddit of interest, containing various submission types, namely posts and comments, was retrieved using the Pushshift tool. The retrieved records contain the publication date, the publication subreddit, the published text, etc. To have an observation period long enough to make a comprehensive study of user behaviors possible, the behavior of the users was studied in a period spanning 210 days (30 weeks) before and 210 days after the r/Incels ban date. As analyzed in some previous works, the choice of a ±210-day study period around the ban date makes it possible to divide a given time window around an intervention into groups of ten or seven days [15, 16]. This can facilitate the analysis of the time series. Therefore, the data was collected inclusively from April 11, 2017 (210 days before the ban date) to June 5, 2018 (210 days after the ban date). Since the quarantine date and the ban date are close enough, the observation period remains untouched. The data collection phase resulted in a total of 548,786 comments and 30,460 posts, among which 135,573 submissions (posts or comments) were made by core users, of which 124,570 submissions remained available after the data cleansing phase. The remaining records were passed to Perspective API (https://www.perspectiveapi.com), a widely used public service that computes multiple toxicity-related scores for submissions, developed and released by the Jigsaw unit at Google [17]. As the output of the unhealthy score calculation, the scores for insult, profanity, toxicity, likely to reject, threat, sexually explicit, identity attack, and severe toxicity were computed, and we selected the toxicity score for the following phases, as most of the features present a similar variational trend.

3.2 User Classification
Since each user makes multiple submissions per day, with the aim of keeping a per-day record of the user behavior and removing probable outliers, the daily median is calculated for each unhealthy toxicity-related score. Furthermore, with the aim of smoothing the data and removing the misleading high toxicity values resulting from short-term online misbehaviors, a 7-day (weekly) moving median is calculated for each score (see the sketch below). We have noticed an obvious similarity between the variational trends of the unhealthy scores in the behavioral trend of user α. Similarly, the other core users show the same similarity between their toxicity-related scores, as if all the scores were quasi-duplicates of the toxicity score. Therefore, considering the crucial role of feature selection in machine learning algorithms, some of these scores were not considered, and only the attained toxicity scores are used in the following steps of user clustering, forming what we call the refined toxicity data. This feature selection was done keeping in mind the important fact that extra feature complexity can result in a very high chance of multi-collinearity, or a high correlation between two or more features. This, in turn, can negatively affect the training data and can lead to overfitting or under-fitting of the data.
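A minimal sketch of the scoring-and-smoothing pipeline described above; the request format follows Perspective API's public comments:analyze endpoint, and the smoothing applies a daily median followed by a 7-day moving median. API_KEY is a placeholder:

```python
# Sketch (not the authors' pipeline) of scoring a submission with Perspective
# API and smoothing a user's per-submission toxicity scores.
import requests
import pandas as pd

URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity(text, api_key):
    body = {"comment": {"text": text}, "requestedAttributes": {"TOXICITY": {}}}
    resp = requests.post(URL, params={"key": api_key}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def smooth_user_scores(scores):
    """scores: pandas Series of per-submission toxicity, datetime-indexed."""
    daily = scores.resample("D").median()            # daily median removes outliers
    return daily.rolling(7, min_periods=1).median()  # 7-day moving median
```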
Although the unsupervised feature selection for the k-means clustering problem could have been based on approaches proposed in the state of the art, it was done in a simple manual way due to the clear similarity in the variational trends of the various extracted features, such as insult, profanity, toxicity, etc.
In the next step, the achieved median daily submission value vectors of the core users are passed to the k-means clustering process as inputs, yielding as output user clusters that accommodate users with similar posting trends. In this case, k-means clustering, which is a method of vector quantization, partitions the n core users into k clusters. In this bottom-up approach, each user is assigned to the cluster with the nearest mean (cluster center or cluster centroid). To choose the right number of clusters, the well-known elbow method is employed, in which, for each value of k, the Within-Cluster Sum-of-Squares (WCSS) is calculated. Then, by plotting the WCSS values versus the respective number of clusters, the sharp point of bend, looking like an elbow joint, can easily be found. The respective value of k is then taken as the number of clusters in the intended user clustering scheme. Accordingly, the k value is set equal to 3. Then, the refined toxicity data is passed to the k-means clustering process for organizing users into 3 clusters based on their previously achieved toxicity score vectors. At this point, two cluster identifiers have been attributed to each core user. The first cluster identifier, denoted as ID_S, is assigned to the user based on their daily submission vector, while the other cluster identifier, denoted as ID_T, is assigned based on their refined daily toxicity vector. These two pieces of data are then merged, forming the input in the form of (ID_S, ID_T) required for the second stage of the clustering, in which users are clustered based on both their submission trend and their toxicity. In this step, the users are finally divided into 2 clusters, which is the cluster count achieved by passing the merged cluster data to the k-means elbow finder (a sketch of this two-step procedure follows below). This two-step clustering method is useful to reveal clusters within the dataset of the study that would otherwise not be apparent. The employed two-step clustering gives better control over the handling of categorical and continuous variables. It also provides better scalability, thanks to summarizing the records and facilitating the analysis of large data files. It is important to note that in this study, only the core users are considered as the investigated population, and therefore a lower number of clusters is obtained in comparison with the real case, in which the online platform holders should classify the whole population of users to enable fine class- and behavior-based moderating interventions. Moreover, in the case of other subreddits, or even other social communities, the achieved cluster count may be different. Anyway, an efficient clustering scheme with a large enough cluster count is pivotal for enabling user- and behavior-based moderations on any platform. In Sect. 4, it is possible to figure out how differently the users of these clusters have reacted to the coarse community-based moderating interventions and what necessitates the adoption of fine behavior-based and user-based interventions.
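A minimal sketch (not the authors' code) of the two-step procedure, using scikit-learn's KMeans and its inertia_ attribute as the WCSS for the elbow curve:

```python
# Sketch of the two-step clustering: (1) cluster users by daily-submission
# vectors and by refined toxicity vectors separately (k chosen by the elbow
# on WCSS, k = 3 in the paper); (2) cluster the (ID_S, ID_T) label pairs
# (k = 2 in the paper).
import numpy as np
from sklearn.cluster import KMeans

def elbow_wcss(X, k_max=10):
    """Return WCSS (inertia) for k = 1..k_max; the bend picks k."""
    return [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, k_max + 1)]

def two_step_clustering(submission_vecs, toxicity_vecs, k1=3, k2=2):
    id_s = KMeans(n_clusters=k1, n_init=10, random_state=0).fit_predict(submission_vecs)
    id_t = KMeans(n_clusters=k1, n_init=10, random_state=0).fit_predict(toxicity_vecs)
    merged = np.column_stack([id_s, id_t]).astype(float)  # (ID_S, ID_T) pairs
    return KMeans(n_clusters=k2, n_init=10, random_state=0).fit_predict(merged)
```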
4 Results
As discussed, the core users are classified into two clusters, namely Red and Blue, based on their toxicity scores and their activity level across the whole of Reddit, in a way that the Blue cluster contains 33.33% (33 users) and the Red cluster contains 66.67% (66 users) of the 99 core users. Figure 1 demonstrates the average number of submissions in each cluster, both considering and neglecting the submissions made inside r/Incels. The Blue cluster users have
shown less activity, both inside and outside of r/Incels, while they were always engaged and active during the whole period of the study. This means that, having a different activity pattern than the Red cluster members, Blue cluster members prefer to follow the content created by other users in a more silent way and avoid creating much content via submitting new posts or commenting. Moreover, as is clear in Fig. 2, the Blue cluster users manifest toxicity scores 24.93% lower than the Red cluster users outside of r/Incels. Therefore, the Blue cluster can be considered the cluster that shows better behavior in other subreddits, where the atmosphere is not as toxic as in r/Incels. This means that their toxicity may not be provoked as easily as that of the other cluster, and they publish more acceptable content. As expected, in Fig. 3, the ban slightly decreased the number of submissions outside the subreddit under study, by 12.9% and 11.12% for users of the Red and the Blue cluster, respectively, and it only slightly decreased the toxicity (6.80%) in the more aggressive cluster, namely the Red cluster, as shown in Fig. 4. This negligible decrease in the toxicity level of the Red cluster in such a phase may be due to feeling guilty for leading their subreddit of interest to the ban, or to feeling the risk of the permanent suspension of their accounts for violating the content policy. Unexpectedly, the Blue cluster, despite a more peaceful nature, increased its amount of toxicity by 32.50% after the ban in comparison to the before-ban (BB) observation period. This puts in evidence the fact that the ban did not result in any significant toxicity decrease in either of the groups and just resulted in a decrease in the activity level of both clusters. The reduction in the number of submissions, and consequently the reduction in the engagement and participation in the platform, can be considered an undesirable side effect of the r/Incels ban. Moreover, the subreddit ban deteriorated the toxicity status of about one-third of the core users' population by a significant percentage. It may have incited anger among the Blue core users, who were not as culpable as the Red core users in the ban of the subreddit and were probably not feeling as guilty as the Red core users in assisting in what led to the ban. Therefore, they started to show their anger in a toxic way in other subreddits, and that is probably the reason why their toxicity level increased outside of the subreddit of the study after the ban. The other cluster of core users, namely the Red core users, being and feeling more guilty for leading the subreddit to the ban, may have decided to better mind their online behavior, seeing the consequence of their toxic submissions and probably feeling the risk of getting their accounts permanently suspended for violating the platform content policy. Anyway, the deterioration in the behavior of the Blue core users significantly outweighs the trivial and probably short-lasting improvement in the behavior of the other cluster of core users. At this point, with the aim of understanding where this increased level of toxicity emerged and studying how the ban of r/Incels affected other Reddit communities, we investigated the behavior of all the core users, regardless of the clusters to which they belong (not just the Blue cluster, which showed an increased level of toxicity), outside of the subreddit of interest. The result of this part of the study can also verify whether the employment of such coarse-level community-based moderation interventions can achieve the goals for which they were devised and employed.
Fig. 1. Average number of posts and comments in each cluster during the whole observation period.
Fig. 3. Average submission number alteration in both Clusters in various phases considering/neglecting the subreddit data.
Fig. 2. Average of the toxicity scores in each cluster during the whole observation period.
Fig. 4. Toxicity scores average alteration in both Clusters in various phases considering/neglecting the subreddit data.
Figure 5 shows the top 8 subreddits in which r/Incels core users made the highest number of contributions. As is clear, the subreddits with the highest amounts of contributions from the r/Incels core users are mostly incel-based subreddits, where the very same misogynistic and heteropatriarchal subculture is present [18]. Among the top 8 subreddits, taking a closer look at the incel-analogous subreddits is necessary. To have a better idea of how big the toxicity and submission numbers are in these subreddits, we normalized the mentioned metrics based on the values achieved from r/Incels. As is clear in Fig. 6 and Fig. 7, r/Braincels emerged as the most welcoming subreddit for the core users of r/Incels. r/Braincels was the most popular subreddit for involuntary celibates after the ban of r/Incels; established in October 2017, shortly before the complete ban of the r/Incels subreddit, it gained 16,900 followers by April 2018 and was banned in October 2019 for violating Reddit's content policy against bullying and harassment, after tolerating long-lasting restrictions. It even surpassed the maximum daily comment number of r/Incels just five months after its creation. Therefore, it seems that a great deal of the misbehavior for which r/Incels was banned was transferred completely untouched to r/Braincels. This substantial migration of authors from one incel-based subreddit to the other is also confirmed in [12], where, despite the employment of temporary, anonymous, and disposable accounts by many users to produce radical content, a remarkable percentage of common accounts was still detected.
Fig. 5. Top subreddits with the highest amount of contribution from the r/Incels core users.
Fig. 6. Percentage of the submission number in each r/Incels-analogous subreddit relative to r/Incels.
Fig. 7. Percentage of the toxicity score in each r/Incels-analogous subreddit relative to r/Incels.
After the ban of r/Braincels, r/ShortCels, which had already been kept as a backup refuge for incels, with a maximum of 384 comments per day, took over the role of main refuge for incels and even surpassed the last recorded comment count of r/Braincels in February 2020, five months after the r/Braincels ban. Intuitively, one may conclude that the ban is probably not the most efficient tool to impose on all the subreddit members, since a five-month period is enough for the "subreddit recovery," this time, however, under a new name [12]. This instance illustrates to some extent that an outright ban can mostly be considered a short-term band-aid that affects the entire community regardless of their behaviors and attitudes. Even softer interventions, such as quarantining, have been shown not to be very effective [19]. Generally, current one-size-fits-all moderating options have proven unable to answer the emerging requirements of social networks. Established socio-behavioral theories have also deeply challenged interventions that treat all the users in the same way [20]. Considering personality-dependent blemishes such as misogyny and racism, non-user-based alternatives cannot be considered efficient solutions, which consequently results in the persistence of the real problem. Therefore, a possible solution
could be using fine-level user-based and personalized alternatives instead of the coarse-level platform-centered approaches currently in use. By considering the unusual habits and personal characteristics of users, user-based moderation gives the platform holder the possibility to react in a refined, nuanced, and effective manner instead of punishing a whole community for the misbehavior of a limited number of users and sacrificing the popularity of their platforms.
5 Conclusions
The persistence of unhealthy behaviors in online environments witnesses the inadequacy of the moderating interventions employed by platform holders. By magnifying the user behavioral differences in one of the most controversial communities of the Reddit social media, the appropriateness of coarse-level community-based one-size-fits-all moderations is seriously doubted. One possible solution is the adoption of more fine-level and user-based moderative measures by platform holders. Keeping this aim in mind, and considering the fact that the devised measures cannot be as numerous as the total number of users, an efficient user clustering is required. This can be done by adopting tools that can act as touchstones for evaluating the online toxicity of the users and also by calculating their activity level. Clustering based on these criteria makes the adoption of fine-level group-based moderating interventions a possible option in online social environments, in which every user undergoes possible limitations or restrictions in a fair mode, based on his own behavior and not on the community in which he takes part. Consequently, relatively nontoxic users can experience a healthier, safer, and generally better online experience, thanks to the employment of prompt, efficient, and fine-level moderating interventions by online platform holders.
References
1. Munger, K.: Tweetment effects on the tweeted: experimentally reducing racist harassment. Polit. Behav. 39(3), 629–649 (2017). https://doi.org/10.1007/s11109-016-9373-5
2. Thomas, P.B., Riehm, D., Glenski, M., Weninger, T.: Behavior change in response to subreddit bans and external events. IEEE Trans. Comput. Social Syst. 8(4), 809–818 (2021)
3. Miller, B.: "Dude, Where's Your Face?" Self-presentation, self-description, and partner preferences on a social networking application for men who have sex with men: a content analysis. Sex. Cult. 19(4), 637–658 (2015). https://doi.org/10.1007/s12119-015-9283-4
4. Trujillo, A., Cresci, S.: Make Reddit great again: assessing community effects of moderation interventions on r/The_Donald. arXiv:2201.06455 (2022)
5. Marwick, A.E., Caplan, R.: Drinking male tears: language, the manosphere, and networked harassment. Fem. Media Stud. 18(4), 543–559 (2018)
6. Davies, L.: Gender, education, extremism and security. Compare 38(5), 611–625 (2008)
7. Farrell, T., Fernandez, M., Novotny, J., Alani, H.: Exploring misogyny across the manosphere in Reddit (2019)
8. Pennacchiotti, M., Popescu, A.M.: A machine learning approach to Twitter user classification. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 5, no. 1, pp. 281–288 (2021)
9. Guimarães, R.G., Rosa, R.L., De Gaetano, D., et al.: Age groups classification in social network using deep learning. IEEE Access 5, 10805–10816 (2017)
10. Tavares, G., Mastelini, S.M., Barbon Jr., S.: User classification on online social networks by post frequency. In: Anais do XIII Simpósio Brasileiro de Sistemas de Informação, Lavras, pp. 464–471 (2017)
11. Tinati, R., Carr, L., Hall, W., et al.: Identifying communicator roles in Twitter. In: Proceedings of the 21st International Conference Companion on World Wide Web, pp. 1161–1168. ACM (2012)
12. Gothard, K.C.: Exploring incel language and subreddit activity on Reddit. UVM Honors College Senior Theses 408 (2020). https://scholarworks.uvm.edu/hcoltheses/408
13. Maxwell, D., Robinson, S.R., Williams, J.R., et al.: "A Short Story of a Lonely Guy": a qualitative thematic analysis of involuntary celibacy using Reddit. Sex. Cult. 24, 1852–1874 (2020)
14. Baumgartner, J., Zannettou, S., Keegan, B., Squire, M., Blackburn, J.: The Pushshift Reddit dataset. In: The 14th International AAAI Conference on Web and Social Media (ICWSM 2020), pp. 830–839 (2020)
15. Chandrasekharan, E., Jhaver, S., Bruckman, A., Gilbert, E.: Quarantined! Examining the effects of a community-wide moderation intervention on Reddit (2020)
16. Chandrasekharan, E., Pavalanathan, U., Srinivasan, A., et al.: You can't stay here: the efficacy of Reddit's 2015 ban examined through hate speech. In: Proceedings of ACM Human-Computer Interaction CSCW, vol. 1, Article 31, p. 22 (2017)
17. Rieder, B., Skop, Y.: The fabrics of machine moderation: studying the technical, normative, and organizational structure of Perspective API. Big Data & Soc. 8(2) (2021)
18. Menzie, L.: Stacys, Beckys, and Chads: the construction of femininity and hegemonic masculinity within incel rhetoric. Psychol. Sexual. 13(1), 69–85 (2020)
19. Zannettou, S.: "I Won the Election!": an empirical analysis of soft moderation interventions on Twitter. In: The 15th International AAAI Conference on Web and Social Media, pp. 865–876 (2021)
20. Cresci, S., Trujillo, A., Fagni, T.: Personalized interventions for online moderation. In: The 33rd ACM Conference on Hypertext and Social Media (HT 2022), pp. 248–251. ACM (2022)
Decentralized Networks Growth Analysis: Instance Dynamics on Mastodon
Eduard Sabo(B), Mirela Riveni, and Dimka Karastoyanova
Bernoulli Institute, University of Groningen, Groningen, The Netherlands
[email protected], {m.riveni,d.karastoyanova}@rug.nl
Abstract. Federated social networks have become an appealing choice as alternatives to mainstream centralized platforms. In the current global context, where the user's activity on various social networks is monitored, influenced and manipulated, alternative platforms that offer the possibility of owning and controlling one's own data are of great importance. Mastodon stands out among decentralized alternatives in the fediverse. In this study, we conduct a time-based dynamics analysis of Mastodon instances within a specific period. Our results show a growth pattern of instances in terms of accounts in certain periods of time, and due to social events, reinforcing our assumption of it being already trusted as a decentralized platform. Our work holds significance in the wider context of studying and understanding the adoption and evolution of decentralized platforms as ethical alternatives to Big Tech platforms.

Keywords: social networks · fediverse · Mastodon · decentralized systems · privacy in networks

1 Introduction
The fediverse is a network of networks of federated servers, i.e., servers that operate in a decentralized manner. It mostly uses open-source software and ActivityPub as a communication protocol, which is also a World Wide Web Consortium (W3C) [12] standard. Networks in the fediverse have come to the attention of the general public mostly in recent years, as they represent more ethical and privacy-respecting alternatives to centralized Big Tech networks. Monitoring, profiling, and privacy issues in general have existed on networks centrally controlled by companies, as data collection and (third-party) data sharing is their main business model. Decentralized networks are seen as more ethical because they give the opportunity of owning and controlling one's data; there is no data collection and sharing for profit. Thus, they are alternatives that currently respect freedom from non-interference, as Coeckelbergh discusses similar freedom issues concerning AI in [3]. This study is focused on the most popular and growing platform on the fediverse, Mastodon. Mastodon uses the aforementioned ActivityPub protocol, which provides a client-to-server API for content management and a server-to-server API. The
protocol utilizes the term "actors" to represent users who have accounts on servers. Servers are often called "instances". Each user has an inbox for incoming messages and an outbox for outgoing messages. Within the client-to-server architecture, notifications are published to the sender's outbox, and to view them, the actors must request access. In federated server settings, notifications are dispatched directly to the intended recipient's inbox, and only this setup offers subscription functionalities. Moreover, the sender needs a followers list, and non-federated actors must know all senders. This fosters a unique inter-server communication environment where data is stored and shared selectively based on actor type and server federation [6]. Mastodon's software is free and open-source. A user can establish an independent Mastodon instance, or register on one of the instances that offer account registration, and accounts on different instances can communicate with each other through a federated timeline. Three types of feeds can be accessed: a) the Home feed, where the user can see posts from people they follow; b) the Local feed, where the user can see posts from all people on the specific instance they belong to; c) the Federated feed, where the user can see posts from all other instances of Mastodon [8]. As opposed to relying on recommender algorithms that track users' behaviour and show posts accordingly to increase engagement, as centralized platforms do, these three types of feeds offer a more randomized selection of posts. Additionally, there is an Explore feed where the user can view the most popular posts by users in various instances. While most Mastodon instances are public, there are also private ones, where the owner of the instance can admit people to the server at their discretion. Mastodon provides functionalities such as posting, replies, favourites, bookmarks and hashtags. Mastodon saw a spike in popularity around November 2022, after the acquisition of Twitter and the switch of ownership in October 2022, a change that many Twitter users disliked [21]. The number of unique users on Mastodon almost doubled since the Twitter acquisition, from just a little less than 5 million in early November 2022 to 8.7 million in March 2023 [19]. We also demonstrate this growth for some instances in this work. Our motivation is to study the adoption of these decentralized alternatives and their growth rate, investigating the following main research questions:
– what insights about time-based dynamics and account-creation growth rates can we identify on Mastodon?
– what insight do centrality metrics give us regarding influential accounts on Mastodon?
– what initial insight can we get regarding group and community structure on Mastodon?
In this work, we have analyzed the effects of the recent influx of users and instances on the dynamics and structure of Mastodon, motivated by the opportunity to investigate how a social networking platform evolves during a period of rapid growth. We report a longitudinal analysis of several Mastodon instances and our insights into the structure of the largest Mastodon instance. The structure of the paper is as follows. In Sect. 2 we discuss related work, we
elaborate on our dataset and methodology in Sect. 3, Sect. 4 presents our analysis and results, and we conclude the paper, also discussing future work in Sect. 5.
2 Related Work
Research on Mastodon instance dynamics has been scarce, but some recent work exists; Mastodon is a fairly new network, having existed since 2016. We discuss some relevant work next.
2.1 Network Analysis of Mastodon
Zignani et al. conducted one of the first large-scale studies on Mastodon users in 2018 [23]. The research focused on analyzing the Mastodon dataset existing at the time and studying the network growth and relationships between instances. They found differences with mainstream social media platforms, as users tend to follow other users and instances based on interests rather than popularity. In 2021, more extensive research was conducted by La Cava et al. in [8]. They built upon Zignani's datasets and analyzed the data from that time, as Mastodon had grown fivefold since the last large-scale study. This new study reinforces Zignani's findings and provides further insight into connections between users from multiple instances within the fediverse. La Cava et al. studied network instances of Mastodon on a macroscopic and mesoscopic level and analyzed how these instances evolve. This study concluded that Mastodon has achieved "structural stability and a solid federative mechanism among instances" [8]. In 2022, La Cava et al. followed up their research by studying the relations and roles of the users in Decentralized Online Social Networks (DOSNs) [9]. They state that links between users on Mastodon are interest-based and not artificially stimulated, resulting in a decoupled network. The researchers found that there are two main roles a user can have on the platform: bridge or lurker. A bridge user is someone who is active in more than one instance and acts as a bridge between these instances. A lurker is someone who rarely contributes on a Mastodon instance, but they are considered active users as they stay online on the platform and "consume information". The aforementioned works report positive conclusions regarding Mastodon, such as its ability to enable community autonomy, technical development as a social enterprise, quality engagement, and niche communities [24].
2.2 User Migration from Twitter to Mastodon
Zia et al. analyzed the migration of users from Twitter to Mastodon in the weeks around the Twitter acquisition. They found that new users mostly registered on a popular instance, such as mastodon.social. They explain this as a metric for centralization in Mastodon, as 96% of users are registered in the top 25% of the biggest instances [22], and users do this because they are used to networking on large social media platforms like Twitter. Moreover, they state that
only a fraction of users who migrated to the largest instance of Mastodon will migrate again to a more specific instance based on their preferences for the topic. Furthermore, Zia et al. identify the two main reasons why a user would leave Twitter: 1) Ideological reason - the user does not agree with the new company’s actions; 2) Following account reason - people they follow migrated there already. In 2023, La Cava et al. studied the user migration from Twitter to Mastodon using the same timeframe examined by Zia et al. The authors focus on the structure of the social media platform, the engagement of community members, and the language they use to communicate, all from the perspective of Twitter and what drives a Twitter user to migrate to Mastodon. The results are surprising, as they discovered that users networking in sparse communities are more likely to migrate to Mastodon [7]. Additionally, users who engaged in conversations on Twitter regarding popular migration hashtags, such as #TwitterMigration, have a greater tendency to migrate to Mastodon. The authors in [22] have found that one of the reasons for people adopting Mastodon is related to data control, because it provides the ability to control personal data, and consequently also more control over the use of personal data by (third-party) data mining activities, as opposed to Big Tech platforms, in which users have no control over their data, except for what is provided by legislation such as e.g., GDPR in the EU. Centralization tendencies in the context of account distribution on Mastodon are studied by the authors in [11], who report that 96% of users join 25% of the largest instances.
3 Methodology

3.1 Dataset
To comprehensively analyze Mastodon, a thorough data collection strategy was essential. This involved data crawling, which facilitated the gathering of extensive and meaningful information about both accounts and instances of Mastodon. To accomplish this, a series of software crawlers were developed in C#, designed to interact with the Mastodon REST APIs [13]. It was essential to select a method that would permit access to the necessary data without the risk of modifying the content of the server. Upon execution of the GET methods, the resulting data was received in JSON format. The Mastonet package [10], a .NET library, was utilized for efficient handling of the Mastodon API methods in C#, as it is specially designed to facilitate easy interaction with the Mastodon APIs. The data for the 5 instances was collected between the 15th of April and the 7th of June, and the data for single-user instances was collected on the 6th of August.
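The crawlers themselves were written in C# on top of Mastonet; purely as an illustration of the kind of read-only GET requests involved, the Python sketch below queries documented Mastodon REST endpoints. The instance URL and account id are placeholders, and pagination and rate-limit handling are omitted.

```python
# Read-only crawl sketch (a Python stand-in for the C#/Mastonet crawlers).
# Instance URL and account id are placeholders; no server content is modified.
import requests

BASE = "https://mastodon.social"  # placeholder instance

def get_json(path, params=None):
    resp = requests.get(f"{BASE}{path}", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()  # Mastodon answers GET requests with JSON

instance_info = get_json("/api/v1/instance")           # general instance information
account = get_json("/api/v1/accounts/1")               # general account information
following = get_json("/api/v1/accounts/1/following",   # account following information
                     params={"limit": 80})

print(instance_info.get("title"), account.get("created_at"), len(following))
```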
3.2 Ensuring the Validity of Collected Data
To ensure the validity and reliability of this research, careful data cleaning and pre-processing steps were applied post-crawling. This included handling missing or inconsistent data, such as null values, and formatting the data appropriately
for further analysis. The gathered data involving certain account information, such as user ID, username, account name or display name, is deliberately excluded from this work in accordance with privacy rules and our principles of protecting user identities. Even IDs were anonymized by adding noise. This approach is designed to enhance the transparency of our investigative results while ensuring that the privacy of a user is maintained. We give utmost importance to privacy, so we are guided not only by the GDPR but also by privacy as a human-rights principle. Our data served for an in-depth longitudinal examination of the evolution and progression of Mastodon as a DOSN. Table 1 shows a summary of the dataset.

Table 1. Summary of data collected from all five instances

Mastodon instance      | Type of data collected         | Amount of data collected
mastodon.social        | general account information    | 87,210 accounts (8% of userbase)
mastodon.social        | account following information  | 864,588 accounts
mastodon.cloud         | general account information    | 125,329 accounts (50% of userbase)
mstdn.social           | general account information    | 30,728 accounts (15% of userbase)
mastodon.online        | general account information    | 23,885 accounts (13% of userbase)
mastodon.world         | general account information    | 21,608 accounts (13% of userbase)
mastoturk.org          | general account information    | 40 accounts
single-user instances  | general instance information   | 4908 instances

4 Dynamics Analysis and Results
This section presents the analysis of the data gathered from multiple Mastodon instances. Python was utilized for the analysis of the dataset. The NetworkX and NetworKit libraries were used for analyzing the mastodon.social instance, focusing on gaining insights into centrality metrics, communities, and groups. NetworkX is a library that specializes in complex network and graph creation, manipulation, and analysis [4]. NetworKit is a comprehensive open-source toolkit that specializes in high-performance network analysis [20]. The analysis of the data was conducted on a personal computer and on Habrok, the high-performance computing cluster of the University of Groningen [2], which proved efficient for large-scale network analysis.
4.1 Account Number Dynamics
This section examines the dates of account creation on the instances across different time checkpoints. This investigation offers insight into the evolution, adaptation, and patterns of each of the five Mastodon instances from which general account information was gathered, as listed in Table 1. The visualization of the data
throughout this study has been implemented in Python, specifically utilizing the Matplotlib [5] and Pandas [14] libraries. The graph in Fig. 1 offers a visual representation of the evolution of the instances over the years since each instance was created. We found that there was a growth peak across all 5 instances in 2022. However, 2023 was marked by a sharp contraction for all instances except mastodon.social. The latter kept an almost constant growth of users over the first 5 months of 2023. Figure 2 shows that all instances experienced user growth in November 2022, although for mastodon.social this was not significant. Furthermore, we see a continuous decrease in new accounts after December 2022 for all instances going into 2023.
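For reference, monthly counts of the kind plotted in Figs. 1 and 2 can be produced with a few lines of Pandas and Matplotlib; the inline timestamps below stand in for the crawled created_at fields.

```python
# Sketch: monthly counts of newly registered accounts from creation timestamps.
import pandas as pd
import matplotlib.pyplot as plt

# Stand-in data: one created_at timestamp per collected account.
df = pd.DataFrame({"created_at": pd.to_datetime([
    "2022-04-11", "2022-11-05", "2022-11-19", "2022-12-02", "2023-02-08",
])})

monthly = df.set_index("created_at").resample("M").size()
monthly.plot(kind="bar", xlabel="Month", ylabel="Newly registered accounts")
plt.tight_layout()
plt.show()
```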
Fig. 1. Yearly evolution of the number of registered users on 5 instances
Fig. 2. Monthly evolution of the number of newly registered users on five instances after the Twitter ownership change
4.2 Network-Influence Based on Centrality Metrics
We applied three well-known centrality metrics to our dataset: Degree Centrality, Closeness Centrality and Eigenvector Centrality. The subsequent analysis
derived from each centrality measure facilitated the identification of accounts of high relevance and influence within the network, specifically those ranking in the top 20 of each list [17]. The centrality algorithms were applied to 87,210 mastodon.social accounts, which have a total following of 864,588 accounts from 18,847 Mastodon instances. Degree Centrality quantifies the number of connections, or neighbours, of an account. An account's centrality increases with its number of connections. Accounts with a high degree centrality are typically those exhibiting high levels of activity or interaction within the network [17]. Closeness Centrality offers another perspective on node significance within a network, emphasizing the 'distance' between nodes rather than just the quantity of connections [18]. The metric is calculated as the inverse of the average shortest-path distance between a vertex and all other vertices. Nodes with high Closeness Centrality have shorter average distances, enabling efficient information dissemination [17]. In the context of mastodon.social, nodes with high Closeness Centrality play crucial roles in rapid information spread: proximity to other nodes aids effective communication, and higher Closeness Centrality implies greater centrality within the digital community [17]. Eigenvector Centrality measures the importance of a node by the quantity and quality of its connections, taking the centrality of those connections into account. High Eigenvector Centrality involves connections with other highly central nodes, amplifying an account's significance [18]. Accounts exhibiting high Eigenvector Centrality meet at least one of the following criteria: they have many connections, they are connected to important neighbours with high centrality, or both.
Centrality Analysis Results. Through an evaluation of the results of these centrality measures, we determined that a group of six accounts matched consistently across all three centralities, indicating their importance within the network. They were present in the top 20 accounts across all centrality measures. The recurrence of these accounts across all three centrality measures acknowledges their significance within the mastodon.social network and signifies considerable influence. Figure 3 shows the results of the centrality metrics. The six accounts that match across all centralities are labelled as 'account number 1' through 'account number 6'. In order to protect the privacy of the accounts and adhere to the privacy guidelines explained in Sect. 3.2, the original user IDs of the accounts within the top 20 of all centrality metrics are not shown.

Fig. 3. Centrality metrics results
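As an indication of how these measures are computed in practice, the sketch below evaluates all three on a toy directed "following" graph with NetworkX (the study also used NetworKit for scale); the edge list is illustrative, and note that NetworkX bases directed eigenvector centrality on incoming edges.

```python
# Sketch: the three centrality measures on a toy directed "following" graph.
import networkx as nx

# Toy stand-in for the crawled follower data: u -> v means "u follows v".
G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (1, 3), (4, 1), (5, 1), (5, 2)])

deg = nx.degree_centrality(G)                      # normalized connection count
clo = nx.closeness_centrality(G)                   # inverse average shortest-path distance
eig = nx.eigenvector_centrality(G, max_iter=1000)  # importance of a node's connections

def top(scores, k=3):
    return sorted(scores, key=scores.get, reverse=True)[:k]

print("degree:", top(deg))
print("closeness:", top(clo))
print("eigenvector:", top(eig))
```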
4.3 Community Analysis Using the Louvain Algorithm
We also conducted a community detection analysis, utilizing the Louvain algorithm. The Louvain algorithm was developed for discovering communities in large networks with high-modularity partitions. Additionally, it helps reveal the complete hierarchical community structure inherent in the network, providing different views for community detection [1]. The Louvain algorithm was applied to the account following information dataset, which has a total of 864,588 accounts from multiple Mastodon instances.
Results of the Community Analysis. The community analysis with the Louvain algorithm found 98 communities in the gathered data, which includes all the accounts followed by the collected source users. Of the 98 communities, the community with the highest number of nodes has 236,477 nodes. Apart from this, many communities have fewer than 100 nodes; the smallest ones have 2 nodes. To gain more insight into these communities, a filter was applied on the number of nodes, keeping only communities with more than 100 nodes; 37 communities remained. From an initial dataset of 87,210 accounts from mastodon.social, we found that the users registered on it belong to communities spanning 18,847 unique instances, showing a great diversity of communities, as shown in Table 2. These results show that the users of mastodon.social are not stuck inside one instance; they want to see posts and read opinions from users with different backgrounds on different instances. Within those 18,847 instances, all five instances from which we collected general account information appear, as shown in Table 3, within the top 15 most popular instances in the results of the community analysis. The 6 accounts that appear within the top 20 of all centrality metrics, as explained in Sect. 4.2, were searched for within the obtained communities. The account that has the highest values among all centrality metrics, referred to as account number 1, belongs to the largest community. Furthermore, four users, namely accounts number 3, 4, 5 and 6, all belong to the second largest community, which has 160,374 nodes. The user referred to as account number 2 belongs to the third largest community, which has 104,886 nodes. These accounts belong to well-known people, such as the founder of Mastodon, a famous book author, a well-known Star Trek actor, a Washington Post tech reporter, and the official Mastodon account. These accounts are influential based on the centralities and the community analysis.
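For reference, recent NetworkX releases ship a Louvain implementation; the sketch below runs it on a small built-in graph as a stand-in for the 864,588-account following network and applies the same community-size filter (with a smaller toy threshold).

```python
# Sketch: Louvain community detection and size filtering with NetworkX (>= 2.8).
import networkx as nx

G = nx.karate_club_graph()  # toy stand-in for the following graph
communities = nx.community.louvain_communities(G, seed=42)

sizes = sorted((len(c) for c in communities), reverse=True)
print(f"{len(communities)} communities; largest has {sizes[0]} nodes")

# Keep only communities above a size threshold (the paper used 100 nodes).
threshold = 5
big = [c for c in communities if len(c) > threshold]
print(f"{len(big)} communities with more than {threshold} nodes")
```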
Table 2. Type and Count of instances

Type                                  | Count
Number of unique instances            | 18,847
Number of nodes                       | 864,588
Number of mastodon.social nodes       | 321,717
Number of Non-mastodon.social nodes   | 542,871

Table 3. Instance and Count

Instance          | Count
mastodon.social   | 321,717
mstdn.social      | 25,494
mastodon.online   | 19,245
mas.to            | 16,670
mastodon.world    | 14,387
fosstodon.org     | 9,387
pawoo.net         | 9,067
infosec.exchange  | 9,025
hachyderm.io      | 8,166
mastodonapp.uk    | 7,618
mstdn.jp          | 7,329
mastodon.cloud    | 7,240
troet.cafe        | 6,704
mastodon.lol      | 6,580
mastodon.art      | 6,412

4.4 Following Group-Based Analysis
This section illustrates the analysis of user groups present within the collected data from the mastodon.social platform, focusing specifically on the interaction patterns captured in the 'following' account data. The dataset analyzed is the account following information dataset, which has a total of 864,588 accounts from multiple Mastodon instances. Initially, the structure of the dataset represented directed graphs, where each edge (source id and target id pair) corresponds to a 'following' relationship. It is important to note that, in the original context, not all 'following' relationships are mutual. Out of an aggregate of 8.36 million edges in the dataset, 718,824 edges that represent mutual 'following' relationships were extracted, yielding 359,412 pairs of nodes. This subset, accounting for approximately 8.6% of the total number of edges, is the primary focus of this section of the analysis.
Results of the Analysis of Groups. The dataset was transformed into multiple undirected graphs, each edge now signifying a mutual 'following' relationship between two nodes. This transformation facilitates the examination of 'cliques' within the collected data. The clique algorithm was applied only to undirected graphs with more than 2 nodes. Such groups are characterized by a high density of internal connections and are often of significant interest in the exploration of social dynamics, influence spread, and community detection. The results are shown in Table 4.

Table 4. Summary of the group analysis

Type                                                        | Count
Total number of undirected graphs                           | 443
Number of undirected graphs with more than 2 nodes          | 109
Size of the largest graph                                   | 49,399
Total number of cliques                                     | 2,130,032
Average number of cliques in graphs with more than 3 nodes  | 19,541.58
Size of the largest clique                                  | 27

Moreover, the top 20 nodes manifesting the highest frequency within the identified cliques were further isolated. The node exhibiting the greatest frequency appears in 1,338,364 cliques, which constitutes 63% of all discovered cliques, and the node occupying the 20th position in this ranking appears in 483,960 cliques, corresponding to 23% of all cliques. A particularly fascinating discovery within this set of top 20 nodes is that all nodes represent users from a
specific country who are active on mastodon.social. However, none of these users appears in the maximal clique of 27 nodes identified in the analysis. According to NetBlocks, a globally recognised internet monitor operating at the intersection of digital rights, cybersecurity, and internet governance, several social media platforms, such as Twitter, Facebook, and Instagram [15], were restricted in the specific country at the point when we saw this peak. Upon conducting further research, we found no indications of any such restrictions imposed on the use of Mastodon. These findings suggest that a substantial number of tightly-knit user groups from the specific country were active on mastodon.social until the spring of 2020. It is highly plausible that these groups migrated from the aforementioned restricted platforms, leveraging mastodon.social as an alternative platform. We do not want to reveal the specific country, for security reasons, but it is clear that at the point we saw the spike, the use of other platforms was restricted there for political reasons.
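The mutual-follow extraction and clique enumeration described above can be reproduced compactly with NetworkX, as in the sketch below; the edge list is a toy stand-in for the following data.

```python
# Sketch: keep only reciprocated "following" edges, then enumerate maximal cliques.
import networkx as nx

# Toy directed following data: u -> v means "u follows v".
directed = nx.DiGraph([(1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1), (4, 1)])

# An undirected edge is kept only when both directions exist (mutual follow).
mutual = nx.Graph((u, v) for u, v in directed.edges() if directed.has_edge(v, u))

cliques = [c for c in nx.find_cliques(mutual) if len(c) > 2]
print("maximal cliques with more than 2 nodes:", cliques)  # e.g. [[1, 2, 3]]
print("largest clique size:", max(map(len, cliques)))
```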
4.5 Analysis of Turkish Accounts
Following our discovery of user groups from a specific country described in Sect. 4.4, we decided to further pursue detecting groups of users from a certain country around a specific event. Thus, we looked into users from Turkey following the earthquake that occurred in February 2023. We retrieved 120 Turkish users over 6 instances, including the five initial instances and mastoturk.org. This natural event happened on the 6th of February, and 2 days after the earthquake, Twitter was restricted in Turkey [16]. Indeed, we found a number of newly registered Turkish users on Mastodon, as shown in Fig. 4. First, there is the November surge in new users caused by the Twitter takeover, followed by a small increase in registered users in February 2023, right after the earthquake.
4.6 Individual-Account Instance Analysis
We also conducted an analysis of single-user instances. In total, we retrieved 4908 instances that have only one user. We collected data regarding instance creation and found that, out of these 4908 instances, 1959 were created in November 2022.
Fig. 4. Monthly evolution of the number of newly registered users from Turkey
5 Conclusion and Future Work
The main objective of this study was to conduct a comprehensive study of Mastodon and its account growth. mastodon.social is the largest instance within Mastodon and provides the most comprehensive information among all instances. The collection of data proved to be a lengthy process, taking place over the course of two months. Although the gathered data does not encapsulate all accounts within an instance, its comparison with the information available on The Federation, a website gathering statistics about nodes in the fediverse, confirms the most important result: Mastodon instances encountered an increase in the number of newly registered users in November 2022. Upon evaluating the communities within multiple Mastodon instances, we found that the accounts from a single instance interacted with accounts on more than 18,000 unique instances. These results confirm the significant diversity in a decentralized network. The study of groups within mastodon.social revealed millions of groups of users within less than 10% of the gathered data. This indicates that there are multiple closely connected groups of people who use Mastodon to network among themselves. Furthermore, this analysis determined that Mastodon is seen as a platform that provides a safe online environment in terms of freedom of expression and freedom from non-intervention. Our future work focuses on studying whether echo chambers exist on Mastodon.
References
1. Blondel, V.D., Guillaume, J.L., Lambiotte, R., Lefebvre, E.: Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008(10), P10008 (2008)
2. Center for Information Technology, University of Groningen: Habrok high-performance computing cluster. Technical report, University of Groningen (2023). https://wiki.hpc.rug.nl/habrok/start. Accessed multiple times during June and July 2023
3. Coeckelbergh, M.: The Political Philosophy of AI: An Introduction. Wiley, New York (2022)
4. Hagberg, A.A., Schult, D.A., Swart, P.J.: Exploring network structure, dynamics, and function using NetworkX. In: Proceedings of the 7th Python in Science Conference (SciPy 2008), pp. 11–15. Pasadena, CA, USA, August 2008
5. Hunter, J.D.: Matplotlib: a 2D graphics environment. Comput. Sci. Eng. 9(3), 90–95 (2007)
6. Ilik, V., Koster, L.: Information-sharing pipeline. Serials Librarian 76(1–4), 55–65 (2019)
7. La Cava, L., Aiello, L.M., Tagarelli, A.: Get Out of the Nest! Drivers of social influence in the #TwitterMigration to Mastodon (2023)
8. La Cava, L., Greco, S., Tagarelli, A.: Understanding the growth of the fediverse through the lens of Mastodon. Appl. Netw. Sci. 6(1), 64 (2021)
9. La Cava, L., Greco, S., Tagarelli, A.: Information consumption and boundary spanning in decentralized online social networks: the case of Mastodon users. Online Soc. Netw. Media 30, 100220 (2022)
10. Lacasa, G.G.: Mastonet (2017). https://github.com/glacasa/Mastonet. Accessed 20 Apr–7 June 2023
11. Lee, K., Wang, M.: Uses and gratifications of alternative social media: why do people use Mastodon? (2023)
12. Lemmer-Webber, C., Tallon, J., Shepherd, E., Guy, A., Prodromou, E.: ActivityPub. W3C Recommendation 20180123, World Wide Web Consortium, January 2018. https://www.w3.org/TR/activitypub/
13. Mastodon: Mastodon API documentation. https://docs.joinmastodon.org/methods/. Accessed 20 Apr–7 June 2022
14. McKinney, W.: Data structures for statistical computing in Python. In: van der Walt, S., Millman, J. (eds.) Proceedings of the 9th Python in Science Conference, pp. 56–61 (2010)
15. Netblock.org: Twitter, Facebook and Instagram restricted in Venezuela on day of planned protests, November 2019. https://www.netblock.org/. Accessed 15 July 2023
16. Netblock.org: Twitter restricted in Turkey in aftermath of earthquake, February 2023. https://netblocks.org/reports/twitter-restricted-in-turkey-in-aftermath-of-earthquake-oy9LJ9B3. Accessed 28 Aug 2023
17. Newman, M.: Measures and metrics. In: Networks. Oxford University Press, July 2018
18. Riveni, M.: Mathematics of networks - centrality. Lecture notes, Social Network Analysis, University of Groningen (2022)
19. SocialHome: The Federation (2023). https://the-federation.info/. Accessed 30 Mar 2023
20. Staudt, C., Sazonovs, A., Meyerhenke, H.: NetworKit: an interactive tool suite for high-performance network analysis. CoRR abs/1403.3005 (2014). https://arxiv.org/abs/1403.3005
21. Stokel-Walker, C.: Twitter may have lost more than a million users since Elon Musk took over. MIT Technol. Rev. (2022)
22. Zia, H.B., He, J., Raman, A., Castro, I., Sastry, N., Tyson, G.: Flocking to Mastodon: tracking the great Twitter migration (2023)
23. Zignani, M., Gaito, S., Rossi, G.P.: Follow the "Mastodon": structure and evolution of a decentralized online social network, pp. 541–550 (2018)
24. Zulli, D., Liu, M., Gehl, R.: Rethinking the "social" in "social media": insights into topology, abstraction, and scale on the Mastodon social network. New Media Soc. 22(7, SI), 1188–1205 (2020)
Better Hide Communities: Benchmarking Community Deception Algorithms
Valeria Fionda(B)
DeMaCS, University of Calabria, Rende, Italy
[email protected]
Abstract. This paper introduces the Better Hide Communities (BHC) benchmark dataset, purposefully crafted for gauging the efficacy of current and prospective community deception algorithms. BHC facilitates the evaluation of algorithmic performance in identifying the best set of updates to apply to a network to hide a target community from community detection algorithms. We believe that BHC will help in advancing the development of community deception algorithms and in promoting a deeper understanding of algorithmic capabilities in applying deceptive practices within communities.
Keywords: social networks · community deception · benchmark

1 Introduction
The detection and analysis of communities within networks have garnered significant attention across diverse fields, as they offer utility in a wide array of scenarios, spanning from social networks [24] to biological systems [6,7] and online platforms [8,19]. Community detection algorithms hold pivotal importance in revealing concealed patterns and understanding the underlying structure of complex networks. Nevertheless, the opposite problem of community deception [9,23] has recently caught the attention of the research community. Community deception aims at hiding a specific community from detection algorithms by generating deceptive connections or profiles (nodes), as well as eliminating existing connections, to manipulate or mislead the process of identifying the targeted community. Community deception finds application in several situations, such as underground organizations, discreet online collectives, or networks of political dissidents. Despite the abundance of datasets for traditional community detection, the lack of representative datasets for community deception is a significant hurdle in this research area. Indeed, although several community deception algorithms have been proposed, there is no way to fairly compare and evaluate their performance and robustness. We believe that the availability of a reference benchmark dataset is critical to foster the development of novel techniques and facilitate the comparison of different approaches.
In this paper, we propose a benchmark dataset specifically designed to address the challenges of evaluating community deception algorithms. The dataset, named "Better Hide Communities" (BHC), serves as a reference resource for assessing the effectiveness of existing and prospective algorithms for community deception. BHC has been designed with the following key points in mind:
• Network Heterogeneity: The dataset includes diverse network structures, ensuring that algorithms evaluated on BHC can handle different types of networks effectively.
• Community Detection Algorithms: BHC considers a comprehensive set of community detection algorithms that are commonly used to discover communities. These detection algorithms are used as adversaries to identify the best network editings for hiding a given community.
• Evaluation Metric: BHC adopts the Deception Score [9] as the evaluation metric. The deception score captures reachability preservation among target community nodes, the spread of the target community across communities after network modification, and the hiding of the members of the target community in large communities.
• Optimal Deception Strategy: For each community (discovered on each network by each detection algorithm), BHC provides the optimal set of network link modifications (link additions and deletions) that best hide the community from a given community detection algorithm. These optimal strategies enable precise evaluation and comparison of the performance of different community deception algorithms.
The proposed benchmark dataset fills a crucial gap in the field of community deception algorithms by providing a standardized platform for evaluation. We believe that this dataset will foster innovative approaches, encourage collaborations, and ultimately pave the way for advancements in the field of community deception research.
2 Related Work
Community deception [9], or hiding [23], studies how to hide a target community C inside a community structure from community detection algorithms. The idea is to find the best (deception-wise) set of edge updates by optimizing some function. The main body of research on this topic concerns undirected networks. Nagaraja [15] proposed a method to contrast detection and hide a community by adding a certain number of edges. Nodes involved in edge addition are chosen by considering vertex centrality measures such as degree centrality. Waniek et al. [23] and Fionda and Pirrò [9] devise deception optimization functions based on modularity [16]. The underlying idea is that several community detection algorithms exploit the value of modularity (the higher, the better). Then, applying edge updates to the network that minimize modularity should mislead community detection algorithms. Safeness-based deception [9] has been introduced to
correct for some drawbacks of modularity-based deception. In particular, with modularity-based deception, one needs to know the entire network and community structure to identify the best set of edge updates while Safeness-based deception only requires information about the target community members. Mittal et al. [14] devised a deception strategy called NEURAL aiming at reducing the permanence of the network for the target community. Permanence [4] is a vertex-centric metric that quantifies the containment of a node in a network community. Recently, some researchers proposed deception algorithms working on directed networks [11] and networks with attributes [12] or investigated the slightly different problem of hiding the entire community structure [13].
3 Preliminaries

3.1 Community Deception
The goal of community deception is to design algorithms to deceive community detection algorithms. In particular, given a community C, the goal is to determine a set of β edge updates so that C will not be discovered by community detection algorithms. A network G = (V, E) is an undirected graph that includes a set of n = |V| vertices and m = |E| edges. We denote by deg(u) = |{(u, v) ∈ E}| the degree of u. The set of communities (i.e., a community structure) discovered by some community detection algorithm A is denoted by 𝒞 = {C1, C2, ..., Ck}, where Ci ∈ 𝒞 denotes the i-th community. Given a community Ci, we distinguish between intra-community edges and inter-community edges. The set of intra-community edges E(Ci) is the set of edges of the form (u, v) : u, v ∈ Ci, where both endpoints are members of Ci. The set of inter-community edges Ẽ(Ci) is the set of edges of the form (u, v) : u ∈ Ci, v ∉ Ci, where one of the endpoints is external to Ci. Given a community Ci and a node u ∈ Ci, we indicate by E(Ci, u) (resp., Ẽ(Ci, u)) the set of intra-community (resp., inter-community) edges of u. The degree of a community is defined as δ(Ci) = Σ_{u ∈ Ci} δ(u), where δ(u) is the degree of node u. Given a network G = (V, E), we indicate by E⁺ and E⁻ the sets of edge additions and deletions, respectively, that should be applied to G.
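These definitions translate directly into code; below is a minimal helper, under the assumption that a community is given as a set of node identifiers.

```python
# Sketch: intra- and inter-community edges and community degree, per the
# definitions above (a community C is assumed to be a set of nodes).
import networkx as nx

def split_edges(G: nx.Graph, C: set):
    intra = [(u, v) for u, v in G.edges() if u in C and v in C]     # E(C)
    inter = [(u, v) for u, v in G.edges() if (u in C) != (v in C)]  # inter-community edges
    return intra, inter

G = nx.karate_club_graph()
C = {0, 1, 2, 3}
intra, inter = split_edges(G, C)
delta_C = sum(d for _, d in G.degree(C))  # community degree: sum of member degrees
print(len(intra), "intra-community edges;", len(inter), "inter-community edges;",
      "community degree:", delta_C)
```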
3.2 Deception Score
To build our benchmark dataset we used the deception score to find the best set of edge updates for each community. The deception score, introduced in [9], provides a quantitative assessment of how much a target community C has been discovered with respect to a community structure. In particular, given a community C and a community structure 𝒞 = {C1, C2, ..., Ck} found by some community detection algorithm, the deception score H(C, 𝒞) provides a value in the range [0, 1] taking into account three aspects: (i) the mutual reachability of C's members (via other members of C); (ii) the spread of C's members across the various communities; and (iii) how large such communities are. Formally, the deception score is defined as follows:
$$
H(C, \mathcal{C}) = \left(1 - \frac{|S(C)| - 1}{|C| - 1}\right) \times \left[\frac{1}{2}\left(1 - \max_{C_i \in \mathcal{C}}\{R(C_i, C)\}\right) + \frac{1}{2}\left(1 - \frac{\sum_{C_i \cap C \neq \emptyset} P(C_i, C)}{\left|\{C_i \in \mathcal{C} : C_i \cap C \neq \emptyset\}\right|}\right)\right]
$$
where $|S(C)|$ is the number of connected components in the subgraph induced by $C$'s members; $R$ is the recall of a detection algorithm $A$ with respect to the target community $C$, defined as $R(C_i, C) = \frac{\#\,C\text{'s members in } C_i \text{ found by } A}{|C|}$ for all $C_i \in \mathcal{C}$; and $P$ is the precision, defined as $P(C_i, C) = \frac{\#\,C\text{'s members in } C_i \text{ found by } A}{|C_i|}$ for all $C_i$ with $C_i \cap C \neq \emptyset$.
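Read literally, the score is straightforward to implement. The sketch below follows the formula above; it is our own rendering for illustration, while the reference implementation ships with the BHC repository.

```python
# Sketch: deception score H(C, structure) following the formula above.
import networkx as nx

def deception_score(G: nx.Graph, C: set, structure) -> float:
    # Reachability term: connected components of the subgraph induced by C.
    n_comp = nx.number_connected_components(G.subgraph(C))
    reach = 1 - (n_comp - 1) / (len(C) - 1)

    touched = [set(Ci) for Ci in structure if set(Ci) & C]  # communities hit by C
    recall = max(len(Ci & C) / len(C) for Ci in touched)
    precision = sum(len(Ci & C) / len(Ci) for Ci in touched) / len(touched)

    return reach * (0.5 * (1 - recall) + 0.5 * (1 - precision))

G = nx.karate_club_graph()
structure = nx.community.louvain_communities(G, seed=1)
print(round(deception_score(G, {0, 1, 2}, structure), 3))
```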
4 Better Hide Communities: A Benchmark for Appraising Community Deception Algorithms
In this section we present the Better Hide Communities (BHC) dataset and the results of optimal community deception. Code and data can be downloaded from https://github.com/vfionda/BHC.

4.1 Networks
BHC consists of a dataset of networks with 20∼40 nodes and an average degree of 2∼3. On these networks we were able to compute the optimal solution, in terms of deception score, by evaluating all possible subsets of edge modifications (additions and deletions) with a budget of 1∼3. The BHC dataset is composed of real and synthetic networks. As a real network, we considered Zachary's karate club [25], a social network of a university karate club reporting the interactions of pairs of members outside the club, made of 34 nodes and 78 edges. The synthetic networks have been generated using the NetworkX Python library. In particular, we generated Erdős–Rényi and Barabási–Albert graphs with the following properties (a generation sketch is given after this list):
• Erdős–Rényi: We generated two networks by specifying the number of nodes and edges so as to obtain a specific average node degree. In particular, we generated two Erdős–Rényi networks having respectively 20 and 50 nodes and 40 and 100 edges.
• Barabási–Albert: We generated two networks using Barabási–Albert preferential attachment by specifying the number of nodes and the number of edges attached to each new node. In particular, we generated two undirected Barabási–Albert networks having respectively 20 and 40 nodes and 36 and 76 edges.
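The synthetic graphs can be reproduced with NetworkX's standard generators, as in the sketch below; the seeds are our own assumption.

```python
# Sketch: generating the BHC synthetic networks with NetworkX generators.
import networkx as nx

# Erdos-Renyi G(n, m) graphs with the stated node/edge counts.
er_small = nx.gnm_random_graph(20, 40, seed=7)
er_large = nx.gnm_random_graph(50, 100, seed=7)

# Barabasi-Albert preferential attachment: each new node attaches m=2 edges,
# which yields m * (n - m) edges: 36 for n=20 and 76 for n=40.
ba_small = nx.barabasi_albert_graph(20, 2, seed=7)
ba_large = nx.barabasi_albert_graph(40, 2, seed=7)

for g in (er_small, er_large, ba_small, ba_large):
    print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```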
4.2 Detection Algorithms
For building the BHC benchmark we considered seven detection algorithms available in the CDlib library [3], as reported below:
• Leiden [22] (leid): an improvement of the Louvain algorithm [1], a multi-level modularity optimization algorithm;
• WalkTrap [18] (walk): based on the idea that random walks are more likely to stay within the same community;
• Greedy [5] (gre): based on a greedy modularity maximization strategy;
• InfoMap [20] (inf): returns the community structure that provides the shortest description length for a random walk;
• Eigenvectors [17] (eig): Newman's leading eigenvector method for detecting community structure based on modularity;
• Paris [2] (par): a hierarchical graph clustering algorithm inspired by modularity-based clustering techniques;
• Combo [21] (cmb): a modularity maximization implementation of the community detection algorithm called Combo.

4.3 BHC Benchmark Generation Procedure

Algorithm 1. BHC benchmark generation
1: function BHCGeneration(G, budget, A)
2:   𝒞 = A(G)
3:   res = []
4:   for C ∈ 𝒞 do
5:     intraDel = list of intra-C deletions
6:     intraAdd = list of intra-C additions
7:     interDel = list of inter-C deletions
8:     interAdd = list of inter-C additions
9:     for β ∈ {1, .., budget} do
10:      maxDec = 0
11:      bestMods = ∅
12:      for mods ∈ cmb(intraDel ∪ intraAdd ∪ interDel ∪ interAdd, β) do
13:        G′ = apply the network editings in mods to G
14:        for i = 1 to 5 do
15:          𝒞′ = A(G′)
16:          dec = computeDeceptionScore(C, G′, 𝒞′)
17:          if dec > maxDec then
18:            maxDec = dec
19:            bestMods = mods
20:      res.append((C, β, bestMods, maxDec))
21:   return res
Algorithm 1 shows the procedure implementing the exhaustive search approach used to discover the set of network updates that yields the maximum deception score for a given network G and detection algorithm A. Such a procedure, after computing the community structure 𝒞 obtained by applying A to G (line 2), iterates over all the communities in 𝒞 (line 4) to compute (for each budget of updates - line 9) the deception-wise best set of updates. In particular, after computing all the possible network modifications (lines 5–8), it tries all the
possible combinations (the cmb function) of network editings for a given budget of updates (line 12). After applying a set of updates to the original network (line 13), the procedure recomputes the community structure (line 15) and the deception score (line 16). To handle the non-determinism of some detection algorithms (e.g., Leiden), the community structure computation and, consequently, the deception score computation are taken as the best of five runs (for-cycle at line 14).
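A condensed Python rendering of this search is sketched below, using CDlib as the adversarial detector and the deception_score helper sketched in Sect. 3.2; it mirrors Algorithm 1 in simplified form and is not the released implementation.

```python
# Sketch: simplified exhaustive search for the optimal edge edits (cf. Algorithm 1).
# Assumes the deception_score helper sketched earlier; detector choice is illustrative.
from itertools import combinations
import networkx as nx
from cdlib import algorithms

def best_edits(G, target, budget, detect=algorithms.greedy_modularity, runs=5):
    dels = list(G.edges())
    adds = [(u, v) for u, v in combinations(G.nodes(), 2) if not G.has_edge(u, v)]
    best_score, best_mods = 0.0, None
    for mods in combinations(dels + adds, budget):
        H = G.copy()
        for u, v in mods:  # apply the candidate editings
            H.remove_edge(u, v) if H.has_edge(u, v) else H.add_edge(u, v)
        # Best of several runs to absorb detector non-determinism.
        score = max(deception_score(H, target, detect(H).communities)
                    for _ in range(runs))
        if score > best_score:
            best_score, best_mods = score, mods
    return best_score, best_mods
```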
Fig. 1. Deception score for all networks and communities for detection algorithms Combo, Greedy, Leiden and WalkTrap for budget β ∈ {1, 2, 3}.
4.4 Discussion
In this section we present the results of optimal deception for the networks and detection algorithms in the BHC dataset, accompanied by some general considerations. Figure 1 reports the value of the optimal deception score (up to 3 network editings) for the detection algorithms WalkTrap (Fig. 1 (a)), Greedy (Fig. 1 (b)), Leiden (Fig. 1 (c)) and Combo (Fig. 1 (d)) for each network and detected community. As can be noted, the deception score increases as the budget increases in all cases (the same behavior has been observed for the other detection algorithms not reported in the chart). The results show that Leiden and InfoMap present high variability in the community structure obtained by applying community detection to the same network in different
runs. This corresponds to a high variability of the deception score when the same set of network modifications is applied. Thus, to stay on the safe side, we also suggest considering a subset of BHC, called Safe-BHC, that only includes WalkTrap, Greedy, Eigenvectors, Paris, and Combo, which have proven to be the most stable. We then investigated the types of modifications belonging to the optimal set for each network, detection algorithm, and community. The cumulative results are reported as a stacked column chart in Fig. 2. For each detection algorithm, we summed the number of times an intra-community deletion, intra-community addition, and inter-community addition was selected in the optimal set over the networks, communities, and budgets (in no case was an inter-community edge deletion selected in the optimal set of editings). As can be noted, the types of modifications leading to the optimal deception score highly depend on the underlying detection algorithm. Indeed, for example, while for Combo, Greedy, and Paris the great majority of modifications are inter-community edge additions, for Eigenvectors and Leiden the major part of modifications are intra-community edge additions. This is an interesting result, since the majority of deception algorithms available today do not consider intra-community edge additions when computing the best set of network editings, even if the target community is disconnected (e.g., [9,14,23]).
Fig. 2. Cumulative type of network editings for all detection algorithms over the networks and budgets in the BHC dataset.
Figure 3 reports the distribution of the types of network editings belonging to the optimal set for the WalkTrap detection algorithm over all the networks and communities for budget 3. As can be noted, apart from Zachary's karate club, for which the optimal network editings include intra-community deletions and inter-community additions, for all the other networks and communities the
optimal deception score is reached by applying intra-community additions and inter-community additions, and for some networks and communities only intra-community additions are suggested (e.g., for community 1 in the Barabási–Albert network with 40 nodes).
Fig. 3. Types of network editings in the optimal set for the detection algorithm WalkTrap over all the networks in the BHC dataset and budget 3.
5 Conclusion and Future Work
The “Better Hide Communities” (BHC) benchmark dataset offers a challenging environment for evaluating the efficacy of deception algorithms. BHC serves as a driving force for researchers to refine algorithms designed to deceive community detection. An intriguing avenue for future research involves conducting a comparative analysis of existing deception algorithms on the BHC dataset. This endeavor would offer a comprehensive understanding of algorithmic performance, allowing researchers to identify trends, strengths, and weaknesses across different approaches. Such a comparative study could help establish best practices and guide the development of novel techniques that outperform existing solutions. Acknowledgments. This work was partially supported by MUR under PRIN project HypeKG Prot. 2022Y34XNM, CUP H53D23003710006; PNRR MUR project PE0000013-FAIR, Spoke 9 - WP9.2.
References
1. Blondel, V.D., Guillaume, J.-L., Lambiotte, R., Lefebvre, E.: Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008(10), P10008 (2008)
2. Bonald, T., Charpentier, B., Galland, A., Hollocou, A.: Hierarchical graph clustering using node pair sampling. arXiv preprint arXiv:1806.01664 (2018)
3. Cazabet, R., Rossetti, G., Milli, L.: CDlib: a Python library to extract, compare and evaluate communities from complex networks (extended abstract). In: Proceedings of MARAMI. CEUR-WS.org (2022)
4. Chakraborty, T., Srinivasan, S., Ganguly, N., Mukherjee, A., Bhowmick, S.: Permanence and community structure in complex networks. ACM TKDD 11(2), 1–34 (2016)
5. Clauset, A., Newman, M.E.J., Moore, C.: Finding community structure in very large networks. Phys. Rev. E 70(6) (2004)
6. Fionda, V., Palopoli, L., Panni, S., Rombo, S.E.: Protein-protein interaction network querying by a "focus and zoom" approach. In: BIRD, CCIS, vol. 13, pp. 331–346 (2008)
7. Fionda, V., Palopoli, L., Panni, S., Rombo, S.E.: A technique to search for functional similarities in protein-protein interaction networks. Int. J. Data Min. Bioinform. 3(4), 431–453 (2009)
8. Fionda, V., Gutierrez, C., Pirrò, G.: Building knowledge maps of web graphs. Artif. Intell. 239, 143–167 (2016)
9. Fionda, V., Pirrò, G.: Community deception or: how to stop fearing community detection algorithms. IEEE Trans. Knowl. Data Eng. 30(4), 660–673 (2018)
10. Fionda, V., Pirrò, G.: Community deception in networks: where we are and where we should go. In: Benito, R.M., Cherifi, C., Cherifi, H., Moro, E., Rocha, L.M., Sales-Pardo, M. (eds.) Complex Networks & Their Applications X: Volume 2, Proceedings of the Tenth International Conference on Complex Networks and Their Applications COMPLEX NETWORKS 2021, pp. 144–155. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-93413-2_13
11. Fionda, V., Madi, S.A., Pirrò, G.: Community deception: from undirected to directed networks. Soc. Netw. Anal. Min. 12(1) (2022)
12. Fionda, V., Pirrò, G.: Community deception in attributed networks. IEEE Trans. Comput. Soc. Syst. (2022)
13. Liu, Y., Liu, J., Zhang, Z., Zhu, L., Li, A.: REM: from structural entropy to community structure deception. Adv. Neural Inf. Process. Syst. 32 (2019)
14. Mittal, S., Sengupta, D., Chakraborty, T.: Hide and seek: outwitting community detection algorithms. IEEE Trans. Comput. Soc. Syst. 8(4), 799–808 (2021). https://doi.org/10.1109/TCSS.2021.3062711
15. Nagaraja, S.: The impact of unlinkability on adversarial community detection: effects and countermeasures. In: Atallah, M.J., Hopper, N.J. (eds.) Privacy Enhancing Technologies. LNCS, vol. 6205, pp. 253–272. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14527-8_15
16. Newman, M.E.J.: Modularity and community structure in networks. PNAS 103(23), 8577–8582 (2006)
17. Newman, M.E.J.: Finding community structure in networks using the eigenvectors of matrices. Phys. Rev. E 74(3) (2006)
18. Pons, P., Latapy, M.: Computing communities in large networks using random walks. J. Graph Algorithms Appl. 10(2), 191–218 (2006)
19. Revelle, M., Domeniconi, C., Sweeney, M., Johri, A.: Finding community topics and membership in graphs. In: Appice, A., Rodrigues, P.P., Santos Costa, V., Gama, J., Jorge, A., Soares, C. (eds.) ECML PKDD 2015. LNCS, vol. 9285, pp. 625–640. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-23525-7_38
20. Rosvall, M., Bergstrom, C.T.: Maps of random walks on complex networks reveal community structure. Proc. Natl. Acad. Sci. U.S.A. 105(4), 1118–1123 (2008)
21. Sobolevsky, S., Campari, R., Belyi, A., Ratti, C.: General optimization technique for high-quality community detection in complex networks. Phys. Rev. E 90(1) (2014)
22. Traag, V.A., Waltman, L., van Eck, N.J.: From Louvain to Leiden: guaranteeing well-connected communities. Sci. Rep. 9(1) (2019)
23. Waniek, M., Michalak, T.P., Wooldridge, M.J., Rahwan, T.: Hiding individuals and communities in a social network. Nat. Hum. Behav. 2(2), 139–147 (2018)
24. Yang, J., McAuley, J., Leskovec, J.: Community detection in networks with node attributes. In: ICDM, pp. 1151–1156 (2013)
25. Zachary, W.W.: An information flow model for conflict and fission in small groups. J. Anthropol. Res. 33, 452–473 (1977)
Crossbred Method: A New Method for Identifying Influential Spreaders from Directed Networks
Nilanjana Saha1, Amrita Namtirtha2(B), and Animesh Dutta1(B)
1 National Institute of Technology, Durgapur, West Bengal, India
[email protected], [email protected]
2 JIS College of Engineering, Kalyani, West Bengal, India
[email protected]
Abstract. Influential spreaders are used to maximize or control the spreading dynamics in a network: they act as maximizers in the case of information dissemination and as controllers in the case of epidemic spreading. In the literature, researchers have mostly focused on finding the best spreaders in undirected networks. However, the direction of edges in a spreading process has immense significance when estimating influential spreaders. This paper presents a novel method, the "crossbred method", to identify the best spreaders in a directed network. The proposed method considers the spreading properties of the directed network, taking into account two key parameters of a spreading process: a node's out-degree and the spreading reachability of an originator node. We have verified the spreading performance of the proposed method with the Directed Susceptible-Infected-Recovered (SIR) spreading epidemic model on six real networks. The outcome of the investigation demonstrates that the proposed method achieves significant improvement in spreading dynamics over existing methods for directed networks such as out-degree centrality, betweenness centrality, PageRank centrality, eigenvector centrality, cluster-rank centrality, outgoing closeness centrality, and hybrid centrality.

Keywords: Directed Networks · Directed SIR epidemic model · Influential Spreaders · Spreading Dynamics · Kendall τ correlation
1 Introduction
Spreading is a universal process in many real-world fields, such as pandemics [24], information dissemination [25], and others [9]. Influential spreaders are vital players in controlling and maximizing the spreading process. Many centrality methods have been proposed in the literature to identify the influential spreaders [21,27]. However, determining the best spreaders in the network is still an open challenge because the spreading flow direction or path is unpredictable in each diffusion process.
Many researchers have proposed various centrality methods [4,10] for identifying influential spreaders in undirected networks, where edges do not have any spreading flow direction. However, the literature indicates that an edge's flow direction is associated with spreading, and considering it in centrality or indexing methods is crucial to obtaining more prominent influential spreaders [7]. We illustrate this statement with the example in Fig. 1, which shows two types of networks: undirected and directed. If node d of the undirected network in Fig. 1(a) acts as the originator of a spreading process, the information can travel to the whole network, and the same holds for all the nodes; from an undirected network, tracking the actual receiver of that information is difficult. If node d acts as the information originator in the directed network of Fig. 1(b), the information's spreading region is limited to node e. Directed networks serve various purposes, including the measurement of social influence, tracking information propagation, and modeling disease transmission. In the literature, a few works have identified influential spreaders on directed networks, such as in-degree centrality [14,28], out-degree centrality [14,28], eigenvector centrality [5,22], cluster-rank [7], PageRank centrality [23], closeness centrality [11], incoming and outgoing closeness centrality [26], betweenness centrality [29], the m-ranking method [14], and hybrid centrality [2]. Further, we have observed that the existing centralities for directed networks are designed to find central nodes, and the definition of a central node differs for each centrality; only a few methods are designed from the viewpoint of spreading dynamics. In this paper, we propose a new "Crossbred Method" for directed networks using the parameters of spreading dynamics. The proposed method is designed using a combination of two parameters: the out-degree and the spreading reachability of a node.
Fig. 1. An undirected network (a) and a directed network (b)
Out-degree captures just the nearest-neighbor outbreaks, i.e., the outgoing edges of a node. It quantifies a node's connectivity at the direct-neighbor level, where epidemics spread more than at more distant neighborhood levels: a spreading epidemic is most likely to infect the originator's closest neighbors first. Beyond that, the spreading can reach many further neighborhood levels. To track this, we consider another parameter in the proposed method, the node's reachability in the network. The reachability of a node is the set of nodes reachable from that source node, and it contributes to understanding the flow of information or influence through a directed network. Our observation is that a node's reachability, combined with its out-degree, identifies the best influential spreaders in a directed network.
Fig. 2. A directed network whose nodes' spreading abilities are measured using the Directed SIR epidemic model with infection probability β = 0.84 and epidemic threshold βth = 0.82. The spreading capability of each node is marked by a numeric value.
Using Fig. 2, we demonstrate the importance of these spreading parameters for finding influential spreaders. We applied the Directed Susceptible-Infected-Recovered (SIR) epidemic model [15] to the schematic network of Fig. 2 to measure the spreading capability of each node. We observe that c, b, e, d, and a are high-spreading-capability nodes according to the SIR epidemic model, while g, h, j, m, and n are low-spreading-capability nodes. This is because nodes g, h, j, m, and n do not have any outgoing edges or paths. To validate the performance of the proposed method against other centrality or indexing methods, we use the Directed SIR epidemic model [15], six real directed networks, and seven comparative methods. The six real networks are Wikivote [2,8,26], Air Traffic Control [1], Wikipedia link (crh) [1], Flimtrust Trust [1], DBLP [2], and DNC emails [1]. The experimental results demonstrate that the proposed crossbred method CM performs better than existing methods such as out-degree centrality (OD) [14,28], PageRank centrality (PR) [23], cluster-rank centrality (C) [7], betweenness centrality (BC) [29], eigenvector centrality (EC) [5,22], the outgoing closeness
method (OC) [26], and the hybrid method (HC) [2], for all the directed networks used and for different infection probabilities β (β is discussed in Sect. 5).
2 Related Work
Since the last century, various works have been proposed to find influential spreaders in undirected networks. However, only a few works in the literature focus on finding influential spreaders in directed networks. In the following, we discuss the state of the art in indexing methods for directed networks. Brodka et al. [6] proposed out-degree centrality, which is measured by counting the number of outgoing edges of a node in a directed network (see Eq. 1). White et al. [29] proposed betweenness centrality on directed networks; it measures a node's influence by how many times that node acts as a bridge on the shortest paths between other pairs of nodes. Page et al. [23] proposed PageRank centrality on directed networks. PageRank measures the importance of a web page by counting the number and quality of links attached to the page; recently, this method has been used in the literature to find influential spreaders. Lü et al. [16] proposed LeaderRank, a simple variant of PageRank for directed networks. Bonacich [5] and Newman [22] proposed eigenvector centrality on directed networks; a high eigenvector centrality score indicates a strong influence over other nodes in the network. Chen et al. [7] proposed the cluster-rank method, a local indexing method. A cluster can measure the number of nearest neighbors and the number of interactions between those neighbors [7]; cluster-rank measures the influence of a node by taking into account the clustering coefficient (labeled f(ci)) and the out-degree information of the node's nearest neighbors (Ni). Closeness centrality [26] of a node is defined as the reciprocal of the sum of the shortest-path distances (d) to all other nodes in a network; the authors of [26] proposed incoming and outgoing closeness centrality, and in this article we use outgoing closeness centrality as a competitive method. Ahajjam et al. [2] proposed a hybrid method for directed networks using improved coreness centrality and eigenvector centrality. Kumar et al. [14] proposed an m-ranking method for weighted directed networks using the out-degree and weighted out-degree of a node. Further, a few methods in the literature measure nodes' influence using graph neural networks on directed networks. For example, Maurya et al. [17,18] present a GNN-based inductive framework for calculating betweenness centrality in a directed network; the approach uses constrained message passing of node features to approximate centrality, outperforming current techniques while taking less time.
From the above literature, we find that only a few centrality methods are designed around the characteristics of directed networks to measure influential spreaders; most of the above centralities are designed with a different philosophy. Further, we find that limited works are designed based on spreading properties. In this article, we develop an indexing method based on spreading behavior. The proposed crossbred method considers the direction of the network, the immediate outbreaks of the spreading process, and the maximum probable spreading reachability of a node in the network. The method is described below.
3 Methodology
This section discusses the proposed "crossbred method" (CM). The crossbred method considers two parameters: the out-degree and the reachability of a node. Before discussing the proposed method, we provide the general graph definition and discuss these two parameters as follows.

Graph Definition: We work with a directed unweighted network G = <V, E>, where V is the set of vertices and E is the set of out-edges. A network can be represented as an adjacency matrix A = (a_{ij}) ∈ R^{|V|×|V|}, where a_{ij} = 1 if an out-edge is present from node i to node j, and 0 otherwise.

Out-Degree: The out-degree of a node is a reliable parameter for identifying influential nodes [6,19]. Moreover, it plays a vital role in spreading: outbreaks at the nearest-neighbor level have a high probability of infecting those neighbors and transmitting further. The out-degree (labeled OD) of a node v is defined by

$OD(v) = \sum_{u=1}^{|V|} a_{vu}$  (1)
where node u ranges over the neighbors of node v, and a_{vu} = 1 if an out-edge is present from node v to node u.

Reachability of a Node (RP): Reachability is a popular concept in graph theory [3]. In this study, we use this concept to design the proposed method; it measures the number of nodes reachable from a source node. If a path exists from source node i to node j, then j is a reachable node of node i. In a connected, undirected network, all nodes have an equal number of reachable nodes because every pair of nodes is connected, so calculating reachability there is straightforward. For a directed network, however, reachability has immense importance in measuring a node's influence: in spreading dynamics, the flow of information or disease transmission reaches only the nodes reachable from the originator. Therefore, the influence of a node depends on its reachability. Each node's reachability in the directed network is calculated using a breadth-first search traversal algorithm; we use the descendants method of the Python NetworkX package to implement it. In Fig. 1(b), the reachability of node a is 4. The reachable paths from node a are a → b, a → b → c, a → b → d, a → b → e. Similarly, the reachability of node b is 3, of c is 2, of d is 1, and e has no reachable nodes.

Proposed Crossbred Method (CM): The node's out-degree (OD) and reachability (RP) are combined to construct the proposed crossbred method CM. The crossbred method for a node v is calculated as

$CM[v] = OD[v] \times RP[v]$  (2)
Algorithm 1. Calculate Rank for Crossbred Centrality (CM)
Require: G = (V, E)
Ensure: Rank[v, CM(v)]
1: OD ← {}
2: for v ∈ G do
3:   OD[v] ← count out-degree of node v
4: end for
5: RP ← {}
6: dictionary ← {}
7: for v ∈ G do
8:   RP[v] ← reachability of node v
9:   dictionary[v] ← length of RP
10: end for
11: CM ← {}
12: for v ∈ G do
13:   CM[v] ← OD[v] × RP[v]
14: end for
15: rank ← {}
16: for v ∈ G do
17:   rank[v] ← CM[v]
18: end for
A further detailed description of the proposed crossbred method is given in Algorithm 1, which calculates the rank of each node of a directed network G = <V, E>. It takes G as input and returns a list Rank[v, CM(v)] that holds the ranks of the nodes based on the crossbred method. Lines 1–4 calculate the out-degree of each node and store the values in the OD[] dictionary. Lines 5–10 calculate the number of nodes reachable from each source node and keep the values in the RP[] dictionary. Lines 12–14 combine the OD and RP values of each node v and store them in the dictionary CM[v]. Lines 16–18 rank the nodes by CM value and keep the result in the Rank[v, CM[v]] dictionary.
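The method maps directly onto standard graph tooling. Below is a minimal Python sketch using NetworkX, assuming the same unweighted directed graph G as above; nx.descendants plays the role of the BFS-based reachability count (the paper's descendants method), and the function name is our own illustrative choice.

```python
import networkx as nx

def crossbred_scores(G: nx.DiGraph) -> dict:
    """Rank nodes by CM[v] = OD[v] * RP[v] (Eq. 2)."""
    cm = {}
    for v in G.nodes():
        od = G.out_degree(v)              # Eq. 1: number of outgoing edges
        rp = len(nx.descendants(G, v))    # nodes reachable from v (BFS-based)
        cm[v] = od * rp
    # Sort by descending CM value to obtain the ranked list
    return dict(sorted(cm.items(), key=lambda kv: kv[1], reverse=True))

# Example consistent with the reachability values quoted for Fig. 1(b)
G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")])
print(crossbred_scores(G))  # a: OD=1, RP=4 -> CM=4; e: OD=0, RP=0 -> CM=0
```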
Running Example: We run the above algorithm on a directed network (Fig. 2) to give a glimpse of the performance of the proposed method CM. Table 1 provides two ranked lists, one from the CM method and one from the Directed SIR epidemic model. The proposed CM identifies the nodes {a, b, c, d, e} as top nodes. We run the Directed SIR epidemic model on that directed network to measure the spreading efficiency of the nodes. Table 1 shows that the identified top nodes {a, b, c, d, e} from the proposed CM are the most influential spreaders according to the Directed SIR model.

Table 1. Ranking results of SIR and CM for the directed network of Fig. 2, where β = 0.84. The β and βth have been decided as per the description of the SIR model (see Sect. 4.2). {R1, R2, ..., R|V|} represent the ranks of nodes. Note that the first few nodes are the best spreaders in this directed network.

Network   Method  R1    R2  R3  R4  R5  R6  R7    R8             R9
Figure 2  SIR     c     b   e   d   a   k   f     i, l           g, h, j, m, n
          CM      c, d  e   a   b   k   f   i, l  g, h, j, m, n  –
Complexity Analysis: The proposed crossbred method considers two parameters: the out-degree and the reachability of a node. It takes O(V + E) time to calculate the out-degrees of all nodes. We use a breadth-first search (BFS) algorithm to count the number of nodes reachable from each node, which takes O(V²). Therefore, the proposed crossbred method runs in polynomial time.
4 Experimental Setup
We have set up experiments to evaluate the proposed method's spreading performance against others. The details of the experimental design are discussed below.

Table 2. Network structural features, where |V| is the total number of vertices and |E| the total number of edges in a network, βth represents a network's epidemic threshold, β indicates the infection probability used for the experiment, λ1 is the largest eigenvalue of the adjacency matrix, and <k> is the average out-degree of the directed network.

Network                |V|    |E|     βth    β      λ1        <k>      Network Type
Wikivote               7115   100762  0.022  0.031  45.1446   14.5733  Connected
Air Traffic Control    1226   2410    0.191  0.28   5.2483    2.1330   Connected
Wikipedia links (crh)  8286   104234  0.008  0.1    130.0131  21.1767  Connected
Flimtrust Trust        874    1309    0.086  0.176  11.6680   2.1201   Connected
DBLP                   12590  49651   0.214  0.304  4.6768    3.9523   Connected
DNC emails             1891   4465    0.038  0.128  26.0080   2.9603   Connected
4.1 Datasets
We evaluated the centrality’s spreading performance in the experiments on the six real networks. These are (1) Wikivote:[2], (2) Air Traffic Control:[1], (3)Wikipedia link (crh):[1], (4)Flimtrust Trust:[1], (5) DBLP:[2], and (6) DNC emails:[1]. The detailed description of these networks is shown in Table 2. Exchanges between nodes, representing the sender-receiver relationship. 4.2
Benchmark Simulator
We have used the benchmark Directed Susceptible-Infected-Recovered (SIR) spreading epidemic model [15] to test the proposed crossbred method's spreading effectiveness. Each node in the network is in one of three states: susceptible, infected, or recovered. Initially, all nodes are in the susceptible state except one infected node, which is treated as the originator of the diffusion process. An infected node can transmit the disease to its susceptible neighbors with infection probability β, and infected nodes recover with probability γ; in our experiment, we set γ = 1. The β value is decided based on the network's epidemic threshold βth, where β > βth. A network's βth is calculated as 1/λ1 for directed networks [15], where λ1 is the largest eigenvalue of the adjacency matrix, also known as the spectral radius. The values of βth and β for the networks mentioned above are shown in Table 2. In this Directed SIR model, the spreading process is repeated 100 times for each network in our experiment.
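For illustration, the following is a minimal discrete-time sketch of the directed SIR process just described, assuming synchronous updates and γ = 1 (an infected node recovers after one step); averaging the final outbreak size over repeated runs estimates a node's spreading capability. The function name and the averaging loop are our own choices.

```python
import random
import networkx as nx

def directed_sir(G: nx.DiGraph, seed, beta: float, runs: int = 100) -> float:
    """Average outbreak size when `seed` originates the spreading (gamma = 1)."""
    total = 0
    for _ in range(runs):
        infected, recovered = {seed}, set()
        while infected:
            newly_infected = set()
            for u in infected:
                for v in G.successors(u):  # spreading follows edge direction
                    if v not in infected and v not in recovered and random.random() < beta:
                        newly_infected.add(v)
            recovered |= infected          # gamma = 1: recover after one step
            infected = newly_infected
        total += len(recovered)
    return total / runs
```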
4.3 Kendall τ Rank Correlation Coefficient
In the experiment, the Kendall τ rank correlation coefficient [12,13] assesses the degree of rank correlation between two ranking lists: the indexing method or centrality and the directed SIR epidemic model. This coefficient quantifies the similarity or dissimilarity between the two lists. The Kendall τ coefficient ranges between [−1, 1]. A high τ value indicates a strong similarity between the ranking lists, while a low τ value suggests significant dissimilarity.
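In practice this comparison is a one-liner with SciPy; the sketch below assumes the two score lists are aligned on the same node ordering, and the numbers are purely illustrative.

```python
from scipy.stats import kendalltau

# Scores for the same nodes, in the same order, from the two ranked lists
cm_scores = [8, 6, 6, 3, 0]              # illustrative crossbred (CM) values
sir_scores = [9.1, 7.2, 5.8, 3.0, 0.4]   # illustrative SIR spreading capabilities

tau, p_value = kendalltau(cm_scores, sir_scores)  # tau-b variant, handles ties
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")
```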
5 Experimental Results
This section reports the two experiments we ran to check and validate the effectiveness of the proposed method CM. Experiment 1 evaluates the spreading performance of the CM method with respect to the SIR model. Experiment 2 shows the spreading improvement percentage of the proposed method over other contemporary indexing methods.

Experiment 1: Kendall τ Correlation Between Centrality and Directed SIR Epidemic Model. To examine the effectiveness of the proposed crossbred method against other existing methods, we use the benchmark directed SIR spreading epidemic model and the Kendall τ rank correlation coefficient. In this
work, we have compared the performance of the proposed crossbred method (CM) with a few popular competitive methods on the six real directed networks mentioned above (see Table 3). The competitive methods are (1) out-degree centrality (OD) [14], (2) PageRank centrality (PR) [8,31], (3) cluster-rank centrality (C) [7], (4) betweenness centrality (BC) [10,29], (5) eigenvector centrality (EC) [5,22], (6) the outgoing closeness method (OC) [26], and (7) the hybrid method (HC) [2,4]. We implemented all these competitive methods in Python for empirical evaluation. Figure 3 shows the Kendall τ rank correlation between the ranked lists of each indexing method and the SIR model. The rank correlation τ is shown for different infection probabilities β, which are passed as input parameters to the directed SIR model. An indexing method is considered effective when β > βth and its τ value is higher than the others (Table 2). The experimental results show that the proposed CM method performs better than OD, BC, PR, C, EC, OC, and HC in terms of correlation with SIR when β > βth. For the OD and HC methods, we observe a good correlation with SIR. The spreading performance of C and PR is relatively poor for all the networks. The comparison results are also tabulated in Table 3. The table shows the average τ value (labeled σ) [20] for β ≥ βth. The average τ of an indexing method, σ(τI), is defined as

$$\sigma(\tau_I) = \frac{1}{n} \sum_{\beta_{min}=\beta_{th}+\delta}^{\beta_{max}=\beta_{th}+n\delta} \tau_{(I)} \quad (3)$$
where n = 10 is the number of incremental steps of the β value and δ = 0.02. The table shows that the average τ correlation σ(τI) of the proposed CM method with the Directed SIR model is higher than that of the other existing indexing methods.
Fig. 3. Kendall τ rank correlation between the directed SIR epidemic model and the indexing methods for different β values.
Table 3. Average Kendall τ correlation σ(τI) on the real networks.

Network               σ(τOD)  σ(τBC)  σ(τPR)  σ(τEV)  σ(τC)   σ(τOC)  σ(τHC)  σ(τCM)
Wikivote              0.7351  0.1562  0.3059  0.3834  0.1865  0.4824  0.6885  0.7613
Air Traffic Control   0.5623  0.4978  0.2685  0.4557  0.2634  0.3239  0.5762  0.7158
Wikipedia link (crh)  0.8075  0.4559  0.2488  0.6108  0.5643  0.3318  0.8280  0.8660
Flimtrust Trust       0.7507  0.5140  0.2598  0.4538  0.3891  0.3996  0.8216  0.8535
DBLP                  0.9629  0.7992  0.5180  0.4063  0.5503  0.8992  0.9381  0.9869
DNC emails            0.8129  0.4567  0.2323  0.3391  0.3498  0.7513  0.8408  0.8688
Experiment 2: Spreading Improvement Percentage of the Proposed Method Over Other Methods. To investigate the enhancement attained by the proposed method (CM) over the existing indexing methods, we calculate the improvement percentage η(%) [19,27], defined as

$$\eta(\%) = \begin{cases} \frac{\tau_{c(IP)} - \tau_{(IP)}}{\tau_{(IP)}} \times 100, & \tau_{(IP)} > 0 \\ \frac{\tau_{c(IP)} - \tau_{(IP)}}{-\tau_{(IP)}} \times 100, & \tau_{(IP)} < 0 \\ 0, & \tau_{(IP)} = 0 \end{cases} \quad (4)$$
τc(IP) denotes the Kendall τ correlation between the proposed CM method and the SIR model, and τ(IP) denotes the Kendall τ correlation between the other comparative method and the SIR model. η% denotes the spreading improvement percentage of the proposed method CM over an indexing method such as OD, BC, PR, EV, C, OC, or HC. η = 0 indicates no improvement, η > 0 indicates an improvement, and η < 0 indicates that the spreading performance of CM is poorer than that of the other method.
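Eq. 4 translates directly into a small function; a sketch follows (the function name is ours):

```python
def improvement_percentage(tau_cm: float, tau_other: float) -> float:
    """Spreading improvement eta(%) of CM over a competing method (Eq. 4)."""
    if tau_other > 0:
        return (tau_cm - tau_other) / tau_other * 100
    if tau_other < 0:
        return (tau_cm - tau_other) / -tau_other * 100
    return 0.0
```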
Fig. 4. Spreading improvement percentage η(%) of the proposed method CM over the other mentioned methods for all datasets listed in Table 2.
Figure 4 displays the improvement percentages η(%) for all real networks. From the figure, we observe an improvement percentage η(%) > 0 for the proposed method CM over all the mentioned methods on all networks. The η(%) over PR is relatively higher for Air Traffic Control, Wikipedia link (crh), Flimtrust Trust, and DNC emails. On the other hand, for the Wikivote network, η% is relatively higher over BC, and for DBLP, η% is relatively higher over EV, where β ≥ βth.
6 Conclusion
In this paper, we have proposed a crossbred method to measure the spreading efficiency of nodes in a directed network. The proposed method combines a node's out-degree with its reachability. From the experimental results, we find that the proposed method surpasses the other existing methods mentioned. We investigated the performance of the proposed method with a classical SIR epidemic model only, which is used to replicate the spreading dynamics. However, due to the diversity of network architectures and spreading dynamics, it is extremely difficult to evaluate genuine spreading dynamics across different types of networks. In the future, we will compare the proposed method with different spreading-dynamics models [14,19,30]. In addition, we will apply the proposed method to large networks and to varieties of networks such as social networks, communication networks, citation networks, and more to confirm its effectiveness.

Acknowledgment. The Visvesvaraya Ph.D. scheme, MeitY, Govt. of India, provided funding for this research project.
References

1. Directed network. http://konect.cc/networks/
2. Ahajjam, S., Badir, H.: Identification of influential spreaders in complex networks using HybridRank algorithm. Sci. Rep. 8(1), 1–10 (2018)
3. Alemany, J., Del Val, E., Alberola, J.M., García-Fornes, A.: Metrics for privacy assessment when sharing information in online social networks. IEEE Access 7, 143631–143645 (2019)
4. Bhat, N., Aggarwal, N., Kumar, S.: Identification of influential spreaders in social networks using improved hybrid rank method. Procedia Comput. Sci. 171, 662–671 (2020)
5. Bonacich, P.: Power and centrality: a family of measures. Am. J. Sociol. 92(5), 1170–1182 (1987)
6. Brodka, P., Musial, K., Kazienko, P.: A performance of centrality calculation in social networks. In: 2009 International Conference on Computational Aspects of Social Networks, pp. 24–31. IEEE (2009)
7. Chen, D.B., Gao, H., Lü, L., Zhou, T.: Identifying influential nodes in large-scale directed networks: the role of clustering. PLoS ONE 8(10), e77455 (2013)
8. Chen, G., Xu, C., Wang, J., Feng, J., Feng, J.: Nonnegative matrix factorization for link prediction in directed complex networks using PageRank and asymmetric link clustering information. Expert Syst. Appl. 113290 (2020)
9. Dorogovtsev, S.N., Goltsev, A.V., Mendes, J.F.: Critical phenomena in complex networks. Rev. Mod. Phys. 80(4), 1275 (2008)
10. Freeman, L.C.: A set of measures of centrality based on betweenness. Sociometry 35–41 (1977)
11. Freeman, L.C., et al.: Centrality in social networks: conceptual clarification. In: Social Network: Critical Concepts in Sociology, pp. 238–263. Routledge, London (2002)
12. Kendall, M.G.: The treatment of ties in ranking problems (1945)
13. Knight, W.R.: A computer method for calculating Kendall's tau with ungrouped data. J. Am. Stat. Assoc. 61(314), 436–439 (1966)
14. Kumar, R., Manuel, S.: A centrality measure for directed networks: m-ranking method. In: Social Networks and Surveillance for Society, pp. 115–128 (2019)
15. Li, C., Wang, H., Van Mieghem, P.: Epidemic threshold in directed networks. Phys. Rev. E 88(6), 062802 (2013)
16. Lü, L., Zhang, Y.C., Yeung, C.H., Zhou, T.: Leaders in social networks, the delicious case. PLoS ONE 6(6), e21202 (2011)
17. Maurya, S.K., Liu, X., Murata, T.: Fast approximations of betweenness centrality with graph neural networks. In: Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pp. 2149–2152 (2019)
18. Maurya, S.K., Liu, X., Murata, T.: Graph neural networks for fast node ranking approximation. ACM Trans. Knowl. Disc. Data 15, 1–32 (2021)
19. Namtirtha, A., Dutta, A., Dutta, B.: Identifying influential spreaders in complex networks based on kshell hybrid method. Phys. A 499, 310–324 (2018)
20. Namtirtha, A., Dutta, A., Dutta, B.: Weighted kshell degree neighborhood method: an approach independent of completeness of global network structure for identifying the influential spreaders. In: 2018 10th International Conference on Communication Systems & Networks (COMSNETS), pp. 81–88. IEEE (2018)
21. Namtirtha, A., Dutta, A., Dutta, B., Sundararajan, A., Simmhan, Y.: Best influential spreaders identification using network global structural properties. Sci. Rep. 11(1), 1–15 (2021)
22. Newman, M.: Networks: An Introduction. Oxford University Press, New York (2010)
23. Page, L., Brin, S., Motwani, R., Winograd, T.: The PageRank citation ranking: bringing order to the web. Technical report, Stanford InfoLab (1999)
24. Pastor-Satorras, R., Vespignani, A.: Epidemic spreading in scale-free networks. Phys. Rev. Lett. 86(14), 3200 (2001)
25. Pei, S., Morone, F., Makse, H.A.: Theories for influencer identification in complex networks. In: Complex Spreading Phenomena in Social Systems: Influence and Contagion in Real-World Social Networks, pp. 125–148 (2018)
26. Putman, K., Boekhout, H.D., Takes, F.W.: Fast incremental computation of harmonic closeness centrality in directed weighted networks. In: Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 1018–1025 (2019)
27. Wang, J., Hou, X., Li, K., Ding, Y.: A novel weight neighborhood centrality algorithm for identifying influential spreaders in complex networks. Phys. A 475, 88–105 (2017)
28. Wang, J., Mo, H., Wang, F., Jin, F.: Exploring the network structure and nodal centrality of China's air transport network: a complex network approach. J. Transp. Geogr. 19(4), 712–721 (2011)
29. White, D.R., Borgatti, S.P.: Betweenness centrality measures for directed graphs. Soc. Netw. 16(4), 335–346 (1994)
30. Yeruva, S., Devi, T., Reddy, Y.S.: Selection of influential spreaders in complex networks using Pareto shell decomposition. Phys. A 452, 133–144 (2016)
31. Zhang, P., Wang, T., Yan, J.: PageRank centrality and algorithms for weighted, directed networks. Physica A Stat. Mech. Appl. 586, 126438 (2022)
Examining Toxicity’s Impact on Reddit Conversations Niloofar Yousefi, Nahiyan Bin Noor(B) , Billy Spann, and Nitin Agarwal COSMOS Research Center, University of Arkansas, Little Rock, AR 72204, USA {nyousefi,nbnoor,bxspann,nxagarwal}@ualr.edu
Abstract. Amidst the growth of harmful content on social media platforms, encompassing abusive language, disrespect, and hate speech, efforts to tackle this issue persist. However, effectively preventing the impact of such content on individuals and communities remains a challenging endeavor. In this paper, we present a study using Reddit data, where we employ a tree structure to visually and comprehensively examine the impact of toxic content on communities. By applying various machine learning algorithms, we classify the toxicity of each leaf node based on its parent and grandparent nodes, as well as the overall tree’s average toxicity. Our methodology can help policymakers detect early warning signs of toxicity and redirect potentially harmful comments to less toxic directions. Our research provides a comprehensive analysis of toxicity on social media platforms, allowing for a better understanding of differences and similarities across platforms, and a deeper exploration of the impact of toxic content on individual communities. Our findings provide valuable perspectives on the prevalence and consequences of toxic content on social media platforms, and our approach can be used in future studies to provide a more nuanced understanding of this complex issue.
Keywords: Toxicity Visualization Tree · Toxicity Prediction Model · Reddit · Machine Learning

1 Introduction
Social media platforms have experienced significant growth in recent years, enabling individuals to communicate efficiently and easily across the world. However, social media platforms have also witnessed an alarming increase in harmful content, such as hate speech, abusive language, and toxic comments. Social media toxicity is defined as the use of rude or disrespectful language, including threats, insults, and identity-based hate. It also encompasses harassment and socially disruptive persuasion like misinformation, radicalization, and gender-based violence [1,2]. The increase of toxic content on social media platforms represents a significant threat to both individuals and communities in general. In this study, we focus on Reddit, one of the most popular social media platforms, to investigate
the prevalence and impact of toxic content on this platform. Even though Reddit moderators have imposed rules and regulations to remove harmful content and prevent toxicity on the platform, they are still not able to remove all toxic posts, particularly the roots of toxic comment threads. Noor [3] showed a Reddit comment thread where the platform removed one toxic comment but could not detect the preceding toxic comment or the root of that thread. Thus, our research questions revolve around three main themes: RQ1: How often do Reddit conversations end with toxic comments? RQ2: Are toxic Reddit conversations characterized by wider, deeper, and larger branches compared to non-toxic conversations? RQ3: Can we predict the toxicity of the final comment in a Reddit conversation by looking at the toxicity of the previous two comments and the overall toxicity of the conversation? To address these research questions, we employ an approach utilizing a tree structure to visually and comprehensively examine the impact of toxic content on Reddit communities. Our methodology enables policymakers to detect early warning signs of toxicity and redirect potentially harmful comments in less toxic directions, thereby mitigating the impact of toxic content on Reddit communities. Our study provides a comprehensive analysis of toxicity on social media platforms and allows for a deeper exploration of the impact of toxic content on individual communities. The findings of our study offer valuable insights into the prevalence and impact of toxic content on social media platforms, and our approach can be used in future studies to provide a more nuanced understanding of this complex issue.
2 Literature Review
Several studies have been conducted to classify toxic and non-toxic comments. To distinguish between the two, Sahana et al. [4] and Taleb et al. [5] devised methods for binary categorization of toxicity. In another study, Kumar et al. [6] recommended a variety of machine-learning techniques to categorize toxic comments. Noor et al. [7] and DiCicco et al. [8] aim to detect and binary-classify toxicity in order to evaluate the level of toxicity present in COVID-19-related discussions on different social media platforms. Saveski et al. [9] conducted a study on Twitter comments and found that toxic conversations had larger and more complex conversation trees compared to non-toxic ones; they developed a method to predict the toxicity of the next reply in a conversation with up to 70.5% accuracy, providing insights into conversation toxicity and the likelihood of a user posting a toxic reply. Coletto et al. [10] addressed the challenge of identifying controversial topics in social media by using network motifs and local patterns of user interaction; their approach achieved 85% accuracy in predicting controversy, outperforming other structural, propagation-based, and temporal network features by 7%. In another study, Backstrom et al. [11] highlighted the importance of algorithmic curation for online conversations, specifically focusing on discussion threads. They proposed and evaluated different approaches leveraging network structure and participant behavior, demonstrating that learning-based methods that
Examining Toxicity’s Impact on Reddit Conversations
403
utilize this information enhance performance in predicting thread length and user re-entry. Hessel et al. [12] concentrated on predicting controversial posts in Reddit communities by examining textual content and tree structure of initial comments. They found that incorporating discussion features enhances predictive accuracy, particularly with limited comments, and noted that conversation structure features tend to have better generalization across communities than conversation content features. Additionally, Rajadesingan et al. [13] investigated the maintenance of stable norms in political subreddits on Reddit. They identified self-selection, pre-entry learning, post-entry learning, and retention as key processes, with pre-entry learning being the main factor contributing to norm stability. Newcomers adjusted their behavior to match the toxicity level of the subreddit they joined, but these adjustments were specific to the community and did not lead to transformative changes across political subreddits. Zhang et al. [14] introduce a computational framework for analyzing public discussions on Facebook. It captures social patterns and predicts discussion trajectories, highlighting participant tendencies over content triggers.
3 Methodology
In this section, we outline the methodology adopted in this paper.

3.1 Data Collection
In this study, the data was collected from the Reddit platform, specifically from the "r/Nonewnormal" subreddit, which is known for its anti-vaccine stance. The Pushshift API was utilized to retrieve all posts and comments from the database where this information is stored. The data collection process was implemented using the Python PSAW library, as described in its documentation. The dataset consisted of 2.2 million posts and comments collected from June 2020 to August 2021. The data for this study were obtained from a subreddit that was banned in September 2021 due to its dissemination of misinformation and toxic content related to masks and vaccines [15,16].
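A minimal sketch of this retrieval step with the PSAW wrapper is shown below; the subreddit spelling and epoch bounds are illustrative, and PSAW simply forwards these filters to the Pushshift API.

```python
import datetime as dt
from psaw import PushshiftAPI

api = PushshiftAPI()
after = int(dt.datetime(2020, 6, 1).timestamp())
before = int(dt.datetime(2021, 8, 31).timestamp())

# Stream submissions and comments from the subreddit within the study window
submissions = api.search_submissions(subreddit="NoNewNormal", after=after, before=before)
comments = api.search_comments(subreddit="NoNewNormal", after=after, before=before)

posts = [s.d_ for s in submissions]    # each result exposes its raw fields as a dict
replies = [c.d_ for c in comments]
```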
3.2 Data Cleaning and Pre-processing
The collected datasets underwent a cleaning process to eliminate non-intended language. Afterward, the posts and comments datasets were merged, excluding removed or deleted posts and comments along with their respective columns. Additionally, columns containing spam or bot-generated comments were removed. For this study, posts or comments without parent or child nodes were excluded. These steps were taken to ensure that the datasets used for subsequent analyses were of high quality and dependable. As a result of these measures, the dataset was refined to 1.2 million entries after the cleaning and preprocessing steps.
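A hedged pandas sketch of these filtering steps follows, assuming the merged posts-and-comments table carries Reddit's usual body, author, and parent_id fields; the bot list is illustrative.

```python
import pandas as pd

def clean_reddit(df: pd.DataFrame) -> pd.DataFrame:
    # Drop removed or deleted posts and comments
    df = df[~df["body"].isin(["[removed]", "[deleted]"])]
    # Drop known bot/spam accounts (illustrative list)
    df = df[~df["author"].isin(["AutoModerator", "[deleted]"])]
    # Keep only entries that participate in a thread (have a parent reference)
    return df.dropna(subset=["body", "parent_id"]).reset_index(drop=True)
```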
3.3 Toxicity Detection
Detoxify [17], developed by Unitary AI, is a machine-learning model utilized in this research. It employs a Convolutional Neural Network (CNN) architecture trained with word vectors to classify text as toxic or non-toxic. The Detoxify API, accessible to developers and researchers, provides a probability score between 0 and 1 for each input text indicating the likelihood of it being toxic, with scores closer to 0 more non-toxic and scores closer to 1 more toxic. The cleaned dataset was analyzed using the Detoxify model, generating toxicity scores across seven categories: Toxicity, Severe Toxicity, Obscene, Threat, Insult, Identity Attack, and Sexually Explicit. These categories provide insights into different levels and types of toxic content in online discussions. A threshold of 0.5 was employed to determine which texts were considered toxic, following a similar approach to Saveski et al. [9]. Those authors chose a threshold of 0.531 for the toxicity score because it struck a balance between precision (correctly identifying toxic tweets) and recall (identifying all truly toxic tweets) during experiments on a development set; this threshold performed well on their test set, achieving acceptable classification accuracy, AUC, and F1 score. Therefore, in this study, texts with a toxicity score higher than 0.5 were labeled as toxic, while those with a score of 0.5 or lower were considered non-toxic. By applying the Detoxify model to the dataset, the analysis provided valuable insights into the prevalence and characteristics of toxic text in online discussions. It is worth noting that previous studies have also utilized Detoxify and similar tools to calculate toxicity scores for online discussions. Comparing these scores across different research can help identify trends and patterns in the use of toxic language online, thereby informing the development of strategies and tools aimed at fostering more civil and respectful interactions on the internet.
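Scoring text with Detoxify follows the library's published interface; the sketch below uses the 'unbiased' checkpoint, which is the Detoxify variant that reports all seven categories listed above, and applies the 0.5 threshold from this study.

```python
from detoxify import Detoxify

model = Detoxify("unbiased")  # reports toxicity, severe_toxicity, obscene,
                              # identity_attack, insult, threat, sexual_explicit
scores = model.predict("This is an example comment.")
is_toxic = scores["toxicity"] > 0.5  # threshold used in this study
print(scores, is_toxic)
```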
3.4 Conversation Tree Generation
This study focused on the "Nonewnormal" subreddit and collected 23,000 conversation trees from it. To investigate the range of toxicity levels in the dataset, the average toxicity score was calculated for each conversation tree, and the trees were divided into five categories. A subset of 100 trees was selected from each category for further analysis, resulting in a total of 500 trees. The distribution of these trees across the five categories is illustrated in Fig. 1: non-toxic, less toxic, moderately toxic, toxic, and most commented. This approach allowed for a more detailed examination of toxic conversations, taking into account the patterns and dynamics observed across different levels of toxicity within the dataset. The selection of the fifth category was based on user engagement, as varying degrees of toxic comments are prevalent within the Reddit community and are often considered a common form of discourse. We excluded the non-toxic tree category from parts of the subsequent analysis, as all of its comments and replies were non-toxic. Figure 1 depicts the distribution of all tree categories.
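Reconstructing the conversation trees from Reddit's reply links is a standard step; below is a minimal sketch with NetworkX, assuming each cleaned record carries the Pushshift id and parent_id fields plus its Detoxify toxicity score.

```python
import networkx as nx

def build_tree(rows) -> nx.DiGraph:
    """rows: iterable of dicts with 'id', 'parent_id', and 'toxicity'."""
    tree = nx.DiGraph()
    for r in rows:
        tree.add_node(r["id"], toxicity=r["toxicity"])
        if r.get("parent_id"):
            # Pushshift parent ids carry a t1_/t3_ prefix; strip it to link nodes
            tree.add_edge(r["parent_id"].split("_")[-1], r["id"])
    return tree

def average_toxicity(tree: nx.DiGraph) -> float:
    scores = [d["toxicity"] for _, d in tree.nodes(data=True) if "toxicity" in d]
    return sum(scores) / len(scores) if scores else 0.0
```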
Examining Toxicity’s Impact on Reddit Conversations
405
Fig. 1. Conversation tree generation and category distribution.
3.5 Conversation Tree Visualization
To visualize the conversation trees from Reddit, we utilized the Tulip visualization tool. Toxicity was determined based on comments with a toxicity score exceeding 0.5. In the visualization, red and blue colors were used to distinguish toxic and non-toxic comments, respectively. An example of a conversation tree from category 5, which had the highest comment count, is displayed in Fig. 2. This particular tree comprised 528 comments, with 109 of them classified as toxic. It ranked 54th among the top 100 trees in that category based on its size.
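The rendering itself was done in Tulip; an equivalent sketch with NetworkX and Matplotlib reproduces the same red/blue coloring (red for toxicity above 0.5).

```python
import matplotlib.pyplot as plt
import networkx as nx

def draw_tree(tree: nx.DiGraph) -> None:
    colors = ["red" if tree.nodes[n].get("toxicity", 0.0) > 0.5 else "blue"
              for n in tree.nodes]
    nx.draw(tree, pos=nx.spring_layout(tree, seed=42),
            node_color=colors, node_size=30, arrows=False)
    plt.show()
```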
Fig. 2. A conversation tree visualization from the most-commented tree category (ranked 54th)
4 Results and Findings
This section presents the results and findings of this paper by answering the research questions mentioned in the introduction section.
4.1 Conversations End with Toxicity in Different Categories of Tree
The study’s findings reveal that toxic conversations in online communities tend to diminish over time due to their inherently toxic nature. While these conversations initially attract participants, they eventually discourage further engagement. This pattern is particularly noticeable in communities with a significant presence of toxic behavior, such as the anti-vaccine community. To explore this phenomenon, the study analyzed conversation trees, and the results are presented in Fig. 3, with blue bar graphs indicating nontoxic trees and orange bar graphs indicating toxic trees. The analysis revealed that more than 50% of toxic conversations concluded with toxicity without any further response or engagement (Fig. 3 - second plot). In moderate toxic conversations (Figure 3 third plot), 42% still ended with toxicity, while in less toxic conversations (Fig. 3 - fourth plot), 27.4% of toxic posts were still concluded by the user. Conversely, the conversation trees with the highest number of comments (Fig. 3 first plot) predominantly exhibited non-toxic characteristics. The toxicity scores in these conversations ranged from 0.09 to 0.31, indicating that they were primarily focused on general discussions. In the non-toxic conversations category, there were no instances of toxic comments. These posts mainly consisted of inquiries or sharing personal experiences. These findings suggest that toxicity has a negative impact on the sustainability of conversations in online communities. As toxicity increases, users are more inclined to disengage, leading to premature termination of discussions. Therefore, community managers and moderators should be attentive to toxic behavior and implement appropriate measures to mitigate it. This approach is vital for fostering healthy and sustainable online conversations.
Fig. 3. Conversations that end with toxicity for each tree category
4.2 Structure of Toxic Conversations: Wider, Deeper, and Larger Branches
In this study, an analysis was conducted on a conversation tree belonging to the most-commented tree category. This particular conversation tree was ranked 76th based on the number of comments.
Examining Toxicity’s Impact on Reddit Conversations
407
Fig. 4. Class distribution and accuracy of the prediction with different machine learning models for toxic tree
It consisted of 495 comments, of which 105 were identified as toxic (highlighted by red edges in Fig. 4). The average toxicity score for this conversation tree was 0.21. Visual analysis of the conversation tree revealed that toxic conversations tend to exhibit wider, larger, and deeper branches. The red edges, indicated by black arrows in Fig. 4, demonstrate that each toxic comment tends to generate more toxic comments, expanding the conversation both in width and depth (circled in red in Fig. 4). Each branch within the conversation tree typically contains more nodes than the previous one, contributing to the overall size of the graph. Users tend to engage more with comments that receive numerous replies, which further contributes to the development of toxic conversations. To address the research question of whether toxic Reddit conversations are characterized by wider, deeper, and larger trees, this study offers visual representations of different conversation trees categorized by their toxicity levels and other attributes. The selection of each tree was determined by specific criteria, such as comment quantity or toxicity level. Structural characteristics such as the size, depth, and width of the trees and the occurrence of toxic comments within them were thoroughly examined. These visualizations clearly depict how toxic conversations progress and how they differ from non-toxic conversations. These findings can guide future research on online conversations and provide insights into how to encourage more positive and constructive interactions.
4.3 Predicting the Toxicity of the Last Message in a Reddit Conversation
In this section, we conducted a predictive analysis aimed at forecasting the toxicity level of the final reply within a conversation. The key factors taken into account were the toxicity scores associated with the two preceding replies in the conversation and the overall toxicity score encompassing the entire conversation. To perform the analysis, 100 trees from all categories were selected, excluding the non-toxic category; specifically, all leaf nodes with two initial parents were extracted. By utilizing this dataset of conversation threads and their corresponding toxicity scores, we implemented and evaluated a predictive model that aims to predict the level of toxicity in the final reply of a conversation based on the toxicity scores of the previous two replies and on the overall toxicity score of the conversation. Figure 5 illustrates the class distribution of all four tree categories. To comprehensively assess the effectiveness of various machine learning algorithms in predicting this outcome, we conducted an analysis involving methods such as Logistic Regression, Support Vector Machine, and others. Our findings, detailed in Table 1, provide an overview of the evaluation metrics associated with each algorithm's performance across the different categories. In Fig. 5, we specifically highlight the results related to toxic trees, showcasing the most favorable class distribution achieved with the AdaBoost classifier, which resulted in a 73% accuracy rate. However, it is important to recognize that even though we attained the highest accuracy for the category associated with the most-commented trees, a noteworthy challenge remains: an imbalanced class distribution. Figure 6 shows the confusion matrix for each category of trees.
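The leaf-toxicity task described above reduces to binary classification on three features; a hedged scikit-learn sketch follows, assuming a DataFrame df with one row per leaf node (the column names are our own).

```python
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# One row per leaf: toxicity of its parent, grandparent, and the tree average
X = df[["parent_toxicity", "grandparent_toxicity", "tree_avg_toxicity"]]
y = (df["leaf_toxicity"] > 0.5).astype(int)  # 1 = toxic final reply

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
clf = AdaBoostClassifier().fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```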
Fig. 5. Class distribution of all tree categories. First (most commented), second (most toxic), third (moderate toxic), and fourth (less toxic) plots
Table 1. Results of toxicity prediction for each tree category.

Tree Category   Accuracy  Precision  Recall  F1    Algorithm
Most Commented  0.82      0.86       0.82    0.75  Logistic Regression
Most Toxic      0.73      0.74       0.73    0.75  Adaboost Classifier
Moderate Toxic  0.59      0.59       0.59    0.59  Gradient Boosting
Less Toxic      0.70      0.66       0.70    0.65  Adaboost Classifier
Non Toxic       None      None       None    None  None
Examining Toxicity’s Impact on Reddit Conversations
409
The results of our analysis suggest that it is feasible to predict the toxicity level of the final reply in a conversation with a reasonable degree of accuracy. This prediction is based on considering the toxicity scores of the two preceding replies as well as the overall toxicity score of the entire conversation. This predictive approach has the potential to be a valuable tool in identifying and managing toxic discussions. By detecting toxicity within a comment thread, policymakers and platform administrators can make informed decisions to potentially halt or curtail the conversation, thereby mitigating the risk of further toxicity spreading. This proactive intervention can be a valuable step toward maintaining a healthier and more constructive online environment.
Fig. 6. Confusion matrix for the leaf node prediction from 100 toxic trees
5 Conclusion and Future Works
The prevalence and impact of harmful content on Reddit, such as abusive language and hate speech, is a concerning issue. This paper addresses the challenges in detecting and removing toxic comments and presents a tree structure approach to aid policymakers in the early detection and redirection of harmful content, mitigating its impact on Reddit communities. Our observations indicate that toxic conversations often end with harmful remarks, leading to decreased engagement from users. However, these discussions tend to generate broader, more profound, and more extensive branches, suggesting that toxic interactions can have a long-lasting impact and influence
on the behavior of other users. Moreover, our analysis shows that it is possible to predict the toxicity of the next response by considering the toxicity scores of the previous two comments and the overall context. This discovery carries significant implications for the advancement of moderation tools that can aid online platforms in effectively identifying and preventing toxic interactions. In future work, we will account for the class imbalance between toxic and non-toxic comments and apply different techniques to balance the dataset before applying machine learning algorithms. Moreover, we will also consider predicting the toxicity of any subsequent reply based on all previous replies. We will also examine which comments influence upcoming replies in a conversation thread so that they can be removed to make social media a safer place for all users.

Acknowledgement. This research is funded in part by the U.S. National Science Foundation (OIA-1946391, OIA-1920920, IIS-1636933, ACI-1429160, and IIS1110868), U.S. Office of the Under Secretary of Defense for Research and Engineering (FA9550-22-1-0332), U.S. Office of Naval Research (N00014-10-1-0091, N00014-141-0489, N00014-15-P-1187, N00014-16-1-2016, N00014-16-1-2412, N00014-17-1-2675, N00014-17-1-2605, N68335-19-C-0359, N00014-19-1-2336, N68335-20-C-0540, N0001421-1-2121, N00014-21-1-2765, N00014-22-1-2318), U.S. Air Force Research Laboratory, U.S. Army Research Office (W911NF-20-1-0262, W911NF-16-1-0189, W911NF23-1-0011), U.S. Defense Advanced Research Projects Agency (W31P4Q-17-C-0059), Arkansas Research Alliance, the Jerry L. Maulden/Entergy Endowment at the University of Arkansas at Little Rock, and the Australian Department of Defense Strategic Policy Grants Program (SPGP) (award number: 2020-106-094). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding organizations. The researchers gratefully acknowledge the support.
References

1. Cheng, J., Bernstein, M., Danescu-Niculescu-Mizil, C., Leskovec, J.: Anyone can become a troll: causes of trolling behavior in online discussions. In: Proceedings of the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2017), pp. 1217. ACM Press (2017)
2. Yousefi, N., Noor, N.B., Spann, B., Agarwal, N.: Towards developing a measure to assess contagiousness of toxic tweets. In: TrueHealth 2023, Workshop on Combating Health Misinformation for Social Wellbeing, in press (2023)
3. Noor, N.B.: Toxicity and Reddit: a study of harmful effects on user engagement and community health (Order No. 30423680). Available from Dissertations & Theses @ the University of Arkansas at Little Rock (2806341066) (2023)
4. Sahana, B.S., Sandhya, G., Tanuja, R.S., Ellur, S., Ajina, A.: Towards a safer conversation space: detection of toxic content in social media (student consortium). In: IEEE Sixth International Conference on Multimedia Big Data (BigMM), pp. 297–301. IEEE (2020)
5. Taleb, M., Hamza, A., Zouitni, M., Burmani, N., Lafkiar, S., En-Nahnahi, N.: Detection of toxicity in social media based on natural language processing methods. In: International Conference on Intelligent Systems and Computer Vision (ISCV), pp. 1–7. IEEE (2022)
Examining Toxicity’s Impact on Reddit Conversations
411
6. Kumar, A.K., Kanisha, B.: Analysis of multiple toxicities using ML algorithms to detect toxic comments. In: 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), pp. 1561–1566. IEEE (2022)
7. Noor, N.B., Yousefi, N., Spann, B., Agarwal, N.: Comparing toxicity across social media platforms for COVID-19 discourse. In: The Ninth International Conference on Human and Social Analytics (2023)
8. DiCicco, K., Noor, N.B., Yousefi, N., Spann, B., Maleki, M., Agarwal, N.: Toxicity and networks of COVID-19 discourse communities: a tale of two media platforms. In: The 3rd Workshop on Reducing Online Misinformation through Credible Information Retrieval (2023)
9. Saveski, M., Roy, B., Roy, D.: The structure of toxic conversations on Twitter. In: Proceedings of the Web Conference 2021, pp. 1086–1097 (2021)
10. Coletto, M., Garimella, K., Gionis, A., Lucchese, C.: Automatic controversy detection in social media: a content-independent motif-based approach. Online Social Networks and Media 3–4, 22–31 (2017)
11. Backstrom, L., Kleinberg, J., Lee, L., Danescu-Niculescu-Mizil, C.: Characterizing and curating conversation threads: expansion, focus, volume, re-entry. In: Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, pp. 13–22 (2013)
12. Hessel, J., Lee, L.: Something's brewing! Early prediction of controversy-causing posts from discussion features. arXiv preprint arXiv:1904.07372 (2019)
13. Rajadesingan, A., Resnick, P., Budak, C.: Quick, community-specific learning: how distinctive toxicity norms are maintained in political subreddits. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 14, pp. 557–568 (2020)
14. Zhang, J., Danescu-Niculescu-Mizil, C., Sauper, C., Taylor, S.J.: Characterizing online public discussions through patterns of participant interactions. Proceedings of the ACM on Human-Computer Interaction 2(CSCW), 1–27 (2018)
15. https://arstechnica.com/tech-policy/2021/09/reddit-bans-r-nonewnormal-and-quarantines-54-covid-denial-subreddits/
16. https://www.forbes.com/sites/carlieporterfield/2021/09/01/reddit-bans-controversial-covid-subreddit-after-users-protest-disinformation/?sh=16870c905a2a
17. https://github.com/unitaryai/detoxify
Analyzing Blogs About Uyghur Discourse Using Topic Induced Hyperlink Network

Ifeanyichukwu Umoga, Stella Mbila-Uma(B), Mustafa Alassad, and Nitin Agarwal

COSMOS Research Center, University of Arkansas, Little Rock, AR, USA
{ijumoga,smbilauma,mmalassad,nxagarwal}@ualr.edu
http://www.cosmos.ualr.edu
Abstract. The Uyghurs are a Turkic ethnic group descending from the general region of Central and East Asia. In recent years, the Chinese government has been accused of violating the human rights of Uyghur Muslims by imprisoning them in camps and subjecting them to forced labor, forced abortion, and other harmful treatment. Aside from Western news outlets, much information about the Uyghur issue is available on blogs and social media. The information found in these blogs provides an alternative lens for analyzing the Uyghur discourse from non-traditional media outlets. Existing studies have analyzed different contents of blogs, such as the authors and comments. This research aims to employ a systematic approach to analyze information spread through hyperlinks from Uyghur-related blog posts in the Indo-Pacific region. To achieve this, we analyzed 318 blog posts and 5,598 hyperlinks published from January 2019 to September 2022. The analysis included topic modeling and toxicity analysis to identify the main topics in each blog post and their toxicity trends. The topics generated from the topic modeling were mapped onto a network of hyperlinks to develop a topic-induced network. This network helped examine, analyze, and visualize the network of blog hyperlinks, which further revealed influential blog conduits raising awareness about the oppression of the Uyghur community.

Keywords: Uyghur · blogs · Hyperlinks · Indo-Pacific · Topic-induced hyperlink network

1 Introduction
The Uyghurs, a Turkic-speaking ethnic group concentrated in China's Xinjiang region, have faced significant challenges [1,2]. In 2014, over a million Uyghur Muslims were reportedly detained in reeducation centers by Chinese authorities [3]. The Chinese government contends that these facilities, established under CCP leader Xi Jinping, aim to combat religious extremism and terrorism [4]. However, these actions have drawn criticism from numerous nations and human rights organizations for alleged human rights abuses [1,2]. Social media platforms, particularly blogs, have played a crucial role in shedding light on the
persecution of Uyghur Muslims by the Chinese government [5]. Blog discourse on this issue has grown in volume and visibility as it has gained international attention. This has led to a more diverse range of voices and perspectives in the blogosphere, amplifying the experiences of Uyghur individuals and communities. Simultaneously, the Chinese government has attempted to manipulate online discussions through censorship and propaganda, making the blog discourse complex and multifaceted, reflecting broader political, social, and cultural dynamics. Previous works on the influence of blogs in diffusing information have focused on various aspects of the blogosphere, including the authors, the topics discussed, the comments, and the sentiments expressed in blog posts [7,8]. Kayes et al. attempted to identify influential bloggers in a blogging community using network centrality metrics [8]. Jenkins et al. performed a content analysis of comments on parenting blog posts related to vaccination [7]. Hyperlinks are vital for disseminating information online and illustrating connections between different pieces of content [9]. Understanding their role can provide insights into how blog posts fit into larger networks of information and ideas. This study aims to create a methodical approach to detecting the spread of information by analyzing the hyperlinks from blog posts on the Uyghur discourse in the Indo-Pacific region. To achieve this, we use a variety of computational methods, such as topic modeling and toxicity analysis, to analyze this discourse. We extend this analysis by creating a topic-induced hyperlink network graph to create a pathway to visualize and investigate the network of blog hyperlinks. The study contributes to the field of information science and media studies, especially in analyzing computer-mediated communication. The remaining sections of this paper are arranged as follows: Sect. 2 presents a review of related works. Section 3 presents the methodology of our study. In Sect. 4, we present the results of our analysis. In Sect. 5, we discuss the implications of our results and what further work could be done on this research.
2 Literature Review
Several researchers used a qualitative approach to analyze various contents in blog posts. This section reviews these related research works and highlights the methodologies used. Shaik et al. researched developing situational awareness from the blogosphere [10]. The authors focused on blogs about defense, trade, diplomacy, and other topics associated with China and Australia [10]. The researchers utilized LDA topic modeling to identify various topics in the blog posts. A network analysis of the blogs was conducted to identify influential bloggers and which bloggers contributed to similar topics [10]. The influential bloggers in their research were the bloggers that contributed to multiple topics [10]. In conclusion, they observed that the influential bloggers were Australian, and the bloggers' views on defense-related topics were pro-Australia [10]. Park et al. attempted to utilize the knowledge and procedures of social network analysis and content analysis to discover hidden patterns in the emerging political blogosphere in South Korea [11]. The researchers conducted a hyperlink
network analysis of citizen blogs in South Korean politics [11]. To accomplish this goal, the researchers extracted personal blogs of elected members of South Korea's National Party from their official websites. In addition, they studied the characteristics of blogs frequently linked by politicians [11]. Their research discovered that the preferred hyperlink targets on politician blogs are most likely those belonging to citizens with strong political opinions. Similarly, Raisi et al. performed a hyperlink network analysis on different tourism sites in Western Australia [12]. This research discovered low connectivity in the network, and the communities formed were primarily based on geographical location [12]. Our work differs from existing works on blog networks by creating a topic-induced hyperlink network graph to investigate hyperlinks in blog posts about Uyghur Muslims. To accomplish this, we utilize various social computing techniques, including topic modeling, toxicity analysis, and network analysis, to draw insights into the subject matter.
3 Methodology
This section discusses the data used for this research and the methodology employed.

3.1 Data Collection
The data was gathered using a set of Indonesian and English keywords relating to the Uyghur discourse around the Indo-Pacific region. The blog tracker tool described by Hussain et al. was used to collect the data for this research [13]. The data contained 318 blog post URLs from January 2019 to September 2022. Each blog post URL was exported to a webpage crawling tool called Diffbot [14]. Diffbot utilizes machine learning algorithms to extract relevant data from web pages and return them in a structured format [14]. The data extracted from Diffbot includes the blog post and its hyperlinks. Table 1 shows a summary of the data collected.

Table 1. Blog data statistics

Data Elements      All Data
Total Blog Posts   318
Total Hyperlinks   5,598
3.2 Topic Modeling
Topic modeling is a key research method in machine learning and natural language processing for extracting information [15]. Topic modeling algorithms
allow for the automatic organization, search, comprehension, and summarization of large and unstructured collections of documents [15]. In this research, the documents are the collected blog posts. The term 'topics' refers to the unknown, variable relationships between words in a vocabulary and their occurrence in blog posts [15]. A blog post covers a variety of topics. Topic modeling defines the extent to which each blog post belongs to a topic group: it assigns a value ranging from 0 to 1 to that blog post in relation to each topic group [15]. Finally, selecting the highest-distributed topics identifies the topic group(s) to which each blog post belongs [15]. For this research, we used the LDA (Latent Dirichlet Allocation) topic modeling method because it can handle long documents [32]. Before training the model, some text pre-processing steps were carried out, such as the removal of both English and Indonesian stop words, to ensure the accuracy of the results [31].
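As a concrete illustration of this step, the sketch below builds a five-topic LDA model with gensim after removing English and Indonesian stop words; the tooling, the whitespace tokenizer, and the placeholder `blog_posts` list are our assumptions, not the authors' exact pipeline.

```python
import nltk
from nltk.corpus import stopwords
from gensim import corpora
from gensim.models import LdaModel

nltk.download("stopwords", quiet=True)
stop_words = set(stopwords.words("english")) | set(stopwords.words("indonesian"))

blog_posts = ["uyghur muslims detained in camps", "tiongkok dan uyghur"]  # placeholder texts

def preprocess(post):
    # Lowercase, keep alphabetic tokens, drop English and Indonesian stop words.
    return [w for w in post.lower().split() if w.isalpha() and w not in stop_words]

docs = [preprocess(p) for p in blog_posts]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, num_topics=5, id2word=dictionary, random_state=0)

# Per-post topic distribution; the highest-weighted topic labels the post.
post_topics = [max(lda.get_document_topics(bow), key=lambda t: t[1])[0]
               for bow in corpus]
```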
3.3 Toxicity
The term "online toxicity" refers to arrogant, violent, and degrading attitudes and behavior displayed on online platforms [16]. It can range from excessive use of profanity to outright hate speech. The Uyghur Muslim community has been subject to significant online toxicity, particularly in the form of hate speech [5]. Therefore, it is pertinent that we detect the toxicity level of each blog post around the Uyghur discourse. The Detoxify toxicity model created by Unitary AI was used to calculate the toxicity of each blog post [17]. Detoxify is multilingual and returns a probability score between 0 and 1 [17]. Finally, we used the topics generated from the LDA topic model to plot a toxicity trend graph across the period of data collection.
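The published Python interface of Detoxify [17] can be applied directly to the posts; choosing the multilingual checkpoint is our assumption, made because the corpus mixes English and Indonesian.

```python
from detoxify import Detoxify

# predict() accepts a string or a list of strings and returns a dict of
# per-label probability lists ("toxicity", "severe_toxicity", "obscene", ...).
scores = Detoxify("multilingual").predict(blog_posts)  # blog_posts as in the earlier sketch
toxicity = scores["toxicity"]  # one probability in [0, 1] per blog post
```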
3.4 Network Analysis
Network analysis focuses on studying the relationships, patterns, linkages, and interactions among connected entities [18]. A network is made up of nodes and edges [18]. Nodes depict the entities being analyzed, whereas edges represent the connections between the nodes [18]. To analyze the information spread through hyperlinks within blog posts, we developed a topic-induced hyperlink network graph. The topic-induced hyperlink graph is a novel technique for visualizing and investigating hyperlinks in a blog network. The graph is a directed graph that consists of the highest-distributed topic in each blog post as the source node, the hyperlinks extracted from each blog post as the target nodes, and the number of hyperlink occurrences in a particular topic as the edge weight. Gephi, an open-source network analysis tool, was used for the network visualization. Furthermore, we computed the modularity measure to identify distinct communities within the network.
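A minimal sketch of this construction with networkx is given below; `post_topics` and `post_hyperlinks` are assumed per-post outputs of the earlier steps, and the community step uses networkx's greedy modularity algorithm on the undirected projection as a stand-in for the modularity computation done in Gephi.

```python
from collections import Counter
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

post_topics = [1, 3]                                       # from the LDA step above
post_hyperlinks = [["https://example.org/a"],
                   ["https://example.org/a", "https://example.org/b"]]  # placeholder

# Edge weight = number of occurrences of a hyperlink under a given topic.
edge_weights = Counter()
for topic, links in zip(post_topics, post_hyperlinks):
    for link in links:
        edge_weights[(f"topic_{topic}", link)] += 1

G = nx.DiGraph()
for (topic_node, link_node), w in edge_weights.items():
    G.add_edge(topic_node, link_node, weight=w)

communities = greedy_modularity_communities(G.to_undirected(), weight="weight")
```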
3.5 Morality Analysis
Morality assessment was carried out to further analyze the results obtained from the above analysis to gain better insight. As shown in Table 2, according to the
theory, there exist six moral elements [30]. Each moral element has a virtue and a vice. This virtue-vice split yields 12 moral elements, giving a more detailed breakdown of the results. The Empathetic Moral Foundations Dictionary (EMFD) package was utilized to carry out this analysis. The EMFD package is a sentiment-analysis tool that specifically focuses on moral foundations. The distribution of morals in each blog post was calculated and visualized as shown in Fig. 4.

Table 2. Moral foundation theory

Virtue  Care  Fairness  Liberty     Loyalty   Authority   Sanctity
Vice    Harm  Cheating  Oppression  Betrayal  Subversion  Degradation

4 Results
This section presents the findings of the blog analysis pertaining to the Uyghur discourse.

4.1 Topic Modeling
The Latent Dirichlet Allocation algorithm, a statistical method for topic modeling, was used to summarize the blog posts into five topic groups [15]. Table 2 displays the top seven words in each topic group. From the top words in Table 2, we observed that Topics 1 and 5 were not relevant to the Uyghur discourse. Topic 1 focused on the 2020 oil price war and Topic 5 centered on Russia’s deployment of nuclear weapons in Syria [19,20]. On the other hand, Topics 2, 3, and 4 had highly salient words that were relevant. Topic 2, with top words like ‘Uyghurs’, ‘Islam’, ‘Uyghur’, ‘Uighur’, ‘Muslims’, ‘Western’, and ‘camps’, appears to focus on the illegal detainment of Uyghur Muslims in camps and how the vast majority of people detained in the camps were never charged with a crime [5]. Further exploration of the blogs revealed that some human rights organizations claim that many Uyghur Muslims have been labeled as extremists simply for practicing their religion [21]. Topic 3 contained top words like ‘Tiongkok’, ’Uyghur’, ‘Cina’, ‘Israel’, ‘gaza’, ‘israeli’ and ‘amerika’. ‘Tiongkok’ and ‘Cina’ are both Indonesian words that can be translated to ‘China’, while ‘Amerika’ stands for ‘America’. We further researched blog posts found in Topic 3 to gain insights into their discourse and understand the topic better. The majority of the blog posts compared the oppression of the Uyghur Muslims in China to Israel’s treatment of the Palestinians in Gaza [22]. For example, a blog post opined that “Israel’s treatment of the Palestinians in the occupied territories is terrible and by now an overt process of ethnic cleansing. But Palestinians are not detained in reeducation camps and are not subject to the same degree of relentless surveillance and
cultural erasure. The Uighurs in Xinjiang have it worse." Furthermore, Topic 4, with words such as 'Uyghur', 'Turkey', 'Military', 'Turkish', 'Weapons', 'Year', and 'Erdogan', discusses the Turkish Prime Minister, Erdogan, purchasing anti-missile systems from China while keeping silent on the oppression of the Uyghur Muslims.

4.2 Toxicity Analysis
We performed toxicity analysis to gain further insights into the toxicity trend for each topic generated from the LDA output. The final toxicity scores were averaged monthly and then multiplied by topic distribution scores to calculate toxicity per topic. Figure 1 shows the toxicity trend from January 2019 to September 2022. From the toxicity trend, we noticed the toxicity for each topic group was generally low. However, a noticeable spike was observed in Topic 3 and Topic 4, which occurred in April and August 2019, respectively. During this time, Uyghur Muslims detained in Chinese camps were forced to consume pork during the holy month of Ramadan, which is prohibited in Islam [23].
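One possible realization of this calculation (the variable names and the pandas pipeline are our assumptions): multiply each post's topic-distribution weights by its toxicity score and average by publication month.

```python
import pandas as pd

# Placeholder per-post inputs from the earlier steps.
weights = [[0.7, 0.1, 0.1, 0.05, 0.05], [0.2, 0.5, 0.1, 0.1, 0.1]]  # LDA topic distributions
toxicity = [0.02, 0.31]                                             # Detoxify scores
dates = ["2019-04-02", "2019-08-15"]

topic_cols = [f"topic_{k}" for k in range(5)]
df = pd.DataFrame(weights, columns=topic_cols)
df["toxicity"] = toxicity
df["month"] = pd.to_datetime(dates).to_period("M")

# Monthly mean of (topic weight x toxicity score) gives the per-topic trend.
trend = df[topic_cols].mul(df["toxicity"], axis=0).groupby(df["month"]).mean()
```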
Fig. 1. Uyghur Toxicity Trend Analysis
4.3 Network Analysis
These results and analyses show how the topic-induced hyperlink network presents ways to understand information spread through hyperlinks in the blogosphere. To analyze the spread of information through hyperlinks in blog posts around the Uyghur discourse, we developed a topic-induced hyperlink graph. To accomplish this, we extracted the highest-distributed topic for each blog post and mapped the blog post's hyperlinks to the topic. As mentioned earlier, Topics 1 and 5 were not related to the Uyghur discourse, and as such, we filtered out the blog posts with the highest distribution of these topics and analyzed the blog posts of the remaining topics: Topics 2, 3, and 4. In the topic-induced hyperlink graph, the topic represents the source, the hyperlinks extracted from each blog post represent the targets, and the number of occurrences of a hyperlink in a specific topic represents the edge weight. To further analyze hyperlinks in the blog network, we identified the hyperlinks with the most occurrences in a specific topic and called them 'core links'; Table 3 shows the core links and their topics.
Table 3. Core Links and their Topics

Core Link | Topic | Number of Occurrences
https://1.bp.blogspot.com/-HkjkRClT2jQ/Xhi8JxsxAI/AAAAAAAAszw/fP7IwA7ilBUue9wzE1DZuNJ8Wmez64zQCLcBGAsYHQ/s1600/Derita%2BMihrigul%2BTursun.jpg | 4 | 6
https://1.bp.blogspot.com/-q-P0mvkHKBg/XhjMGYAiII/AAAAAAAAsz0/wuC8AxiSJgAVXxXflQNaIIdN8J3fRndHgCLcBGAsYHQ/s1600/Keluarga%2BMihrigul%2BTursun.jpg | 4 | 6
https://www.unz.com/article/uyghurs-political-islam-the-bri/ | 3 | 28
https://niqnaq.files.wordpress.com/2020/12/4e44a499-8f91-49f4-a222-0bd4b38086d4.png | 3 | 12
https://www.al-monitor.com/pulse/originals/2020/03/pentagon-us-not-retaliate-turkey-syria.html | 2 | 18
https://fpif.org/the-world-needs-a-water-treaty/ | 2 | 14
The core links were further investigated to determine whether they were relevant and to provide more insight into the Uyghur discourse. Our findings showed that some of the hyperlinks had no relevance to the Uyghur discourse. For example, the core links in Topic 2 were hyperlinks redirecting to articles that focused on the US's decision not to take action against Syria for its attack on Turkish soldiers [24] and on the need for a global water agreement [25]. Meanwhile, some other hyperlinks were related to the Uyghur discourse. In Topic 3, one of the core links was a link to an article published in The Unz Review about the Uyghurs, Political Islam, and the Belt and Road Initiative (BRI) [26]. Also, the core links in Topic 4 stood out in our research. One of the links contained an image (see Fig. 2) of a woman wearing a hijab holding two kids. The second link contained a cartoonish image (see Fig. 2) of a woman wearing a hijab and shackles on her hands, with the caption 'What has happened to me: A testimony of an Uyghur woman'. We further researched the relationship between these two images and their relevance to the Uyghur discourse. From this research, we discovered the two images were related to Mehrigul Tursun, a reported former Uyghur detainee from Xinjiang [27]. The cartoonish image was from a Japanese comic written by Tomomi Shimizu about the testimony of Mehrigul Tursun [27]. Mehrigul Tursun said that she was brought into Chinese authorities' custody on numerous occasions, was tortured while detained in one of the "re-education centers" for Uyghurs, and lost one of her sons while in their custody in 2015 [28]. Her testimony received extensive media coverage worldwide [28]. However, the Chinese Government denied Tursun's allegations and gave its own version of what happened, claiming that Mehrigul Tursun was never jailed
or put in any camp [28].

Fig. 2. Mehrigul Tursun with her two surviving children and the front page of a Japanese comic about her

From the topic-induced hyperlink network, shown in
Fig. 3, we discovered that most nodes in the green community were hyperlinks extracted from blogs with Topic 4 as the highest-distributed topic for that community. For the purple and orange communities, which represent Topic 2 and Topic 3, respectively, most of the links were hyperlinks extracted from blog posts for the highest-distributed topics within those communities. We identified the nodes bridging the purple and orange communities in the graph. These bridging nodes represented hyperlinks linking to articles about the United Nations' failure to report the Chinese government's internment camps for Uyghur Muslims and about how some Uyghur refugees entered Turkey using forged passports [29,30]. The result of topic modeling is a set of topics, each representing a cluster of words and phrases frequently occurring together in the text. In this case, the topic modeling was performed on a text data set, and the results revealed that three topics (Topics 2, 3, and 4) were relevant to the Uyghur discourse. This suggests that these topics were generated based on the content that was most closely related to the Uyghur community, and therefore these topics likely contain information specific to this group. Further analysis and interpretation of these topics could provide valuable insights into the Uyghur discourse. To gain additional insights into the discourse of the topics, we evaluated the toxicity using the Detoxify package. This resulted in an overall low toxicity score for these topics, which implies that these topics contained limited hateful, abusive, and toxic content. In analyzing the spread of information via blogs, we graphed the topic-induced hyperlink network and discovered hyperlinks that linked to the testimony of a Uyghur detainee.
Morality Analysis
In our pursuit of deeper insights into our analysis, we conducted a moral assessment that remarkably aligns with the presence of hateful, abusive, and toxic con-
420
I. Umoga et al.
Fig. 3. Topic-Hyperlink Network for Uyghur Blog Data
Fig. 4. Uyghur Toxicity Trend analysis
tent, as visually represented in the graph below in Fig. 4. Evidently, the graph underscores “Harm” as the most prevalent trait. This strongly supports our point that the using the Topic-induced hyperlinks serves as an effective method for comprehending the dissemination of information within the blogosphere. 4.5
5 Conclusion
In this study, we attempted to analyze the spread of information through hyperlinks in blog posts. We developed a methodical approach to analyze the hyperlinks of blog posts about the Uyghur discourse. Topic modeling was utilized to understand and summarize the blog posts. Based on the keywords of the computed topics, we found that only three of the five topics were related to the Uyghur discourse, and as such, we filtered out the blog posts of the two outlier topics. This method can also be applied to a larger number of blog posts. We mapped the identified topics onto a network of hyperlinks to create the topic-induced hyperlink network. From this network, we noticed that some of the links were relevant to our study and some were not. We further researched the relevant links, which provided deeper insight into the Uyghur discourse. The links contained information that was covered by credible news outlets, which attested to the validity of the information. The result of the analysis indicates that most of the hyperlinks in the data set focused on the oppression of the Uyghur Muslims by the Chinese Government. Future work could improve this research by expanding the keywords to capture more blog posts about the Uyghur discourse. Additional methods could also be utilized to analyze the entities within the blog network.

Acknowledgment. This research is funded in part by the U.S. National Science Foundation (OIA-1946391, OIA-1920920, IIS-1636933, ACI-1429160, and IIS-1110868), U.S. Office of the Under Secretary of Defense for Research and Engineering (FA9550-22-1-0332), U.S. Office of Naval Research (N00014-10-1-0091, N00014-14-1-0489, N00014-15-P-1187, N00014-16-1-2016, N00014-16-1-2412, N00014-17-1-2675, N00014-17-1-2605, N68335-19-C-0359, N00014-19-1-2336, N68335-20-C-0540, N00014-21-1-2121, N00014-21-1-2765, N00014-22-1-2318), U.S. Air Force Research Laboratory, U.S. Army Research Office (W911NF-20-1-0262, W911NF-16-1-0189, W911NF-23-1-0011), U.S. Defense Advanced Research Projects Agency (W31P4Q-17-C-0059), Arkansas Research Alliance, the Jerry L. Maulden/Entergy Endowment at the University of Arkansas at Little Rock, and the Australian Department of Defense Strategic Policy Grants Program (SPGP) (award number: 2020-106-094). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding organizations. The researchers gratefully acknowledge the support.
References

1. Roberts, S.R.: The War on the Uyghurs: China's Internal Campaign Against a Muslim Minority. Princeton University Press, Princeton (2020)
2. Smith, T.F., Waterman, M.S.: Identification of common molecular subsequences. J. Molec. Biol. 147(1), 195–197 (1981)
3. Rakhima, A., Satyawati, N.G.A.D.: Gross violations of human rights veiled within Xinjiang political reeducation camps. Kertha Patrika 41, 1 (2019). https://doi.org/10.24843/KP.2019.v41.i01.p01
4. Clarke, M.: China and the Uyghurs: the 'Palestinization' of Xinjiang?. Middle East Policy 22 (2015). https://doi.org/10.1111/mepo.12148
5. Clothey, R.A., Koku, E.F., Erkin, E., Emat, H.: A voice for the voiceless: online social activism in Uyghur language blogs and state control of the Internet in China. Inf. Commun. Soc. 19(6), 858–874 (2016). https://doi.org/10.1080/1369118X.2015.1061577
6. Di Sotto, S., Viviani, M.: Health misinformation detection in the social web: an overview and a data science approach. Int. J. Environ. Res. Public Health 19(4), 2173 (2022). https://doi.org/10.3390/ijerph19042173
7. Jenkins, M.C., Moreno, M.A.: Vaccination discussion among parents on social media: a content analysis of comments on parenting blogs. J. Health Commun. 25(3), 232–242 (2020). https://doi.org/10.1080/10810730.2020.1737761
8. Akinnubi, A., Agarwal, N., Stine, Z., Oyedotun, S.: Analyzing online opinions and influence campaigns on blogs using BlogTracker. In: Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 309–312. ACM, Virtual Event Netherlands (2021). https://doi.org/10.1145/3487351.3489483
9. Sehgal, V., Peshin, A., Afroz, S., Farid, H.: Mutual hyperlinking among misinformation peddlers (2021). http://arxiv.org/abs/2104.11694. Accessed 30 Jan 2023
10. Shaik, M., Hussain, M., Stine, Z., Agarwal, N.: Developing situational awareness from blogosphere: an Australian case study (2021)
11. Park, H.W., Jankowski, N.W.: A hyperlink network analysis of citizen blogs in South Korean politics. Javnost - The Public 15(2), 57–74 (2008). https://doi.org/10.1080/13183222.2008.11008970
12. Raisi, H., Baggio, R., Barratt-Pugh, L., Willson, G.: Hyperlink network analysis of a tourism destination. J. Travel Res. 57(5), 671–686 (2018). https://doi.org/10.1177/0047287517708256
13. Akinnubi, A., Agarwal, N.: Blog data analytics using blogtrackers (2023)
14. Ortiz Costa, J., Kulkarni, A.: Leveraging knowledge graph for open-domain question answering. In: 2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI), pp. 389–394 (2018). https://doi.org/10.1109/WI.2018.00-63
15. Tong, Z., Zhang, H.: A text mining research based on LDA topic modelling. In: Computer Science & Information Technology (CS & IT), pp. 201–210 (2016). https://doi.org/10.5121/csit.2016.60616
16. Marcoux, T., Obadimu, A., Agarwal, N.: Dynamics of online toxicity in the Asia-Pacific region. In: Trends and Applications in Knowledge Discovery and Data Mining, Cham, pp. 80–87 (2020). https://doi.org/10.1007/978-3-030-60470-7_9
17. Hanu, L.: How well can we detoxify comments online? (2020). https://medium.com/unitary/how-well-can-we-detoxify-comments-online-bfffe5f716d7. Accessed 30 Jan 2023
18. Borgatti, S.P., Mehra, A., Brass, D.J., Labianca, G.: Network analysis in the social sciences. Science 323(5916), 892–895 (2009). https://doi.org/10.1126/science.1165821
19. Maizland, L.: China's repression of Uyghurs in Xinjiang. Council on Foreign Relations. https://www.cfr.org/backgrounder/china-xinjiang-uyghurs-muslims-repression-genocide-human-rights. Accessed 30 Jan 2023
20. Ibrahim, A.: Muslim leaders are betraying the Uighurs - Foreign Policy. https://foreignpolicy.com/2019/07/08/muslim-leaders-are-betraying-the-uighurs/. Accessed 30 Jan 2023
21. Wei, Y.: China's Muslims forced to eat pork during Ramadan (2020). https://bitterwinter.org/chinas-muslims-forced-to-eat-pork-during-ramadan/. Accessed 30 Jan 2023
22. Detsch, J.: Pentagon says US will not retaliate for Turkey in Syria - Al-Monitor: Independent, trusted coverage of the Middle East. https://www.al-monitor.com/originals/2020/03/pentagon-us-not-retaliate-turkey-syria.html. Accessed 30 Jan 2023
23. Certo, P.: The world needs a water treaty. Foreign Policy In Focus (2019). https://fpif.org/the-world-needs-a-water-treaty/. Accessed 30 Jan 2023
24. Muslim leaders are betraying the Uighurs - Foreign Policy. https://foreignpolicy.com/2019/07/08/muslim-leaders-are-betraying-the-uighurs/. Accessed 30 Jan 2023
25. Roberts, G.: Uyghurs, Political Islam & the BRI. The Unz Review (2019). https://www.unz.com/article/uyghurs-political-islam-the-bri/. Accessed 30 Jan 2023
26. Shimizu, T.: What has happened to me: a testimony of a Uyghur woman. https://note.com/tomomishimizu/n/n4cade047aed8. Accessed 6 Oct 2023
27. Watson, I., Westcott, B.: Uyghur refugee tells of death and fear inside China's Xinjiang camps — CNN. https://www.cnn.com/2019/01/18/asia/uyghur-china-detention-center-intl/index.html. Accessed 31 Jan 2023
28. Lee, P.: Deeper and darker in the Uyghur-Turkish passport mystery. The Unz Review (2015). https://www.unz.com/plee/deeper-and-darker-in-the-uyghur-turkish-passport-mystery/. Accessed 07 Oct 2023
29. Norton, B., Singh, A.: No, the UN did not report China has 'massive internment camps' for Uighur Muslims. The Grayzone (2018). http://thegrayzone.com/2018/08/23/un-did-not-report-china-internment-camps-uighur-muslims/. Accessed 31 Jan 2023
30. Mbila-Uma, S., Umoga, I., Alassad, M., Agarwal, N.: Conducting morality and emotion analysis on blog discourse. In: Takada, H., Marutschke, D.M., Alvarez, C., Inoue, T., Hayashi, Y., Hernandez-Leo, D. (eds.) International Conference on Collaboration Technologies, pp. 185–192. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-42141-9_15
31. Nwana, L., Onyepunuka, U., Alassad, M., Agarwal, N.: Computational analysis of the Belt and Road Initiative (BRI) discourse on Indonesian Twitter. In: Takada, H., Marutschke, D.M., Alvarez, C., Inoue, T., Hayashi, Y., Hernandez-Leo, D. (eds.) International Conference on Collaboration Technologies, pp. 176–184. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-42141-9_14
32. Shajari, S., Agarwal, N., Alassad, M.: Commenter behavior characterization on YouTube channels. arXiv preprint arXiv:2304.07681 (2023)
Synchronization
Global Synchronization Measure Applied to Brain Signals Data

Xhilda Dhamo1(B), Eglantina Kalluçi1, Gérard Dray2, Coralie Reveille3, Arnisa Sokoli1, Stephane Perrey3, Gregoire Bosselut3, and Stefan Janaqi2

1 Department of Applied Mathematics, Faculty of Natural Sciences, University of Tirana, Tirana, Albania
[email protected]
2 EuroMov Digital Health in Motion, University of Montpellier, IMT Mines Ales, Ales, France
3 EuroMov Digital Health in Motion, University of Montpellier, IMT Mines Ales, Montpellier, France
Abstract. We investigate a method to assess brain synchronization in individuals who fulfill a cooperation task. Our input is a pair of signals from functional Near-Infrared Spectroscopy Data Acquisition and Pre-processing technology, which is used to capture the brain activity of an individual by measuring the oxyhemoglobin (HbO) level. Then, we use the visibility graph approach to map each HbO signal into a network. We estimate the signal synchronization by studying a global measure, related to the eigenvalues of the Laplacian matrix, in each constructed visibility graph. We consider the autonomous evolution of one isolated node to be a Rössler function. The synchronization of signals can then be characterized by a small number of parameters that could be employed to classify the sources of the signal. Unlike prior research in this area, our aim is to examine the circumstances in which synchronization occurs in various individuals and within different hemispheres of the prefrontal cortices of the same individual. Experimental results show that the conditions for synchronization vary across individuals, and they differ even between the two prefrontal cortical hemispheres of the same individual. Keywords: visibility graph · Laplacian · synchronization measure · brain synchronization
1 Introduction

Network science has emerged as an efficient tool for identifying, representing and predicting distinct collective phenomena in numerous complex systems (Newman 2018, Havlin et al. 2012, Chen et al. 2015, Arenas et al. 2008, Boccaletti et al. 2006, Barrat et al. 2008). Synchronization is one of the hottest topics in dynamic processes on networks (Ding et al. 2020). It is defined as a process wherein many systems adjust a given property of their motion due to a suitable coupling configuration, or to an external forcing (Boccaletti et al. 2006). Synchronization processes are everywhere in nature and
they have been applied to a wide variety of areas such as biology, ecology, climatology, sociology, technology, smart grids, secure communication, neurology and many more (Pikovsky et al. 2001, Osipov et al. 2007, Arenas et al. 2008, Dörfler & Bullo 2014, Jiruska et al. 2013). Visibility graphs present a powerful technique to study time series in the context of complex networks (Lacasa et al. 2008). These kinds of graphs were established as a method to map time series into networks (Lacasa et al. 2008) with the purpose of making use of the tools of network science to describe the structure and dynamics of time series. Explorations of visibility graphs have mainly focused on two directions: (i) canonical dynamics such as stochastic and chaotic processes (Brú et al. 2017, Gonçalves et al. 2016, Lacasa et al. 2009, Luque et al. 2012, Luque et al. 2011); (ii) feature extraction procedures for statistical learning (Bhaduri & Ghosh 2016, Hou et al. 2016, Long et al. 2014). The application of visibility graphs to neuroscience is in its beginnings and has been limited to the analysis of electroencephalogram (EEG) (Mira-Iglesias et al. 2016, Bhaduri & Ghosh 2016) and functional magnetic resonance imaging (fMRI) data (Sannino et al. 2017). In this paper we use visibility graphs to map functional Near-Infrared Spectroscopy Data Acquisition and Pre-processing (fNIRS) time series into networks and then study the synchronization dynamics in the constructed networks. fNIRS is a technology used to measure brain activity (Li et al. 2020). The HbO signals are captured by optodes positioned in the left and right hemispheres of the prefrontal cortices (PFC) of the participants in the experiment. We then identify a few parameters that permit characterization of the synchronization process. These parameters could be used further to classify the sources of the signal. The analysis of the synchronization phenomena in these signals shows that synchronization does not happen under the same conditions for different participants in the experiment, or even for the different PFC hemispheres of one participant. However, these are preliminary results and need to be compared to alternative methods. Recent research articles (Li et al. 2021, Wang et al. 2022) have outlined an approach to studying the synchronization of individuals' brains when they collaborate to complete a particular task. They propose using the Wavelet Transform Coherence (WTC) to identify locally phase-locked behavior between two time series by measuring cross-correlation between the time series as a function of frequency and time. Furthermore, they used a sliding-windows approach and k-means clustering to obtain a set of representative inter-brain network states during different communication tasks. Our experiment is performed with only one communication task, and our goal in this paper is to compare the conditions under which synchronization is achieved in different individuals and also in different PFCs of the same participant. The rest of the paper is organized as follows. In the second section we define preliminaries on network theory, introduce the visibility graph and its properties, and give the mathematics behind the synchronization measure for undirected, unweighted networks. The third section describes the generation of the data used in this study and provides the results obtained when studying and analyzing synchronization dynamics in brain activity data. The conclusion summarizes the work conducted and the results obtained from our analysis.
2 Background and Methods

2.1 Preliminaries in Networks

Throughout this paper we will refer to a graph (network) as a pair G = (V, E), where V is called the vertex set and E is the edge set. This study is focused only on undirected and unweighted networks with symmetric adjacency matrix A, whose entries are $a_{ij} = 1$ if there exists a link between nodes i and j, and 0 otherwise. The degree of node i is defined as $k_i = \sum_{j=1}^{|V|} a_{ij}$, $i = 1, 2, \ldots, |V|$, and the Laplacian matrix associated to G is $L = D - A$, where $D = \mathrm{diag}(k_1, k_2, \ldots, k_{|V|})$. It is known that L is a positive semidefinite matrix and all its eigenvalues are real and non-negative. In addition, they can be ordered as $0 = \lambda_1 < \lambda_2 < \lambda_3 < \ldots < \lambda_N$, where $\lambda_2$ is known as the algebraic connectivity of the network (Newman 2018, Estrada and Knight 2015, Barabási & Pósfai 2016, Boccaletti et al. 2006).

2.2 The Visibility Graph

The procedure to construct a visibility graph is described in detail in (Lacasa et al. 2008, 2009, 2012). Consider a time series with N data points measured at times $t_i$ with values $x_i$, $i = 1, 2, \ldots, N$, and time points $(t_i, x_i)$, $(t_k, x_k)$ and $(t_j, x_j)$ with $t_i < t_k < t_j$. Time points $(t_i, x_i)$ and $(t_j, x_j)$ are visible, and consequently will become two connected nodes in the visibility graph, if every point $(t_k, x_k)$ between them fulfills the following inequality:

$$x_k < x_j + (x_i - x_j)\,\frac{t_j - t_k}{t_j - t_i} \qquad (1)$$

The network constructed by the above condition has four main properties: it is connected, undirected, invariant under affine transformations of the series data, and it can be applied to every kind of time series (Lacasa et al. 2008). The construction of the visibility graph is illustrated schematically in Fig. 1 for a time series with N = 20.
Fig. 1. Construction of visibility graph corresponding to a univariate time series. Adapted from Lacasa et al. 2008.
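A brute-force sketch of this construction (our code, not the authors'): every pair of samples is linked when all intermediate samples lie strictly below the line joining them, exactly the condition of Eq. (1); the quadratic cost is adequate for short fNIRS segments.

```python
import networkx as nx

def visibility_graph(t, x):
    # Natural visibility graph of the series (t_i, x_i), per Lacasa et al. (2008).
    G = nx.Graph()
    N = len(x)
    G.add_nodes_from(range(N))
    for i in range(N):
        for j in range(i + 1, N):
            # (t_i, x_i) and (t_j, x_j) are visible iff every intermediate
            # sample satisfies the inequality of Eq. (1).
            visible = all(
                x[k] < x[j] + (x[i] - x[j]) * (t[j] - t[k]) / (t[j] - t[i])
                for k in range(i + 1, j)
            )
            if visible:
                G.add_edge(i, j)
    return G
```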
2.3 Synchronization Dynamics

Synchronization is the process whereby many systems adjust a given property of their motion due to a suitable coupling configuration, or to an external forcing (Arenas et al. 2008, Boccaletti et al. 2006, Barrat et al. 2008). Consider a system (connected network) formed by N identical m-dimensional dynamical systems (oscillators) whose states are represented by the vector $X = \{x_1, x_2, \ldots, x_N\}$, where each $x_i$ is an m-dimensional vector ($x_i$ stands for a node of the network). The autonomous evolution of each $x_i$ is determined by the ordinary differential equation $\dot{x}_i = f(x_i)$ ($f : \mathbb{R}^m \to \mathbb{R}^m$). The equation of motion is defined as:

$$\dot{x}_i = f(x_i) - \sigma \sum_{j=1}^{N} L_{ij}\, h(x_j) \qquad (2)$$
where L is the Laplacian matrix, $h : \mathbb{R}^m \to \mathbb{R}^m$ is a vectorial output function and σ is the coupling strength. The completely synchronized state of this network with identical dynamics is computed as the solution of Eq. (2) where $x_1(t) = x_2(t) = \ldots = x_N(t) = s(t)$. In this synchronized state, s(t) is also a solution of the equation $\dot{s} = f(s)$. The subspace of the state space of Eq. (2) where all the oscillators evolve synchronously on the same solution of the isolated oscillator f is called the synchronization manifold. Once all the oscillators are set on the synchronization manifold, they will remain synchronized, and the most important topic now is to evaluate the stability of the synchronized manifold in the presence of small perturbations $\delta x_i$. The synchronized manifold is stable if the perturbations decay in time; otherwise it is not stable. The stability of the synchronized manifold is measured making use of the master stability function. We consider the evolution of the small $\delta x_i$ as linear and write $x_i = s(t) + \delta x_i$. Furthermore, we write the Taylor expansions $f(x_i) = f(s) + Jf(s)\,\delta x_i$ and $h(x_i) = h(s) + Jh(s)\,\delta x_i$, where Jf(s) and Jh(s) are the Jacobian matrices of f and h at s, and obtain the variational equations for $\delta x_i$:

$$\dot{\delta x}_i = Jf(s)\,\delta x_i - \sigma\, Jh(s) \sum_{j=1}^{N} L_{ij}\,\delta x_j \qquad (3)$$
Then, δx is projected onto the eigenspace spanned by the eigenvectors of the Laplacian matrix, and the linear variational equations read:

$$\dot{\xi}_l = \left[ Jf(s) - \sigma \lambda_l\, Jh(s) \right] \xi_l \qquad (4)$$

where $\xi_l$ is the eigenmode corresponding to the eigenvalue $\lambda_l$ of L. Since all the variational equations have the same form and differ only in the term $\alpha = \sigma \lambda_l$, we can write them in vectorial form:

$$\dot{\xi} = \left[ Jf(s) - \alpha\, Jh(s) \right] \xi \qquad (5)$$

To evaluate the stability of the synchronization manifold, one computes the largest Lyapunov exponent $\lambda_{max}$ as a function of α based on the master variational equation.
The synchronization manifold will be stable for all α which give a negative value for $\lambda_{max}$. This approach is given in detail by (Pecora and Carroll, 1998); therein one finds a better understanding of the Master Stability Function (MSF) and its three possible classes. Here, our intention is to determine $\alpha_1$ and $\alpha_2$ so that $\sigma \lambda_l > \alpha_1$ (respectively $\alpha_1 < \sigma \lambda_l < \alpha_2$), $l = 2, \ldots, N$, in the case of a class II (III) MSF. Since the eigenvalues of the Laplacian matrix are non-negative, we obtain the inequalities $\sigma \lambda_2 \le \sigma \lambda_3 \le \ldots \le \sigma \lambda_N$, so the requirement becomes $\sigma \lambda_2 > \alpha_1$ (respectively $\alpha_1 < \sigma \lambda_2$ and $\sigma \lambda_N < \alpha_2$). The synchronization manifold s will be stable only for:
$$\sigma > \frac{\alpha_1}{\lambda_2} \qquad \left( \frac{\alpha_1}{\lambda_2} < \sigma < \frac{\alpha_2}{\lambda_N} \right)$$

Ψ > 0 represents that matrix Ψ is positive definite. For a continuously differentiable function $V(\psi) : \mathbb{R}^n \to \mathbb{R}$, let $\nabla V(\psi) = \left[ \frac{\partial V(\psi)}{\partial \psi} \right]^T$
and $\nabla V(\phi - \varphi) = \left[ \frac{\partial V(\psi)}{\partial \psi} \right]^T \Big|_{\psi = \phi - \varphi}$, where $\psi, \phi, \varphi \in \mathbb{R}^n$. SOS[x] denotes the set of multivariate SOS polynomials, i.e.,

$$\mathrm{SOS}[x] = \left\{ \rho(x) \in \mathbb{R}[x] \;\middle|\; \rho(x) = \sum_{i=1}^{k} p_i^2(x),\; \forall x \in \mathbb{R}^n,\; p_i(x) \in \mathbb{R}[x],\; i = 1, \ldots, k \right\}$$
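For instance (a standard textbook illustration, not an example from this paper), the polynomial below is certified non-negative by an explicit SOS decomposition with k = 2:

$$\rho(x, y) = x^2 - 2xy + 2y^2 = (x - y)^2 + y^2 \in \mathrm{SOS}[x, y]$$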
Graph theory is an essential tool to describe the network topology, which is briefly introduced as follows. Let G = (V, E, A) be a weighted digraph with N nodes, where $V = \{v_1, \ldots, v_N\}$ denotes the set of nodes and $E \subseteq V \times V$ denotes the set of edges between nodes. The weight of edge $\{v_j, v_k\}$ is denoted by $A_{jk} \ge 0$, and the matrix $A = (A_{jk}) \in \mathbb{R}^{N \times N}$ represents the weighted adjacency matrix, which encodes the information dissemination from the k-th node to the j-th node. The Laplacian matrix $L = (L_{jk}) \in \mathbb{R}^{N \times N}$ is defined by $L_{jk} = -A_{jk}$ for $j \neq k$ and $L_{jj} = \sum_{k=1, k \neq j}^{N} A_{jk}$. The digraph G is said to be strongly connected if there exists a directed path connecting any two distinct nodes $v_i$ and $v_j$, $i, j = 1, \ldots, N$. In this article, we suppose that digraph G is strongly connected, i.e., L is an irreducible matrix. Consider a dynamical network with N nonlinearly coupled nodes, where each node is an n-dimensional dynamical system. The mathematical model of the network is given as follows:

$$\dot{x}_i(t) = f(x_i(t)) - \sigma \sum_{j=1}^{N} L_{ij}\, h(x_j(t)) \qquad (1)$$
with $i = 1, \ldots, N$, where $x_i = [x_{i1}, \ldots, x_{in}]^T \in \mathbb{R}^n$ is the state vector of node i, $f(x_i) = [f_1(x_i), \ldots, f_n(x_i)]^T : \mathbb{R}^n \to \mathbb{R}^n$ and $h(x_j) = [h_1(x_j), \ldots, h_n(x_j)]^T : \mathbb{R}^n \to \mathbb{R}^n$ are nonlinear functions, σ is the coupling strength, and $L = (L_{ij}) \in \mathbb{R}^{N \times N}$ is the Laplacian matrix of digraph G, which depicts the coupling configuration with the following properties: the off-diagonal entries satisfy $L_{ij} \le 0$ for $i \neq j$ and the diagonal entries satisfy $L_{ii} = -\sum_{j=1, j \neq i}^{N} L_{ij}$.

Definition 1. If for any ε > 0 there exists a T > 0 such that $||x_i(t) - x_j(t)|| \le \varepsilon$ for arbitrary initial conditions and all t > T, $i, j = 1, \ldots, N$, then network (1) is said to achieve synchronization.

Lemma 1 ([23]). The Laplacian matrix L of digraph G has an eigenvalue 0 with algebraic multiplicity 1, and all the other eigenvalues have positive real parts. The left eigenvector $\zeta = [\zeta_1, \ldots, \zeta_N]^T$ of the Laplacian matrix L corresponding to the eigenvalue 0 with multiplicity 1 satisfies $\zeta^T L = 0$, where $\sum_{i=1}^{N} \zeta_i = 1$ with $\zeta_i > 0$ for $i = 1, \ldots, N$.

Lemma 2 ([24]). The general algebraic connectivity of digraph G is denoted as

$$a_\zeta(L) = \min_{x^T \zeta = 0,\, x \neq 0} \frac{x^T \tilde{L} x}{x^T \Xi x} \qquad (2)$$

where $\tilde{L} = (\Xi L + L^T \Xi)/2$ and $\Xi = \mathrm{diag}\{\zeta_1, \ldots, \zeta_N\}$.
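A numerical sketch of Eq. (2) is given below (our helper, not code from the paper): the left null eigenvector of L yields Ξ, and the constrained Rayleigh quotient is minimized by restricting the generalized symmetric eigenproblem to the subspace orthogonal to ζ.

```python
import numpy as np
from scipy.linalg import eigh, null_space

def general_algebraic_connectivity(L):
    # Left eigenvector of L for eigenvalue 0, normalized to sum to 1 (Lemma 1).
    w, V = np.linalg.eig(L.T)
    zeta = np.real(V[:, np.argmin(np.abs(w))])
    zeta = zeta / zeta.sum()
    Xi = np.diag(zeta)
    Lt = (Xi @ L + L.T @ Xi) / 2
    # Minimize x^T Lt x / x^T Xi x over {x : zeta^T x = 0, x != 0} as a
    # generalized eigenproblem restricted to that subspace.
    Q = null_space(zeta[None, :])
    return eigh(Q.T @ Lt @ Q, Q.T @ Xi @ Q, eigvals_only=True).min()

L = np.array([[4., -2., -2.], [-1., 2., -1.], [-3., 0., 3.]])
print(general_algebraic_connectivity(L))  # Sect. 4 reports 2.7679 for this L
```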
Lemma 3 ([25]). If $\Lambda = (\Lambda_{ij}) \in \mathbb{R}^{N \times N}$ is a symmetric Laplacian matrix, then

$$\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i^T \Lambda_{ij} \beta_j = - \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} (\alpha_i - \alpha_j)^T \Lambda_{ij} (\beta_i - \beta_j) \qquad (3)$$

where $\alpha_i \in \mathbb{R}^n$ and $\beta_j \in \mathbb{R}^n$ for $i, j = 1, \ldots, N$.
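As a quick sanity check (an illustration of ours, not part of the paper), the identity of Eq. (3) can be verified numerically for a random symmetric Laplacian:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 5, 3
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
Lam = np.diag(A.sum(axis=1)) - A                  # symmetric Laplacian
alpha, beta = rng.random((N, n)), rng.random((N, n))

lhs = sum(alpha[i] @ (Lam[i, j] * beta[j]) for i in range(N) for j in range(N))
rhs = -sum((alpha[i] - alpha[j]) @ (Lam[i, j] * (beta[i] - beta[j]))
           for i in range(N) for j in range(i + 1, N))
assert np.isclose(lhs, rhs)
```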
3 Synchronization Analysis and Verification
In this section, we will discuss the synchronization problem of network (1) associated with digraph G. A sufficient condition is proposed by constructing the GLFs to guarantee synchronization, and the PLFs are found by SOS programming to achieve synchronization verification. To begin with, the following assumption about f(·) and h(·) is given.

Assumption 1. For $f(\cdot) : \mathbb{R}^n \to \mathbb{R}^n$ and $h(\cdot) : \mathbb{R}^n \to \mathbb{R}^n$, there exist a continuously differentiable, radially unbounded, positive definite function $V(\cdot) : \mathbb{R}^n \to \mathbb{R}_{\ge 0}$, a vector-valued function $q(\cdot) : \mathbb{R}^n \to \mathbb{R}^m$, and a scalar $\vartheta \in \mathbb{R}$ such that for all $i, j = 1, \ldots, N$,

$$G_{\nabla V}(x_i - \bar{x}, x_j - \bar{x})\, G_f(x_i, x_j) \le \vartheta\, G_{q,W}(x_i - \bar{x}, x_j - \bar{x}) \qquad (4)$$

$$\sum_{i=1}^{N} \sum_{j=1}^{N} \nabla V(x_i - \bar{x})\, L_{ij}\, G_h(x_j, \bar{x}) \ge \sum_{i=1}^{N} \sum_{j=1}^{N} q^T(x_i - \bar{x})\, W\, L_{ij}\, q(x_j - \bar{x}) \qquad (5)$$
hold, where $G_{\nabla V}(a, b) = \nabla V(a) - \nabla V(b)$ and $G_{q,W}(a, b) = [q(a) - q(b)]^T W [q(a) - q(b)]$ for any $a, b \in \mathbb{R}^n$, and $W \in \mathbb{R}^{m \times m}$ is a positive semi-definite matrix.

Remark 1. Note that if $V(x) = \frac{1}{2} x^T P x$, $q(x) = x$, $W = P\Gamma$ and $h(x) = \Gamma x$, there exists a Lipschitz-like constant $\vartheta \in \mathbb{R}$ such that

$$(x - y)^T P (f(x) - f(y)) \le \vartheta\, (x - y)^T P \Gamma (x - y) \qquad (6)$$
for any $x, y \in \mathbb{R}^n$; that is, the condition (4) is directly weakened to the Lipschitz-like condition (6), where P is a positive definite matrix and Γ is a positive semi-definite matrix. The Lipschitz-like condition (6) is usually used in research on the synchronization problem based on the QLFs (e.g., [12,13,24,25]). Also, it is simple to check that the condition (5) always holds in this case, thus the constraint on h(·) is reasonable.
Theorem 1. Suppose that Assumption 1 holds. Network (1) achieves synchronization if $\vartheta - \sigma a_\zeta(L) < 0$.

Proof. See Appendix.

Remark 2. The synchronization criterion is established for network (1) under the directed topology via the GLFs beyond the quadratic form. As discussed above, Assumption 1 can be regarded as a natural relaxation of the Lipschitz-like condition (6), and the GLFs are also able to be reduced to the common QLFs. Compared with [22], we do not need to introduce the strict equality constraint to derive the synchronization criterion. More specifically, if graph G is undirected, Theorem 1 reduces to the results in our previous work [21].

A framework of GLFs is presented to analyze the synchronization problem of network (1) with the nonlinear coupling. It is noteworthy to mention that we presuppose that the continuously differentiable function V(·) exists, so we have to find such a function to certify the rationality of our method. However, the key difficulty is that the computation of V(·), involving multivariate non-negativity conditions, is short of valid computational methods. To the best of our knowledge, the non-negativity conditions (4) and (5) are of inherent complexity, and checking the non-negativity of a polynomial is NP-hard when the degree of the polynomial is four [26]. Fortunately, SOS programming provides convex relaxations for a variety of computationally difficult optimization problems, and it has been a powerful method for computing Lyapunov functions in control theory and engineering (e.g., [17,19,21]). Thus, a convenient approach is to use the SOS relaxations as an apposite substitute for the non-negativity constraints. For polynomial network (1), the non-negativity conditions can be characterized by the stricter SOS conditions. In this way, we can search for the PLFs via SOS programming, for which powerful algorithms and off-the-shelf software are available.

Assumption 2. For $f(\cdot) \in \mathbb{R}[\cdot] : \mathbb{R}^n \to \mathbb{R}^n$ and $h(\cdot) \in \mathbb{R}[\cdot] : \mathbb{R}^n \to \mathbb{R}^n$, there exist a continuously differentiable SOS polynomial function $V(\cdot) : \mathbb{R}^n \to \mathbb{R}_{\ge 0}$, a vector-valued polynomial function $q(\cdot) : \mathbb{R}^n \to \mathbb{R}^m$, and a scalar $\vartheta \in \mathbb{R}$ such that for all $i, j = 1, \ldots, N$, there hold

$$-G_{\nabla V}(x_i - \bar{x}, x_j - \bar{x})\, G_f(x_i, x_j) + \vartheta\, G_{q,W}(x_i - \bar{x}, x_j - \bar{x}) \in \mathrm{SOS}(x) \qquad (7)$$

$$\sum_{i=1}^{N} \sum_{j=1}^{N} \nabla V(x_i - \bar{x})\, L_{ij}\, G_h(x_j, \bar{x}) - \sum_{i=1}^{N} \sum_{j=1}^{N} q^T(x_i - \bar{x})\, W\, L_{ij}\, q(x_j - \bar{x}) \in \mathrm{SOS}(x) \qquad (8)$$
Corollary 1. Suppose that Assumption 2 holds. Network (1) achieves synchronization if ϑ − σaζ (L) < 0.
Remark 3. For the SOS conditions (7) and (8), we know that q(·) and W are both unknown. So we presume that $\hat{q}(\cdot)$ is a vector-valued function of monomials and let $q^T W q = \hat{q}^T \hat{W} \hat{q}$. Then, we only need to solve for the positive semi-definite matrix $\hat{W}$. In addition, (7) still relates to a bilinear semi-definite programming problem and thus is NP-hard. In order to reduce the computational complexity, we can preset the initial value of ϑ to avoid the bilinear problem, which eventually falls within the convex optimization framework, and convex optimization is a computationally tractable problem. As a consequence, we can use the YALMIP toolbox [27] with the MOSEK solver [28], an efficient optimization package, to solve the aforementioned SOS programming problem.
4 Simulation
In this section, we present a numerical example to illustrate the performance of our proposed method. The PLFs are found via YALMIP, equipped with the solver MOSEK, to achieve synchronization verification, which also indicates that our results are less conservative in comparison with the QLFs methods. Consider a Lorenz system with three chaotic oscillators [25,29], each of which is described as:

$$f(x_i) = \begin{bmatrix} a(x_{i2} - x_{i1}) \\ c\,x_{i1} - x_{i2} - x_{i1} x_{i3} \\ x_{i1} x_{i2} - b\,x_{i3} \end{bmatrix} \qquad (9)$$

with i = 1, 2, 3, where a = 10, b = 8/3 and c = 28. The nonlinear coupling function is taken as $h(x_i) = [10(x_{i1} + 2x_{i1}^3), 10(x_{i2} + x_{i2}^3), 10(x_{i3} + 2x_{i3}^3)]^T$. The initial values of $x_i(t)$ are chosen as $x_1(0) = [2, 0, 5]^T$, $x_2(0) = [-2, 8, 3]^T$ and $x_3(0) = [9, 4, 7]^T$, and the Laplacian matrix $L = [4, -2, -2;\ -1, 2, -1;\ -3, 0, 3]$. According to Eq. (2), we have $a_\zeta(L) = 2.7679$. Choose $\deg(V) = 4$, $\deg(\hat{q}) = 3$, and $\hat{q} = [x_1, x_2, x_3, x_1x_2, x_1x_3, x_2x_3, x_1^2, x_2^2, x_3^2, x_1x_2^2, x_1x_2x_3, x_1x_3^2, x_1^2x_2, x_1^2x_3, x_1^3, x_2x_3^2, x_2^2x_3, x_2^3, x_3^3]^T \in \mathbb{R}^{19}$, where deg(·) denotes the degree in the variables given by the argument. By solving the SOS conditions (7) and (8), the constant ϑ and a quartic PLF V(·) are found via MOSEK to achieve synchronization verification, where ϑ = 2.0 and

$V(x_1, x_2, x_3) = 2.56236x_1^4 + 0.93299x_2^4 + 1.46639x_3^4 + 1.00369x_1^2x_2^2 + 0.937965x_1^2x_3^2 + 0.802985x_2^2x_3^2 - 0.002052x_3^3 - 0.001065x_1x_2^3 - 0.000228x_1^2x_3 + 6.57385x_1^2 + 2.59259x_2^2 + 2.55881x_3^2 - 0.011153x_1x_2.$

Afterwards, it is seen from Theorem 1 that system (9) realizes synchronization if the coupling strength satisfies $\sigma > \vartheta / a_\zeta(L) = 0.723$. As shown in Fig. 1, the state trajectories of system (9) with the coupling strength σ = 0.8 ultimately realize synchronization, which validates our analytical results. Compared with the synchronization criteria based on the QLFs methods (e.g., [13,24,25]), the synchronization criteria we established are less conservative, having a smaller lower bound on the coupling strength σ to guarantee synchronization.
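A hedged sketch reproducing this experiment numerically is shown below; the right-hand side follows Eq. (9) and the stated h, L, σ = 0.8 and initial values, while the solver settings and error metric are our choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, sigma = 10.0, 8.0 / 3.0, 28.0, 0.8
L = np.array([[4., -2., -2.], [-1., 2., -1.], [-3., 0., 3.]])

def f(x):  # Lorenz node dynamics, Eq. (9)
    return np.array([a * (x[1] - x[0]),
                     c * x[0] - x[1] - x[0] * x[2],
                     x[0] * x[1] - b * x[2]])

def h(x):  # cubic coupling function
    return np.array([10 * (x[0] + 2 * x[0] ** 3),
                     10 * (x[1] + x[1] ** 3),
                     10 * (x[2] + 2 * x[2] ** 3)])

def rhs(t, X):
    x = X.reshape(3, 3)                               # row i = state of node i
    hx = np.array([h(xi) for xi in x])
    return np.concatenate([f(x[i]) - sigma * (L[i] @ hx) for i in range(3)])

X0 = np.array([2., 0., 5., -2., 8., 3., 9., 4., 7.])  # x1(0), x2(0), x3(0)
sol = solve_ivp(rhs, (0.0, 2.0), X0, method="LSODA", rtol=1e-8, atol=1e-8)

# Pairwise state differences should shrink toward zero as t grows.
print(np.abs(sol.y[:3, -1] - sol.y[3:6, -1]).max())
```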
Fig. 1. Synchronization trajectories of xi (t) for system (9).
5 Conclusion
In this article, we have investigated the synchronization problem of nonlinear networked systems under a directed topology based on the construction of GLFs. For polynomial systems, we have searched for less conservative PLFs via the SOS programming approach to achieve synchronization analysis and verification. Finally, a simulation example of the Lorenz system has been given to demonstrate the effectiveness of the theoretical results. In realistic networks, uncertainty and noise owing to natural or human disturbances cannot be neglected. In the future, our focus is on utilizing the proposed SOS programming approach to automatically find the PLFs and verify the synchronization of stochastic networked systems.

Acknowledgement. The authors gratefully acknowledge the support from the following foundations: the National Natural Science Foundation of China under Grant 61873017 and Grant 61973017, the Academic Excellence Foundation of BUAA for PhD Students, and the Outstanding Research Project of Shen Yuan Honors College, BUAA under Grant 230121101 and Grant 230122104.
Appendix. Proof of Theorem 1

Let $\bar{x} = \sum_{r=1}^{N} \zeta_r x_r$ and $\tilde{V} = \sum_{i=1}^{N} \zeta_i V(x_i - \bar{x})$, where V is defined in Assumption 1. Then, we have

$$\dot{\bar{x}} = \sum_{r=1}^{N} \zeta_r f(x_r) - \sigma \sum_{r=1}^{N} \zeta_r \sum_{j=1}^{N} L_{rj}\, h(x_j) \qquad (10)$$
It follows from Lemma 1 that $\zeta = [\zeta_1, \ldots, \zeta_N]^T$ is the left eigenvector of L corresponding to the eigenvalue 0, so we have

$$\zeta^T L = \sum_{r=1}^{N} \zeta_r L_{rj} = 0 \qquad (11)$$

i.e.,

$$\sum_{r=1}^{N} \zeta_r \sum_{j=1}^{N} L_{rj}\, h(x_j) = 0 \qquad (12)$$
Then, we have

$$\dot{x}_i - \dot{\bar{x}} = f(x_i) - \sum_{r=1}^{N} \zeta_r f(x_r) - \sigma \sum_{j=1}^{N} L_{ij}\, h(x_j) \qquad (13)$$
We now show that $\tilde{V}$ is a GLF for network (1). Taking the derivative of $\tilde{V}$ yields

$$
\begin{aligned}
\dot{\tilde{V}} &= \sum_{i=1}^{N} \zeta_i \nabla V(x_i - \bar{x}) (\dot{x}_i - \dot{\bar{x}}) \\
&= \sum_{i=1}^{N} \zeta_i \nabla V(x_i - \bar{x}) \Big( f(x_i) - \sum_{r=1}^{N} \zeta_r f(x_r) \Big) - \sigma \sum_{i=1}^{N} \zeta_i \nabla V(x_i - \bar{x}) \sum_{j=1}^{N} L_{ij}\, h(x_j) \\
&= \sum_{i=1}^{N} \sum_{r=1}^{N} \nabla V(x_i - \bar{x})\, U_{ir}\, f(x_r) - \sigma \sum_{i=1}^{N} \zeta_i \nabla V(x_i - \bar{x}) \sum_{j=1}^{N} L_{ij} \big( h(x_j) - h(\bar{x}) \big) \\
&= - \sum_{i=1}^{N-1} \sum_{r=i+1}^{N} G_{\nabla V}(x_i - \bar{x}, x_r - \bar{x})\, U_{ir}\, G_f(x_i, x_r) - \sigma \sum_{i=1}^{N} \zeta_i \nabla V(x_i - \bar{x}) \sum_{j=1}^{N} L_{ij}\, G_h(x_j, \bar{x})
\end{aligned} \qquad (14)
$$
where $U = (U_{ir}) \in \mathbb{R}^{N \times N} = \Xi - \zeta \zeta^T$, which is a symmetric matrix with zero-sum rows and negative off-diagonal entries satisfying Lemma 3. According to (4) and (5), we have

$$
\begin{aligned}
\dot{\tilde{V}} &\le -\vartheta \sum_{i=1}^{N-1} \sum_{r=i+1}^{N} U_{ir}\, G_{q,W}(x_i - \bar{x}, x_r - \bar{x}) - \sigma \sum_{i=1}^{N} \sum_{j=1}^{N} \zeta_i L_{ij}\, q^T(x_i - \bar{x})\, W\, q(x_j - \bar{x}) \\
&= \vartheta \sum_{i=1}^{N} \sum_{r=1}^{N} U_{ir}\, q^T(x_i - \bar{x})\, W\, q(x_r - \bar{x}) - \sigma \sum_{i=1}^{N} \sum_{j=1}^{N} \tilde{L}_{ij}\, q^T(x_i - \bar{x})\, W\, q(x_j - \bar{x}) \\
&= \vartheta\, Q^T(x - \bar{x}) (U \otimes W) Q(x - \bar{x}) - \sigma\, Q^T(x - \bar{x}) (\tilde{L} \otimes W) Q(x - \bar{x}) \\
&\le Q^T(x - \bar{x}) \big( (\vartheta \Xi - \sigma \tilde{L}) \otimes W \big) Q(x - \bar{x})
\end{aligned} \qquad (15)
$$
where $Q(x - \bar{x}) = [q^T(x_1 - \bar{x}), \ldots, q^T(x_N - \bar{x})]^T$. Based on Eq. (2), one sees that $\dot{\tilde{V}} \le (\vartheta - \sigma a_\zeta(L))\, Q^T(x - \bar{x}) (\Xi \otimes W) Q(x - \bar{x})$. Thus, if $\vartheta - \sigma a_\zeta(L) < 0$ holds, we have $\dot{\tilde{V}} < 0$, which indicates that synchronization of network (1) under the directed topology is achieved.
References

1. Arenas, A., Diaz-Guilera, A., Kurths, J., Moreno, Y., Zhou, C.: Synchronization in complex networks. Phys. Rep. 469(3), 93–153 (2008)
2. Cornelius, S.P., Kath, W.L., Motter, A.E.: Realistic control of network dynamics. Nat. Commun. 4, 1–11 (2013)
3. Brask, J.B., Ellis, S., Croft, D.P.: Animal social networks: an introduction for complex systems scientists. J. Complex Netw. 9(2), 1–19 (2021)
4. Tanner, H.G., Jadbabaie, A., Pappas, G.J.: Flocking in fixed and switching networks. IEEE Trans. Autom. Control 52(5), 863–868 (2007)
5. Vicsek, T., Czirók, A., Ben-Jacob, E., Cohen, I., Shochet, O.: Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 75(6), 1226–1229 (1995)
6. Panteley, E., Loria, A.: Synchronization and dynamic consensus of heterogeneous networked systems. IEEE Trans. Autom. Control 62(8), 3758–3773 (2017)
7. Wang, L., Chen, M.Z.Q., Wang, Q.-G.: Bounded synchronization of a heterogeneous complex switched network. Automatica 56(6), 19–24 (2015)
8. Stilwell, D., Bollt, E., Roberson, D.: Sufficient conditions for fast switching synchronization in time-varying network topologies. SIAM J. Appl. Dyn. Syst. 5(1), 140–156 (2006)
9. Qin, J., Ma, Q., Yu, X., Wang, L.: On synchronization of dynamical systems over directed switching topologies: an algebraic and geometric perspective. IEEE Trans. Autom. Control 65(12), 5083–5098 (2020)
10. He, W., et al.: Multiagent systems on multilayer networks: synchronization analysis and network design. IEEE Trans. Syst., Man, Cybern., Syst. 47(7), 1655–1667 (2017)
11. Belykh, V.N., Belykh, I.V., Hasler, M.: Connection graph stability method for synchronization of coupled chaotic systems. Phys. D 195(1–2), 159–187 (2004)
12. DeLellis, P., di Bernardo, M., Russo, G.: On QUAD, Lipschitz, and contracting vector fields for consensus and synchronization of networks. IEEE Trans. Circ. Syst. I, Reg. Papers 58(3), 576–583 (2011)
13. Wen, G., Duan, Z., Chen, G., Yu, W.: Consensus tracking of multi-agent systems with Lipschitz-type node dynamics and switching topologies. IEEE Trans. Circuits Syst. I, Reg. Papers 61(2), 499–511 (2014)
14. Zhang, Z., Yan, W., Li, H.: Distributed optimal control for linear multiagent systems on general digraphs. IEEE Trans. Autom. Control 66(1), 322–328 (2021)
15. Xiang, J., Chen, G.: On the V-stability of complex dynamical networks. Automatica 43(6), 1049–1057 (2007)
16. Zhao, J., Hill, D.J., Liu, T.: Stability of dynamical networks with non-identical nodes: a multiple V-Lyapunov function method. Automatica 47(12), 2615–2625 (2011)
17. Papachristodoulou, A., Prajna, S.: On the construction of Lyapunov functions using the sum of squares decomposition. In: Proceedings of the IEEE Conference on Decision and Control, Las Vegas, Nevada, USA (2002)
18. Papachristodoulou, A., Prajna, S.: A tutorial on sum of squares techniques for systems analysis. In: Proceedings of the American Control Conference, Portland, USA (2005)
19. Zhang, S., Song, S., Wang, L., Xue, B.: Stability verification for heterogeneous complex networks via iterative SOS programming. IEEE Control Syst. Lett. 7, 559–564 (2023)
20. Liang, Q., She, Z., Wang, L., Su, H.: General Lyapunov functions for consensus of nonlinear multiagent systems. IEEE Trans. Circuits Syst. II, Exp. Briefs 64(10), 1232–1236 (2017)
21. Zhang, S., Wang, L., Liang, Q., She, Z., Wang, Q.-G.: Polynomial Lyapunov functions for synchronization of nonlinearly coupled complex networks. IEEE Trans. Cybern. 52(3), 1812–1821 (2022)
22. Liang, Q., Ong, C., She, Z.: Sum-of-squares-based consensus verification for directed networks with nonlinear protocols. Int. J. Robust Nonlinear Control 30(4), 1719–1732 (2020)
23. Godsil, C., Royle, G.: Algebraic Graph Theory. Springer-Verlag, New York (2001)
24. Yu, W., Chen, G., Cao, M.: Consensus in directed networks of agents with nonlinear dynamics. IEEE Trans. Autom. Control 56(6), 1436–1441 (2011)
25. Liu, X., Chen, T.: Synchronization analysis for nonlinearly-coupled complex networks with an asymmetrical coupling matrix. Phys. A 387(16–17), 4429–4439 (2008)
26. Papachristodoulou, A., Prajna, S.: Analysis of non-polynomial systems using the sum of squares decomposition, vol. 312, pp. 23–43. Springer (2005). https://doi.org/10.1007/10997703_2
27. Löfberg, J.: YALMIP: a toolbox for modeling and optimization in MATLAB. In: IEEE International Symposium on Computer Aided Control Systems Design, Taipei, Taiwan (2004)
448
S. Zhang et al.
28. Mosek, A.: The MOSEK optimization toolbox for MATLAB manual, Version 7.1 (2015) 29. Lu, J., Chen, G., Cheng, D., Celikovsky, S.: Bridge the gap between the Lorenz system and the Chen system. Int. J. Bifurcat. Chaos 12(12), 2917–2926 (2002)
Tolerance-Based Disruption-Tolerant Consensus in Directed Networks

Agathe Bouis(B), Christopher Lowe, Ruaridh Clark, and Malcolm Macdonald

University of Strathclyde, Glasgow, Scotland
[email protected]
Abstract. This article addresses the problem of resilient consensus for multi-agent networks. Resilience is used here to distinguish disruptive agents from compliant agents which follow a given control law. We present an algorithm enabling efficient and resilient network consensus based on an inversion of the social dynamics of the Deffuant model with emotions. This is achieved through the exploitation of a dynamic tolerance linked to extremism and clustering, whereby agents filter out extreme non-standard opinions driving them away from consensus. This method is not dependent on prior knowledge of either the network topology or the number of disruptive agents, making it suitable for real-world applications where this information is typically unavailable.

Keywords: consensus algorithm · fault-tolerance · social dynamics

1 Introduction
Methods of network control are shifting from centralised to distributed to avoid communication overheads and their associated latencies and single-point failures [1,2]. New theory must cope with issues of security and resilience to faults and deliberate attacks against ever-growing networks (both in number of nodes and connections). These methods are characterised by the realistic systems to which they are applied: systems with unknown numbers of assailants and unknown network topology. This paper presents a fully decentralised, adaptive consensus protocol handling the specifications of realistic networks using a social dynamics framework. The foundation of distributed control resides on consensus building, that is, for nodes within a system to reach a common shared state. Network resilience is commonly defined as the ability to cope with, and/or recover from, disruptions within the network [3]. These malfunctions can be faults or deliberate attacks [4], which are manifested in this paper under the same form: undetectable faults. The simplest but most effective approach to obtain consensus is through mean-based computation; however, these methods are not resilient [4]. They must therefore be expanded to ensure fault-resilience, one typical approach being the Mean-Subsequence-Reduced (MSR)-type methods [4,5]. Such methods ensure their fault-tolerance by removing a set number F of extreme values at
each consensus step, enabling agents to reach an agreement ignoring disruptive values. This number F corresponds to the maximum possible number of faults that the system can cope with. By requiring knowledge of this (arbitrary) design value, MSR algorithms are ill-adapted to real-life scenarios where the number of threats is generally unknown. Herein, a social dynamics modelling perspective is adopted to provide a dynamic solution to the consensus problem that requires no such information and therefore adapts better to real-life scenarios. This approach is chosen based on the comparable format of the update rule of both consensus protocols and social dynamics models, where the update rule characterises the evolution (update) of agents' opinions as a result of an interaction/having received state values from other agents. On the basis of these similar mechanisms of opinion/state-value update, we postulate that behaviours and features causing dissension in social networks are the same behaviours and features preventing consensus in synchronisation protocols. Thus, features of social dynamics resulting in consensus are analysed, extracted, and implemented in the context of a consensus protocol.
Social dynamics models characterise the underlying behaviour of social interactions. That is, people in everyday life influence one another and adapt towards, or away from, the expressed opinions of the people they have interacted with. A model of interest is the one proposed by Deffuant et al. [6]. Deffuant's bounded confidence (BC) model is a stochastic model of the evolution of continuous-valued opinions within a given range [7], where the range is usually set between −1 and 1 [8]; moderate opinions are close to 0 and extreme ones approach either −1 or 1. The framework can be understood as follows: two agents coming into contact must hold sufficiently close opinions in order to mutually influence one another. When encountering agents of radically different opinions, that is, when the difference of opinions is too large, there are no changes with respect to the opinions held prior to the communication. The threshold for the difference is known as the agent's tolerance d. Denoting the opinion of agent i at a given time t as o_i[t], the interaction of agent i with agent j can be described with the following equation:

o_i[t + 1] = o_i[t] + η (o_j[t] − o_i[t])   if |o_j[t] − o_i[t]| < d    (1)
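As a minimal illustration of Eq. (1), the bounded-confidence step can be sketched as follows (the function name is illustrative; the tolerance d and rate η are free parameters, and only agent i's side of the interaction is shown, matching the equation above):

    import random

    def pair_update(o_i, o_j, d, eta):
        # Eq. (1): agent i moves towards agent j only if their
        # opinions differ by less than the tolerance d
        if abs(o_j - o_i) < d:
            o_i = o_i + eta * (o_j - o_i)
        return o_i

    # one random pairwise interaction in a population of 100 opinions in [-1, 1]
    opinions = [random.uniform(-1.0, 1.0) for _ in range(100)]
    i, j = random.sample(range(100), 2)
    opinions[i] = pair_update(opinions[i], opinions[j], d=0.3, eta=0.5)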
When implementing a fixed universal tolerance, the final distribution of opinions can be categorised as: consensus, when a single opinion dominates; polarisation, when two opinions emerge; and fragmentation, when several opinion clusters can be observed [9]. The transition from one phase to the next has been linked to the universal tolerance threshold. A secondary dimension to improve the model's realism was introduced by Sobkowicz: by deriving an explicit correlation between extreme opinions and low tolerance, they explain the tendency of agents with extreme opinions to be stubborn, while agents with opinions more in line with the average tend to be more easily swayed towards extremism [9–11]. The latter point is one which can and has notably been exploited by bad actors to lead to disinformation, dissension, and fragmentation of opinions [9,12,13]. This behaviour is addressed
by morphing the universal tolerance d into individual agents' tolerances. These tolerances are then updated as a function of the agent's own opinion with respect to other opinions, as d_i = f(|o_i|). The mechanism of belief diffusion is not solely observed in opinion dynamics; it also represents the foundation of consensus protocols in networks, wherein agents communicate with their (usually one-hop) neighbours to update their own state variables. The source of conflict in both model and protocol can be understood to stem from the same disruptive attitude. As observed in the rise of political extremism propelled by small but vocal minorities in social environments, agents with extreme opinions compel other non-extreme agents (moderates) into becoming extremists themselves [14,15]. This paper exploits, and inverts, these mechanisms to design a consensus protocol resistant to faulty/disruptive actors (extremists), termed the Clusters Consensus Protocol, to enable resilient consensus to be reached and maintained despite a lack of awareness of network topology and number of disruptive agents. This is achieved through the exploitation of a dynamic tolerance linked to extremism and clustering, whereby agents filter out extreme non-standard opinions driving them away from consensus. This makes the approach particularly suitable for real-world applications where there is limited, to no, external information and where synchronisation is critical.
2 Model Formulation
Let G(t) = (V, ε, A) be a digraph with a finite set of nodes (agents) V = {1, 2, ..., N}, a set of directed edges ε ⊆ V × V, and an adjacency matrix A = (a_ij) ∈ R^(N×N). Each directed edge (j, i) ∈ ε represents a directional link between the node pair (j, i), such that communication between the nodes is enabled. In the model described here, it is assumed that the nodes communicate synchronously at each given opportunity, such that the edges model the flow of information between the two nodes. The set of nodes influencing node i, its in-neighbours, is defined as V_i^in = {j ∈ V : (j, i) ∈ ε}, while the nodes influenced by i, its out-neighbours, are in the set V_i^out = {j ∈ V : (i, j) ∈ ε}. The components of the adjacency matrix satisfy a_ij = 1 if j ∈ V_i^in, that is, if node i can receive information from node j, and a_ij = 0 otherwise. It should also be noted that each node has access to its own value at each time step, thus making it consider its inclusive neighbourhood.
As defined by LeBlanc [4], network robustness formalises the notion of redundancy of direct information exchange between the various subsets in the network. It is a term which captures the idea that every pair of nonempty, disjoint sets within the network contains enough nodes that have at least a given number of neighbours outside of their respective sets. That is, each node receives enough information so that it will not be swayed by disruptive agents attempting to "contaminate" the subset to which the node belongs. We direct the reader towards [4] for details regarding this robustness calculation.
The premise of the Cluster Consensus Protocol is an undetectable fault or attack scenario disrupting the system's integrity [16]: in our context, the
faults or attacks prevent the synchronisation of the different agents' opinions. Cybersecurity-wise, this is understood as a "deceptive" attack, as non-disruptive agents remain unaware of the presence of disruptive agents. As modelled in this paper, the disruptive agents do not follow the Cluster Consensus Protocol and instead adopt unknown protocols or values and broadcast them to prevent the network from reaching consensus [17] (details specified in the results section). To ensure that the protocol is useful and applicable, the properties typically used when discussing consensus protocols were set as design requirements. These properties (termination, integrity, agreement) were adapted to make the protocol as comprehensive as possible. In addition, with time synchronisation as the final goal of the protocol, a stabilisation property is introduced:

1. Termination: all cooperative agents decide on some value.
2. Integrity: the agreed-upon value is not dictated by disruptive agents.
3. Agreement: all cooperative agents must agree with each other within an error threshold (η).
4. Stabilisation: once an agreement is reached, all cooperative nodes remain in agreement.
3 Method: Clusters Consensus Protocol
The protocol introduced in this paper postulates that, if certain features in social environments are observed to cause conflict, then those same features can be adjusted to nullify or dampen extremist behaviours. Based on this principle, it is advanced that the exploitation and inversion of these mechanisms can result in the design of a disruption-tolerant consensus protocol. The Clusters Consensus Protocol described below uses the mechanisms of social dynamics to implement a dynamic tolerance which uses the z-score, or standard score, of opinions as its main discriminant for filtering. The z-score represents a conversion of raw scores into normalised ones. Using the z-score calculated from a node's local connections as the subject of the protocol allows any range of input values to be considered (unlike the Deffuant model, which has a fixed input range). The z-score also acts as a statistical indicator of where an agent's opinion sits within a given distribution (made up of its local connections' state values), quantifying how close an opinion is to the local mean. Thus, by replacing the raw opinion o_i[t] with z-scores as the subject of the filtering, the mechanics of social dynamics can be used to ensure convergence. This approach also provides an elegant solution to the problem of filtering out extreme values while allowing for a level of adaptability dependent on the agent's own opinion. The protocol is composed of two filters acting in series, both of which make use of the logic of z-scores. The first, absolute, filter removes outlier values so as to prevent them from affecting the final consensus value, that is, the value on which all agents end up converging. The second, dynamic, filter links agents' own values and the dispersion of the received state values to create a dynamic tolerance for each agent. This tolerance, which changes at each time step, indicates which values are to be considered by the agent for its update at the next time step.
Update is here used to indicate the mathematical process of agents taking the average of filtered values and changing their own state value in the direction of that mean. The Absolute Filter uses the z-score in its capacity as a statistical indicator to point out extreme values. Much like the standard deviation, z-scores can theoretically take on any value. However, the larger the value, the lower the probability of occurrence. As such, z-score values larger than 2 are highly improbable (being in the 95th percentile of a distribution) and can be considered outliers. These outliers will in turn affect all of the distribution's z-scores, making them appear smaller as a result of the outlier value. This bias can result in disruptive agents with less extreme state values being considered for the update. Therefore, to prevent this bias, extreme values far from the mean can be pre-processed and filtered out by the absolute filter. The second element of the Clusters Consensus Protocol is the Dynamic Filter, which links each agent's tolerance to its state value and to the level of clustering of the data received. The filter's use of the z-score and its dynamism enable it to adapt to each agent's state value and ensure that the agent converges during its update step, such that agents with extreme values will consider many of the opinions received for their update, while agents already sitting firmly on the average will remain where they are.
3.1 Protocol Rules
The synchronous Cluster Consensus Protocol implemented by cooperative agents is presented in Fig. 1.
Fig. 1. Cluster Consensus Flow Diagram, where the Clusters Consensus Protocol is completed simultaneously for all agents. The Stability Assessment is then performed on all new agents' state values.
The protocol begins with the Collection of Neighbour State Values. At each time step t ∈ N, each cooperative node i sends its state variable o_i[t] to all of the nodes in its out-neighbourhood j ∈ V_i^out and in turn receives the state
variables o_j[t] from its in-neighbours j ∈ V_i^in (cooperative and disruptive, if they exist). Each agent i therefore receives an array of o_j[t], which it concatenates with its own state value, coalescing them into a single array as

O_i[t] = (o_i[t], o_j[t] : j ∈ V_i^in)    (2)

The mean μ_i^{O_i} and standard deviation σ_i^{O_i} of the concatenated array O_i[t] are determined from:

μ_i^{O_i} = (1/|O_i|) Σ_{j=1}^{|O_i|} o_j    (3)

σ_i^{O_i} = sqrt( (1/|O_i|) Σ_{j=1}^{|O_i|} (o_j − μ_i^{O_i})² )    (4)

This in turn allows the z-score to be calculated as:

z-score_i(O_i) = abs( (O_i[t] − μ_i^{O_i}) / σ_i^{O_i} )    (5)

These newly calculated z-scores are used to remove extreme values which bias the filtering process. Here, the Absolute Filter pre-processes the values whose z-scores are larger than 2, that is, whose values are 2 standard deviations away from the mean (95th percentile of the distribution tail). The absolute filtering is expressed as:

F_i[t] = O_i( z-score_i(O_i) < 2 )[t]    (6)

The next step of the Clusters Consensus Protocol is the calculation of the Dynamic Filter Parameters. These include the z-score_i(F_i) and the coefficient of variation ĉ_i^{F_i} of the distribution of absolute-filter values F_i[t]. To be calculated, the mean μ_i^{F_i} and standard deviation σ_i^{F_i} of the absolute-filter values F_i[t] must be determined from Eqs. (3) and (4), respectively, using F_i[t] rather than O_i[t]. Using the new values of μ_i^{F_i} and σ_i^{F_i}, the z-score_i(F_i) and the coefficient of variation can be assessed using:

z-score_i(F_i) = abs( (F_i[t] − μ_i^{F_i}) / σ_i^{F_i} )    (7)

ĉ_i^{F_i} = σ_i^{F_i} / μ_i^{F_i}    (8)

Once the filter parameters are set, the Dynamic Filter can be created as a function of agent i's own z-score, z-score_i^{agent}(F_i), and the level of clustering of the opinions received. This is expressed as

z-score_i^{filter}(F_i) = 2 · ( z-score_i^{agent}(F_i) / 2 )^(1 + ĉ_i^{F_i})    (9)
Here the 2 is used as a hinge point to adapt the strictness of the filter. The value was chosen as a logical follow-up to the absolute filter (which uses 2 as its filter value) as well as for its statistical relevance: it covers approximately 95% of the distribution, making values with z-scores larger than 2 outliers, which should no longer be present in the data. In practice, for non-extreme agents (z-score_i^{agent} < 2), the filter becomes stricter the more dispersed the received data is. As a result, agents will be less tolerant and will reduce the range of values they consider for their update. This impact will be especially notable for agents already standing close to the mean, where the increased strictness will be at its strongest. On the other hand, for extreme agents (z-score_i^{agent} ≥ 2), the filter becomes more tolerant in cases of a flat distribution, under the assumption that the distribution's dispersion is in part the work of the agent's own value. This will result in the agent considering more values for its update, enabling it to come closer to the mean (even if influenced by disruptive agents). Once the filter value is set for each agent, they can then proceed to the Dynamic Filtering, assessing which values will contribute to the update. Values are filtered out if their z-scores are higher than the dynamic filter value. Selected values can be understood as:

S_i[t] = F_i( z-score_i(F_i) < z-score_i^{filter}(F_i) )[t]    (10)

meaning that the only values the node will accept are those closer to the mean (centre of the cluster) than its own value. Following these two filtering steps, each agent Updates its value based on the average difference between the selected values S_i[t] and its own past value o_i[t − 1]. The extent of the update is tuned by the learning rate of the consensus η, here set to 0.618 (from the golden ratio 1.61803, removing the leading 1, as learning rates normally lie in the range 0–1). This function is expressed as:

o_i[t + 1] = o_i[t] + η · (1/|S_i|) Σ_{j=1}^{|S_i|} (s_j[t] − o_i[t])    (11)
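Putting Eqs. (2)-(11) together, one synchronous step of the protocol for a single agent can be sketched as below (a minimal sketch assuming NumPy; the function name, the guards for degenerate cases, and the example values are illustrative rather than part of the original specification):

    import numpy as np

    def cluster_consensus_step(o_i, o_neigh, eta=0.618):
        O = np.append(np.asarray(o_neigh, dtype=float), o_i)   # Eq. (2)
        if O.std() == 0:                                       # already in agreement
            return o_i
        z_O = np.abs((O - O.mean()) / O.std())                 # Eqs. (3)-(5)
        F = O[z_O < 2]                                         # Eq. (6), absolute filter
        mu_F, sigma_F = F.mean(), F.std()
        if sigma_F == 0:
            return o_i + eta * (mu_F - o_i)
        z_F = np.abs((F - mu_F) / sigma_F)                     # Eq. (7)
        c_hat = sigma_F / mu_F                                 # Eq. (8), coeff. of variation
        z_agent = abs((o_i - mu_F) / sigma_F)                  # agent's own z-score
        z_filter = 2 * (z_agent / 2) ** (1 + c_hat)            # Eq. (9), dynamic tolerance
        S = F[z_F < z_filter]                                  # Eq. (10), selected values
        if S.size == 0:                                        # nothing passes the filter
            return o_i
        return o_i + eta * np.mean(S - o_i)                    # Eq. (11), update

At each time step, every cooperative agent would call such a routine simultaneously on the state values received from its in-neighbours.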
3.2 Convergence Metric
To visualise the opinion trajectories of the agents, a simple metric assessing convergence (CM, Convergence Metric) was implemented. At each time step, the values of the non-disruptive nodes are tabulated. The differences between all of these values are assessed and their absolute values are summed. This Total Difference (TD) value is then normalised with respect to the initial difference calculated at the first time step, before any of the protocol's updates:

CM[t] = TD[t] / TD[0]    (12)

TD[t] = Σ_{k=1}^{V} Σ_{m=1}^{V} |o_k[t] − o_m[t]|    (13)
where o_k ∈ O_k and o_m ∈ O_m are agents' opinions, for k, m ∈ [1, V], such that between the two loops over k and m all agent opinions are compared and their absolute differences tabulated.
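Assuming the compliant opinions at each time step are stored as arrays, Eqs. (12)-(13) reduce to a few lines (a sketch; the names are illustrative):

    import numpy as np

    def total_difference(opinions):
        # TD[t] of Eq. (13): sum of pairwise absolute differences
        o = np.asarray(opinions, dtype=float)
        return np.abs(o[:, None] - o[None, :]).sum()

    def convergence_metric(trajectory):
        # CM[t] of Eq. (12): TD normalised by its value at the first step;
        # trajectory is a list of per-time-step compliant opinion arrays
        td0 = total_difference(trajectory[0])
        return [total_difference(o) / td0 for o in trajectory]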
4 Analysis
In this section, the convergence of the consensus protocol is proven by assessing the changing (converging) boundedness of cooperative agents at each time step. Recalling the update of the protocol from Eq. (11), it can be rewritten as:

o_i[t + 1] = o_i[t] + η · ( (1/|S_i|) · 1ᵀ_{S_i} · S_i[t] − o_i[t] )    (14)

This equation can be rearranged as follows:

o_i[t + 1] = o_i[t] + η · ( μ_{S_i}[t] − o_i[t] )    (15)

where μ_{S_i}[t] is the local mean of agent i, that is, the mean of the filtered values S_i[t] at time t. It should be noted that μ_{S_i}[t] is not necessarily equal to μ_i[t], as extremist values are filtered out of the distribution:

μ_{S_i}[t] = (1/|S_i|) · 1ᵀ · S_i[t]    (16)

Given this rearranging, the following statement can be made:

lim_{t→∞} o_i[t] → μ_{S_i}[t]    (17)

However, as established by the protocol rules, the values S_i[t] change at each time step, and so will the local mean. The local mean will in turn evolve to a global mean M[t] as the agents communicate with each other. The global mean will also vary slightly as the agents and the local mean of the values received change from time step to time step. Nonetheless, the global mean will remain stable (or see only small oscillations) due to it being a weighted mean of the local means, in addition to being a function of the network topology and of the initial state values.

lim_{t→∞} μ_{S_i}[t] → M[t]    (18)

∴ lim_{t→∞} o_i[t] → M[t]    (19)

This can be rearranged such that:

∴ lim_{t→∞} ( o_i[t] − M[t] ) → 0    (20)

Which can be understood in terms of the bounds as saying that:

|o_i[t + Δ] − M[t + Δ]| ≤ |o_i[t] − M[t]|    (21)
where t + Δ > t, as Δ > 0 is a positive non-zero value representing a time delta. Note here that although continuous time is used with Δ, the consensus protocol operates in a discrete synchronous manner. On the basis that all agents send (and receive) their opinions at the same time and at the same frequency, time can be counted in a discrete fashion with time steps. As such, Δ can also be viewed as being a non-zero number of time steps. When looking at the state values o[t] of all the cooperative agents:

max(o_i[t + Δ]) ≤ max(o_i[t])    (22)

and

min(o_i[t + Δ]) ≥ min(o_i[t])    (23)

such that, if r is the range of opinions of compliant nodes:

r[t] = max(o[t]) − min(o[t]),   r[t + Δ] = max(o[t + Δ]) − min(o[t + Δ]),   r[t + Δ] ≤ r[t]    (24)
This means that we will see a decreasing range of agents' opinions over time as the agents converge. While it is known that the agents will tend towards their local mean, as a result of the filtered mean-based consensus, the "pull" or attraction towards the local mean may lead an agent's opinion further away from the global mean. However, as long as the sufficient and necessary conditions for the network are met ((F+1, F+1)-robustness as described by LeBlanc in [4]), this local mean will in turn converge to the global mean.
5 Results
We consider here two scenarios, one for a multi-agent network with N = 7 agents and the other with N = 15 agents, where the disruptive agents are D = {1, 7} for the 7-node network and D = {4, 12} for the 15-node network. The 7-node network is (3,3)-robust, while the 15-node network is (5,1)-robust (in addition to being (4,2)- and (3,2)-robust). Taking for all agents (compliant and disruptive) initial values from a normal distribution (μ = 8, σ = 7), the dynamical evolution of the opinions is shown for the two networks. Note that disruptive agents become active from time step 2 onward and keep the same node identity during the entire simulation. The nodes also do not recover and are considered to remain disruptive for the extent of the run. To assess in more detail the results of the convergence, a convergence metric is calculated. This metric represents the normalised difference between all of the compliant nodes (described in more detail in Sect. 3.2). The results of the simulation of the 7-node network are shown in Fig. 2. The first column (Fig. 2 (a) and (c)) shows the opinion trajectory with (top) and without (bottom) the protocol. The second column shows the network topology in Fig. 2 (b) and the convergence metric in Fig. 2 (d) (note that the Protocol metric disappears after 18 time steps as it becomes too small (< 10⁻⁶) to plot).
Fig. 2. Comparing the result of the Clusters Consensus Protocol vs No Protocol for the 7-Node (3,3)-robust Network with 2 Disruptive Agents. On the left, the Opinion Trajectory of the nodes over time is shown. On the top right is the network diagram (disruptive agents shown in Blue), and on the bottom right is the Convergence Metric of the Compliant (non-Disruptive) Nodes calculated as the normalised difference between Compliant Nodes
The disruptive behaviours are a sine wave with a low period and varying amplitude for node 1 and a sine wave with a large period and a smaller amplitude for node 7. The initial conditions of all the nodes, along with the behaviours and parameters of the disruptive nodes, are the same for both the Protocol and Non-Protocol scenarios. As observed in Fig. 2, when the protocol is implemented, a fast consensus (under 15 time steps) is achieved despite two disruptive nodes. The protocol is also resilient to the growing amplitude of the sine curve, as pictured by the dynamic evolution following time step 10. The convergence metric expresses the decreasing difference between the compliant nodes. On the other hand, a lack of convergence is seen when no consensus protocol is implemented. Although compliant nodes converge towards one another, their state value is highly impacted by the fluctuations of the disruptive nodes. The results of the simulation of the 15-node network are shown in Fig. 3. The first column shows the opinion trajectories in Fig. 3 (a) and (c), the top right figure (b) shows the network topology, and the bottom right figure (d) shows
Fig. 3. Comparing the result of the Clusters Consensus Protocol vs No Protocol for the 15-Node (3,2)-robust Network with 2 Disruptive Agents. On the left, the Opinion Trajectory of the nodes over time is shown. On the top right is the network diagram (disruptive agents shown in Blue), and on the bottom right is the Convergence Metric of the Compliant (non-Disruptive) Nodes calculated as the normalised difference between Compliant Nodes
the convergence metrics. The disruptive behaviours tested are a sine wave with a large period for node 4 and a noise function for node 12, with the same parameters being implemented for the Protocol and Non-Protocol cases. Once again, despite the presence of disruptive nodes, the system reaches consensus within 20 time steps and remains stable (unaffected by the disruptions) thereafter. Much like for the 7-node system, in the absence of a consensus protocol the network is observed to fail to reach convergence.
6 Conclusion
Social dynamics frameworks can be used to engineer disruption-tolerant consensus protocols requiring no knowledge of either the network topology or the number of disruptive agents in the system, in contrast to MSR-type methods. The proposed protocol, the Clusters Consensus Protocol, was shown to efficiently reach consensus in spite of disruptive agents. Through its inversion of the Deffuant
dynamics with emotions, disruptive behaviour was ignored across a range of test cases, enabling compliant agents to reach consensus in a small number of time steps, and thus confirming the protocol as disruption-tolerant.
References

1. Ding, D., Han, Q.-L., Xiang, Y., Ge, X., Zhang, X.-M.: A survey on security control and attack detection for industrial cyber-physical systems. Neurocomputing 275, 1674–1683 (2018)
2. Sakavalas, D., Tseng, L.: Network topology and fault-tolerant consensus. Synth. Lect. Distrib. Comput. Theory 9(1), 1–151 (2019)
3. Al-Kuwaiti, M., Kyriakopoulos, N., Hussein, S.: Network dependability, fault-tolerance, reliability, security, survivability: a framework for comparative analysis. In: 2006 International Conference on Computer Engineering and Systems, pp. 282–287. IEEE (2006)
4. LeBlanc, H.J., Zhang, H., Koutsoukos, X., Sundaram, S.: Resilient asymptotic consensus in robust networks. IEEE J. Sel. Areas Commun. 31(4), 766–781 (2013)
5. Su, L., Vaidya, N.H.: Reaching approximate byzantine consensus with multi-hop communication. Inf. Comput. 255, 352–368 (2017)
6. Deffuant, G., Jager, W., Moss, W.: Dialogues concerning a (possibly) new science. J. Artif. Soc. Soc. Simul. 9(1) (2006)
7. Mathias, J.-D., Huet, S., Deffuant, G.: Bounded confidence model with fixed uncertainties and extremists: the opinions can keep fluctuating indefinitely. J. Artif. Soc. Soc. Simul. 19(1), 6 (2016)
8. Deffuant, G., Amblard, F., Weisbuch, G., Faure, T.: How can extremism prevail? A study based on the relative agreement interaction model. J. Artif. Soc. Soc. Simul. 5(4) (2002)
9. Guarino, S., Trino, N., Celestini, A., Chessa, A., Riotta, G.: Characterizing networks of propaganda on Twitter: a case study. Appl. Netw. Sci. 5(1), 1–22 (2020)
10. Sobkowicz, P.: Extremism without extremists: Deffuant model with emotions. Front. Phys. 3, 17 (2015)
11. Araque, O., Iglesias, C.A.: An approach for radicalization detection based on emotion signals and semantic similarity. IEEE Access 8, 17877–17891 (2020)
12. Berghel, H.: Malice domestic: the Cambridge Analytica dystopia. Computer 51(05), 84–89 (2018)
13. Pierri, F., Artoni, A., Ceri, S.: Investigating Italian disinformation spreading on Twitter in the context of 2019 European elections. PLoS ONE 15(1), e0227821 (2020)
14. Castellano, C., Fortunato, S., Loreto, V.: Statistical physics of social dynamics. Rev. Mod. Phys. 81(2), 591 (2009)
15. Xie, J., Sreenivasan, S., Korniss, G., Zhang, W., Lim, C., Szymanski, B.K.: Social consensus through the influence of committed minorities. Phys. Rev. E 84(1), 011130 (2011)
16. Ferrari, R.M., Teixeira, A.M.: Safety, Security and Privacy for Cyber-Physical Systems. Springer (2021)
17. Shang, Y.: Median-based resilient consensus over time-varying random networks. IEEE Trans. Circuits Syst. II Express Briefs 69(3), 1203–1207 (2021)
Higher-Order Temporal Network Prediction

Mathieu Jung-Muller, Alberto Ceria, and Huijuan Wang(B)

Delft University of Technology, Mekelweg 4, 2628 CD Delft, Netherlands
[email protected]
Abstract. A social interaction (a so-called higher-order event/interaction) can be regarded as the activation of the hyperlink among the corresponding individuals. Social interactions can thus be represented as higher-order temporal networks, which record the higher-order events occurring at each time step over time. The prediction of higher-order interactions is usually overlooked in traditional temporal network prediction methods, where a higher-order interaction is regarded as a set of pairwise interactions. We propose a memory-based model that predicts the higher-order temporal network (or events) one step ahead based on the network observed in the past, and a baseline utilizing a pairwise temporal network prediction method. In eight real-world networks, we find that our model consistently outperforms the baseline. Importantly, our model reveals how past interactions of the target hyperlink and of different types of hyperlinks that overlap with the target hyperlink contribute to the prediction of the activation of the target link in the future.
Keywords: higher-order · temporal network · network prediction · network memory

1 Introduction
Temporal networks have been used to represent complex systems with time-varying network topology, where a link between two nodes is activated only when the node pair interacts [13,20]. This classic temporal network representation assumes interactions to be pairwise. Social contacts/interactions have mostly been studied as pairwise temporal networks. However, contacts/interactions can be beyond pairwise [2,3]. Individuals may interact in groups [22]. A collaboration on a scientific paper may engage more than two authors. Such interactions that involve an arbitrary number of nodes are called higher-order interactions/events. Social contacts are thus better represented by higher-order temporal networks. The classic temporal network prediction problem aims to predict pairwise contacts one time step ahead based on the network observed in the past. Predicting a temporal network in the future enables better forecast and mitigation of the spread of epidemics or misinformation on the network. This prediction
problem is also equivalent to problems in recommender systems: e.g., predicting which user will purchase which product, or which individuals will become acquaintances at the next time step [1,18]. Methods have been proposed for pairwise temporal network prediction. Some rely on network embeddings: nodes are represented as points in a low-dimensional space, where connected nodes are supposed to be closer in this embedding space [25]. Alternatively, deep learning methods have also been proposed [15], for instance, using LSTM methods [9] or adversarial networks [8]. However, these methods come at the expense of high computational costs and are limited in providing insights regarding which network mechanisms enable network prediction. Methods have also been proposed to predict whether a set of nodes will have at least one group interaction in the future [4,16,21] and when the first group interaction among these nodes occurs [17]. In this paper, we aim to predict higher-order temporal networks, i.e., higher-order interactions, one time step ahead, based on the higher-order temporal network observed in the past of a duration L, and to understand which network properties and which types of previous interactions enable the prediction. Firstly, we explore the memory property of higher-order temporal networks, i.e., to what extent the network topology remains similar over time. Furthermore, we propose a memory-based model to solve the prediction problem utilizing the observed memory property. This model is a generalization to higher order of the pairwise model proposed in [26]. Our model assumes that the activity (interacting or not) of a group at the next time step is influenced by the past activity of this target group and of other groups that form a subset or a superset of the target group. Furthermore, the influence of recent events is considered more impactful than the influence of older events. In the prediction problem, we assume the total number of events of each group size (order) at the prediction time step is known, and the groups, each of which interact at least once in the network (in the past or future), are known. These assumptions aim to simplify the problem. Beyond the network observed in the past, the total number of interactions of each order could be influenced by factors like weather and holidays. The latter assumption means that group friendship is known and we aim to predict which groups with group friendship interact at the next time step. We also propose a baseline model that uses a memory-based pairwise temporal network prediction method [26]: it considers the higher-order temporal network observed in the past as a pairwise temporal network, predicts the pairwise temporal network at the next time step, and deduces higher-order interactions from the predicted pairwise interactions at the same prediction time step. Our model consistently outperforms this baseline, as evaluated in eight real-world physical contact networks. We find that the past activity of the target group is the most important factor for the prediction. Moreover, the past activity of groups of a large size has, in general, a lower impact on the prediction of events of the target group than groups of a small size.
2 Network Representation
A pairwise temporal network G measured at discrete times can be represented as a sequence of network snapshots G = {G_1, G_2, ..., G_T}, where T is the duration of the observation window and G_t = (V, E_t) is the snapshot at time step t, with V and E_t being the set of nodes and interactions, respectively. If nodes a and b have a contact at time step t, then (a, b) ∈ E_t. Here, we assume all snapshots share the same set of nodes V. The set of links in the time-aggregated network is defined as E = ∪_{t=1}^{T} E_t. A pair of nodes is connected by a link in the aggregated network if at least one contact occurs between them in the temporal network. The temporal connection or activity of link i over time can be represented by a T-dimensional vector x_i whose elements are x_i(t), where t ∈ [1, T], such that x_i(t) = 1 when link i has a contact at time step t and x_i(t) = 0 if no contact occurs at t. A temporal network can thus be equivalently represented by its aggregated network, where each link i is further associated with its activity time series x_i.
Social interactions, which may involve more than two individuals, can be more accurately represented as a higher-order temporal network H, which is a sequence of network snapshots H = {H_1, ..., H_T}, where H_t = (V, E_t) is the snapshot at time step t, with V being the set of nodes shared by all snapshots and E_t the set of hyperlinks that are activated at time step t. The activation of a hyperlink (u_1, ..., u_d) at time step t corresponds to a group interaction among nodes u_1, ..., u_d at time step t. The hyperlink (u_1, ..., u_d) active at time step t is called an event or interaction, and its size is d. If h_1 ⊂ h_2, i.e., h_1 is included in h_2, we call h_2 a super-hyperlink of h_1 and h_1 a sub-hyperlink of h_2. The set of hyperlinks in the higher-order time-aggregated network is defined as E = ∪_{t=1}^{T} E_t. A hyperlink belongs to E if it is activated at least once in the temporal network. A higher-order temporal network can thus be equivalently represented by its higher-order aggregated network, where each hyperlink i is further associated with its activity time series x_i.
3 Datasets
To design and evaluate our network prediction methods, we consider eight empirical physical contact networks from the SocioPatterns project (sociopatterns.org). They are collections of face-to-face interactions at a distance smaller than 2 m, in social contexts such as study places (Highschool2012 [10], Highschool2013 [19], Primaryschool [11,23]), conferences (SFHH Conference [12], Hypertext2009 [14]), workplaces (Hospital [24], Workplace [12]) or an art gallery (Science Gallery [14]). These face-to-face interactions are recorded as a set of pairwise interactions. Based on them, group interactions are deduced by promoting every fully connected clique of d(d−1)/2 pairwise contacts happening at the same time step to an event of size d occurring at this time step, as sketched in the code example at the end of this section. Since a clique of order d contains all its sub-cliques of order d' < d, only the maximal clique is promoted to a higher-order event, whereas sub-cliques are ignored. This method has been used in [5,7]. The datasets are also preprocessed by removing nodes not connected to
the largest connected component in the pairwise time-aggregated network and long periods of inactivity, when no event occurs in the network. Such periods usually correspond, e.g., to nights and weekends, and are recognized as outliers. This corresponds to the preprocessing done in [6,7]. The total number of events of each order in our datasets, after preprocessing, is shown in Table 1.

Table 1. Number of events of every order for every dataset after preprocessing.

Dataset            Order 2   Order 3   Order 4   Order 5   Order 6+
Science Gallery    12770     77        7         0         0
Hospital           25487     2265      81        2         0
Highschool 2012    40671     1339      91        4         0
Highschool 2013    163973    7475      576       7         0
Primaryschool      97132     9262      471       12        0
Workplace          71529     2277      14        0         0
Hypertext 2009     18120     874       31        12        4
SFHH Conference    48175     5057      617       457       199
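The clique-promotion step described at the start of this section can be sketched with NetworkX, whose find_cliques routine enumerates maximal cliques (a minimal sketch; the function name and the toy contacts are illustrative):

    import networkx as nx

    def events_from_contacts(contacts_at_t):
        # promote each maximal fully connected clique of simultaneous
        # pairwise contacts to a single higher-order event; sub-cliques
        # are ignored because only maximal cliques are returned
        g = nx.Graph()
        g.add_edges_from(contacts_at_t)
        return [tuple(sorted(c)) for c in nx.find_cliques(g)]

    # e.g. contacts (1,2), (2,3), (1,3), (4,5) yield events (1,2,3) and (4,5)
    print(events_from_contacts([(1, 2), (2, 3), (1, 3), (4, 5)]))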
4 Network Memory Property
Zou et al. observed properties of time-decaying memory in pairwise temporal networks. This means that different snapshots of the network share certain similarities. These properties were used to better predict pairwise interactions [26]. Inspired by this, we also want to know whether higher-order temporal networks have memory at different orders and whether this property can be used to predict higher-order events in temporal networks. Therefore, we examine the Jaccard similarity of the network at two different time steps and for every order. The Jaccard similarity measures how similar two given sets are by taking the ratio of the size of the intersection set over the size of the union set. In our case, we compute, for every order n, the Jaccard similarity

|E^n_{t1} ∩ E^n_{t2}| / |E^n_{t1} ∪ E^n_{t2}|

between the set E^n_{t1} of n-hyperlinks (hyperlinks of order n) active at a time step t1 and the set E^n_{t2} of n-hyperlinks active at a time step t2. The difference t2 − t1 is called the time lag. As shown in Fig. 1, the similarity decays as the time lag increases for orders 2, 3, and 4, respectively, in all datasets. The time-decaying memory at order 5 has been observed only in SFHH, the only network that has a non-negligible number of order-5 events, as shown in Table 1.
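The measurement behind Fig. 1 reduces to a Jaccard similarity between sets of active hyperlinks; a sketch, with hyperlinks represented as frozensets of node identifiers (names are illustrative, and averaging over all start times per lag is one possible convention, assumed here):

    def jaccard(a, b):
        # |A ∩ B| / |A ∪ B| between two sets of hyperlinks
        union = a | b
        return len(a & b) / len(union) if union else 0.0

    def memory_curve(active, order, max_lag):
        # active[t] is the set of hyperlinks (frozensets) active at step t;
        # returns the average Jaccard similarity per time lag for one order
        T = len(active)
        sets = [{h for h in active[t] if len(h) == order} for t in range(T)]
        return [sum(jaccard(sets[t], sets[t + lag]) for t in range(T - lag)) / (T - lag)
                for lag in range(1, max_lag + 1)]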
Fig. 1. Jaccard similarities of a network at two time steps for each order as a function of the time lag in eight real-world physical contact networks.
5 Models

5.1 Baseline

We propose a baseline for higher-order temporal network prediction utilizing the following pairwise network prediction model, called the Self-Driven (SD) model, proposed in [26]. The SD model is a memory-based model that predicts a pairwise link's activity at the next time step based on its past activity. The SD model estimates the tendency w_j(t + 1) for each link j to be active at time t + 1 as:

w_j(t + 1) = Σ_{k=t−L+1}^{t} x_j(k) e^{−τ(t−k)},

where t + 1 is the prediction time step, L is the length of the past observation used for the prediction, τ is the exponential decay factor, and x_j(k) is the activation state of link j at time k: x_j(k) = 1 if link j is active at time k and x_j(k) = 0 otherwise. The activation tendency is computed for each link in the pairwise aggregated network, which is given. Given the number n^2_{t+1} of pairwise contacts occurring at t + 1, the n^2_{t+1} links with the highest activation tendency at t + 1 will be predicted to have contacts at t + 1.
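A sketch of the SD tendency and the ranking step, with link activities stored as a 0/1 matrix (NumPy assumed; names are illustrative):

    import numpy as np

    def sd_tendency(x, t, L, tau):
        # w_j(t+1) for every link j: exponentially discounted sum of the
        # past L activity values; x is a (num_links, T) 0/1 matrix
        ks = np.arange(t - L + 1, t + 1)
        weights = np.exp(-tau * (t - ks))        # recent steps weigh more
        return x[:, ks] @ weights

    def predict_links(x, t, L, tau, n_contacts):
        # predict the n_contacts links with the highest tendency at t+1
        w = sd_tendency(x, t, L, tau)
        return np.argsort(w)[::-1][:n_contacts]

The baseline then turns the predicted pairwise contacts at t + 1 into higher-order events with the same clique promotion as in Sect. 3.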
We propose a baseline model that firstly considers the higher-order temporal network observed in the past of duration L as a pairwise temporal network, then applies the pairwise memory-based model to predict pairwise interactions at the prediction step, and finally deduces higher-order interactions from the predicted pairwise interactions, using the same method that promotes interactions forming a clique to a higher-order event (see Sect. 3). This set of higher-order interactions is considered the prediction made by the baseline model.
5.2 Generalized Model
The time-decaying memory is also observed at different orders in the higher-order temporal networks. This motivates us to generalize the SD model for higher-order network prediction. The essence of the generalized model is that the future activity of a hyperlink should be dependent on its past activity. Furthermore, it has been shown that events of different orders that occur close in time tend to overlap in component nodes [7]. Hence, the activity of a hyperlink is also supposed to be dependent on the past activity of its sub-hyperlinks and super-hyperlinks. Finally, recent events should have more influence than older events, based on the observed time-decaying memory. Therefore, we propose the activation tendency of a hyperlink j at a prediction time step t + 1 as:

w_j(t + 1) = Σ_{k=t−L+1}^{t} Σ_{i∈S_j} c_{d_i d_j} x_i(k) e^{−τ(t−k)},

where L is the length in time of the network observed in the past used for the prediction, τ is the exponential decay factor, S_j is the set of hyperlinks comprising j itself and those that are either sub-hyperlinks or super-hyperlinks of j, and x_i(k) is the activation state of hyperlink i at time k. c_{d_i d_j} is the coefficient of cross-order influence, for which d_i is the size of hyperlink i and d_j is the size of hyperlink j. For instance, c_{32} is the coefficient associated with the influence of the activation of a 3-hyperlink on the activation of one of its sub-hyperlinks of size 2. We put c_{dd} = 1 for any arbitrary hyperlink size d. Different sub-models of our general model are obtained by varying the values of the cross-order coefficients c_{d_1 d_2} for d_1 ≠ d_2. The activation tendency is computed for each hyperlink in the higher-order aggregated network, which is given in the prediction problem. Given the number n^o_{t+1} of events of each order o at the prediction step t + 1, the n^o_{t+1} hyperlinks of order o with the highest activation tendency at t + 1 are predicted to be active.
6 Model Evaluation

6.1 Network Prediction Quality

We aim to predict the higher-order temporal network at time step t + 1 based on the network observed between t − L + 1 and t. To evaluate the proposed
prediction methods, every time step within [L + 1, T] is chosen as a possible prediction step, i.e., t + 1 ∈ [L + 1, T], where T is the global lifespan of each empirical network. The prediction quality of a given model for events of an arbitrary order is evaluated via the prediction accuracy: the ratio between the total number of true positives (correctly predicted events) over all possible prediction steps [L + 1, T] for that order and the total number of events of that order that occur within [L + 1, T].
6.2 Parameter Choice of the Generalized Model
Since events of orders higher than 4 are few in number in the real-world physical contact networks considered, we focus on the prediction of events of orders 2, 3, and 4, respectively, based on the previous activities of events of orders 2, 3, and 4. For every order (e.g., order 3), we let its associated pair of cross-order coefficients (e.g., c_{23}, c_{43}) take all possible values in {0.0, 0.1, ..., 1.0} × {0.0, 0.1, ..., 1.0}; a sketch of this grid search is given below. Cross-order coefficients larger than 1 or smaller than 0 lead to lower prediction quality in general and are, therefore, not considered. We choose the duration of the past network observation L = 30 for prediction, which is equivalent to 600 s in our real-world physical contact networks (the duration of each time step is 20 s). This choice was found by comparing the accuracy of the prediction for different values of L between 1 and T/2 and for different values of τ between 0.25 and 1. Because a small L means low computational complexity, we identify L = 30 as the smallest L that does not lead to an evident drop in prediction quality compared with L = T/2. We compared the prediction performance for different values of the decay factor τ, where τ ∈ {0, 0.25, ..., 1.5}. When τ = 0, i.e., when previous interactions contribute equally to the connection tendency of a hyperlink independent of when these interactions occur, the model performs the worst at every order for all datasets. This is in line with the time-decaying memory we have observed. In general, the choice of τ ∈ [0.25, 1.5] has little influence on the performance. We will focus on the performance analysis of the generalized model in comparison with the baseline model for τ = 0.25 in this paper, since other choices of τ lead to the same observation.
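The coefficient selection referred to above is a plain grid search over the eleven-by-eleven coefficient grid; a sketch for order 3, where accuracy_order3 is a hypothetical stand-in for the full evaluation of Sect. 6.1 (stubbed with a random value here so the snippet runs):

    import itertools
    import random

    def accuracy_order3(c23, c43):
        # hypothetical stand-in: run the order-3 prediction over all steps
        # with the given pair and return true positives / number of events
        return random.random()

    grid = [k / 10 for k in range(11)]          # {0.0, 0.1, ..., 1.0}
    c23_opt, c43_opt = max(itertools.product(grid, grid),
                           key=lambda p: accuracy_order3(*p))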
Performance Analysis
For every order 2, 3, or 4, we compute the prediction accuracy obtained by our generalized model with any pair of cross-order coefficients in {0.0, 0.1, ..., 1.0} × {0.0, 0.1, ..., 1.0} and identify the best performance achieved by choosing the optimal coefficient pair. This best performance, referred as the prediction accuracy of the model is further compared with the prediction quality of the baseline. As shown in Table 2, our generalized model performs overall better than the baseline for the interaction prediction for any order and in each network. The outperformance is more evident at orders 3 and 4. For the prediction of events of a given order, the two corresponding crossorder coefficients of the generalized model affect the prediction accuracy, as
468
M. Jung-Muller et al.
Table 2. Prediction accuracy of the generalized model and the baseline (in parentheses) per order for every dataset. The prediction accuracy of the generalized model is in bold if it is larger than that of the baseline model . Dataset
Order 2
Science Gallery
0.33 (0.33) 0.23 (0.16) 0.57 (0.22)
Order 3
Order 4
Hospital
0.53 (0.52) 0.50 (0.32) 0.72 (0.17)
Highschool2012
0.56 (0.55) 0.50 (0.38) 0.67 (0.19)
Highschool2013
0.61 (0.61) 0.40 (0.34) 0.61 (0.36)
Primaryschool
0.32 (0.32) 0.19 (0.16) 0.33 (0.09)
Workplace
0.57 (0.56) 0.49 (0.30) 0.50 (0.07)
Hypertext2009
0.50 (0.50) 0.52 (0.34) 0.68 (0.10)
SFHH Conference 0.53 (0.52) 0.45 (0.38) 0.58 (0.39)
shown in Fig. 2 and 3. We will explore which coefficient ranges tend to lead to optimal prediction accuracy. This will enable us to understand how events of different types of hyperlinks (super- and sub-hyperlinks) in the past contribute to the prediction of the event of a target hyperlink. Take the prediction of events of order 3 as an example. Figure 2 shows that close to optimal prediction precision is obtained approximately when c23 ∈ {0.1, ..., 0.4} < c33 = 1. This means the interaction of a sub-hyperlink of order 2 in the past leads to a lower activation tendency of the target hyperlink of order 3 compared to the interaction of the target hyperlink occurring at the same time in the past. The influence of c43 on the prediction quality is not evident, likely due to the small number of events of order 4. For events of order 2, the prediction quality tends to be optimal when c32 , c42 ∈ {0.1, ..., 0.4}, though their influence on prediction quality is less evident, as partially shown in Fig. 3. For order 4 (see also Fig. 3), c24 and c34 affect the prediction quality evidently and c24 ∈ {0.1, ..., 0.4} and c34 ∈ {0.6, ..., 0.9} tend to give rise to close to optimal prediction precision. To achieve the optimal prediction quality, c24 < c34 < c44 = 1, and c23 < c33 . This means that the activation of a hyperlink that overlaps more with the target hyperlink in nodes implies a higher activation tendency of the target link in the future. In general, the activation of superand sub-hyperlinks all contribute to the activation of the target hyperlink in the future, since cross-order coefficients zero lead to worse prediction precision. However, the choice of the contribution of a super-hyperlink activation (i.e., cdi dj when di > dj ) in predicting the activity of target hyperlink j does not affect the prediction quality evidently, likely because of the relatively small number of activations of a super-hyperlink compared to the number of activations of the target link.
Higher-Order Temporal Network Prediction
469
Fig. 2. Precision (ratio of true positives) of predicting events of order 3 as a function of c23 , with c43 fixed, for all datasets.
470
M. Jung-Muller et al.
Fig. 3. Precision (ratio of true positives) of predicting events of order 2 or 4 as a function of one coefficient, with the other coefficient fixed, for two datasets.
7
Conclusion and Discussion
Here, we propose a network-based higher-order temporal network prediction model that estimates the activation tendency of each hyperlink at the next time step based on the past activity of this hyperlink and of its sub- and superhyperlinks. Events that occur earlier contribute less to the connection tendency. Our model was shown to perform consistently better than the baseline directly derived from a pairwise prediction method. Using the previous activities of the target link itself, its sub- and super-hyperlinks together enable better prediction precision than considering only the activity of the target link itself, which is though the most influential factor. A past event of a sub-hyperlink that overlap more with the target hyperlink in nodes suggests a higher activation tendency of the target link. The contributions of super-hyperlinks does not affect the prediction quality evidently, likely because of the relatively small number of events of super-hyperlinks in comparison with that of the target hyperlink. We have focused on higher-order social contact networks that are derived from measurement of pairwise interactions. These networks have the property that a hyperlink and its sub-hyperlink are never activated at the same time, a property shared by the higher-order network predicted by the baseline model. It would be interesting to explore the proposed methods and better methods to predict other types of higher-order temporal networks, that may not have this property nor the memory property. The baseline model could be improved by, e.g., using the total number of events of each order in the prediction step, which has already been utilized by our generalized model.
Higher-Order Temporal Network Prediction
471
Acknowlegement. We thank for the support of Netherlands Organisation for Scientific Research NWO (TOP Grant no. 612.001.802, project FORT-PORT no. KICH1.VE03.21.008) and NExTWORKx, a collaboration between TU Delft and KPN on future telecommunication networks.
References 1. Aleta, A., Tuninetti, M., Paolotti, D., Moreno, Y., Starnini, M.: Link prediction in multiplex networks via triadic closure. Phys. Rev. Res. 2(4), 042029 (2020) 2. Battiston, F.: Networks beyond pairwise interactions: structure and dynamics. Phys. Rep. 874, 1–92 (2020) 3. Battiston, F., et al.: The physics of higher-order interactions in complex systems. Nat. Phys. 17(10), 1093–1098 (2021) 4. Benson, A.R., Abebe, R., Schaub, M.T., Jadbabaie, A., Kleinberg, J.: Simplicial closure and higher-order link prediction. Proc. Natl. Acad. Sci. 115(48), E11221– E11230 (2018) 5. Cencetti, G., Battiston, F., Lepri, B., Karsai, M.: Temporal properties of higherorder interactions in social networks. Sci. Rep. 11(1), 7028 (2021) 6. Ceria, A., Havlin, S., Hanjalic, A., Wang, H.: Topological–temporal properties of evolving networks. J. Complex Netw. 10(5), cnac041 (2022) 7. Ceria, A., Wang, H.: Temporal-topological properties of higher-order evolving networks. Sci. Rep. 13(1), 5885 (2023) 8. Chen, J., Lin, X., Jia, C., Li, Y., Wu, Y., Zheng, H., Liu, Y.: Generative dynamic link prediction. Chaos: Interdi. J. Nonlinear Sci. 29(12), 123111 (2019) 9. Chen, J., et al.: E-LSTM-D: a deep learning framework for dynamic network link prediction. IEEE Trans. Syst. Man Cybernet. Syst. 51(6), 3699–3712 (2019) 10. Fournet, J., Barrat, A.: Contact patterns among high school students. PLoS ONE 9(9), e107878 (2014) 11. Gemmetto, V., Barrat, A., Cattuto, C.: Mitigation of infectious disease at school: targeted class closure vs school closure. BMC Infect. Dis. 14(1), 1–10 (2014) 12. G´enois, M., Barrat, A.: Can co-location be used as a proxy for face-to-face contacts? EPJ Data Sci. 7(1), 1–18 (2018) 13. Holme, P., Saram¨ aki, J.: Temporal networks. Phys. Rep. 519(3), 97–125 (2012) 14. Isella, L., Stehl´e, J., Barrat, A., Cattuto, C., Pinton, J.F., Van den Broeck, W.: What’s in a crowd? analysis of face-to-face behavioral networks. J. Theor. Biol. 271(1), 166–180 (2011) 15. Li, X., Du, N., Li, H., Li, K., Gao, J., Zhang, A.: A deep learning approach to link prediction in dynamic networks. In: Proceedings of the 2014 SIAM International Conference on Data Mining, pp. 289–297. SIAM (2014) 16. Liu, B., Yang, R., L¨ u, L.: Higher-order link prediction via local information. Chaos 1 33(8), 083108 (2023) 17. Liu, Y., Ma, J., Li, P.: Neural predicting higher-order patterns in temporal networks. In: Proceedings of the ACM Web Conference 2022, pp. 1340–1351 (2022) 18. L¨ u, L., Medo, M., Yeung, C.H., Zhang, Y.C., Zhang, Z.K., Zhou, T.: Recommender systems. Phys. Rep. 519(1), 1–49 (2012) 19. Mastrandrea, R., Fournet, J., Barrat, A.: Contact patterns in a high school: a comparison between data collected using wearable sensors, contact diaries and friendship surveys. PLoS ONE 10(9), e0136497 (2015) 20. Masuda, N., Lambiotte, R.: A guide to temporal networks. World Sci. (2016)
472
M. Jung-Muller et al.
21. Piaggesi, S., Panisson, A., Petri, G.: Effective higher-order link prediction and reconstruction from simplicial complex embeddings. In: Learning on Graphs Conference, pp. 55–1. PMLR (2022) 22. Sekara, V., Stopczynski, A., Lehmann, S.: Fundamental structures of dynamic social networks. Proc. Natl. Acad. Sci. 113(36), 9977–9982 (2016) 23. Stehl´e, J., et al.: High-resolution measurements of face-to-face contact patterns in a primary school. PLoS ONE 6(8), e23176 (2011) 24. Vanhems, P., et al.: Estimating potential infection transmission routes in hospital wards using wearable proximity sensors. PloS One 8(9), e73970 (2013) 25. Zhou, Y., Pei, Y., He, Y., Mo, J., Wang, J., Gao, N.: Dynamic graph link prediction by semantic evolution. In: ICC 2019-2019 IEEE International Conference on Communications (ICC), pp. 1–6. IEEE (2019) 26. Zou, L., Wang, A., Wang, H.: Memory based temporal network prediction. In: Complex Networks and Their Applications XI: Proceedings of The Eleventh International Conference on Complex Networks and their Applications: COMPLEX NETWORKS 2022, vol. 2, pp. 661–673. Springer (2023). https://doi.org/10.1007/ 978-3-031-21131-7 51
Author Index
A
Agarwal, Nitin 340, 401, 412
Aktas, Mehmet S. 156
Al Jurdi, Wissam 205
Alassad, Mustafa 340, 412
Anand, Vivek 251
Auten, Trevor 265

B
Bosselut, Gregoire 427
Bou Abdo, Jacques 205
Bouchaud, Paul 131
Bouis, Agathe 449
Bouleimen, Azza 316
Bracy, Emerson 67

C
Carley, Kathleen M. 328
Carley, L. Richard 328
Ceria, Alberto 461
Chaudron, Jean-Baptiste 104
Clark, Ruaridh 449
Cresci, Stefano 316
Crespi, Noel 192

D
de Meer, Hermann 219
de Melo, Lucas Eduardo Araújo 243
Deinert, Mark 243
Demerjian, Jacques 205
Dhamo, Xhilda 427
Dovrolis, Constantine 251
Dray, Gérard 427
Dutta, Animesh 388
Duzen, Zafer 156

E
Emeric, Auriant 143
Espinet, Xavier 243

F
Farahbakhsh, Reza 192
Fionda, Valeria 378

G
Ghasemi, Abdorasoul 219
Giordano, Silvia 316, 354

H
Hendrikse, Sophie 41
Hendrikse, Sophie C. F. 53
Hosseini, Mojgan 91
Huang, Emily Chao-Hui 291

I
Islam, Shafiqul 180
Iyer, Shankar 301

J
Janaqi, Stefan 427
Jin, Zhengyang 27
Jung-Muller, Mathieu 461

K
Kalluçi, Eglantina 427
Kantz, Holger 219
Karastoyanova, Dimka 366
Kejriwal, Mayank 16
Kimura, Masahiro 3, 231
Knox, Catherine 180
Kovantsev, Anton 80
Kucharska, Wioleta 91
Kumano, Masahito 3, 231

L
L'Her, Guillaume 243
Lassila, Henrik 67
Liu, Weiru 168
Livraga, Giovanni 277
Lowe, Christopher 449

M
Macdonald, Malcolm 449
Magué, Jean-Philippe 104
Makhoul, Abdallah 205
Matta, John 265
Mattera, Francesco 53
Mbila-Uma, Stella 412
McAreavey, Kevin 168
McConville, Ryan 168
Medhat, Ahmed 301
Mnassri, Khouloud 192
Mukeriia, Yelyzaveta 41

N
Nadgir, Nitya 180
Namtirtha, Amrita 388
Nasirian, Sara 354
Nogara, Gianluca 354
Noor, Nahiyan Bin 401
Nwana, Lotenna 340

O
Olzojevs, Artjoms 277

P
Pagan, Nicolò 316
Perrey, Stephane 427
Phoa, Frederick Kin Hing 291
Polycarpou, Marios 251
Pramov, Aleksandar 251

R
Rabb, Nicholas 180
Reveille, Coralie 427
Riveni, Mirela 156, 366

S
Sabo, Eduard 366
Saha, Nilanjana 388
Saito, Masaki 231
Schweikert, Amy 243
Shen, Ke 16
Sokoli, Arnisa 427
Spann, Billy 401
Sugawara, Toshiharu 116

T
Takai, Chikashi 3
Toriumi, Fujio 116
Treur, Jan 41, 53, 67, 91

U
Ueki, Shintaro 116
Umoga, Ifeanyichukwu 412
Urman, Aleksandra 316

V
Victor, Chomel 143
Vigier, Denis 104
Viviani, Marco 277
Vrachimis, Stelios 251

W
Wang, Huijuan 461
Wang, Lei 438
Wang, Wei 438
Wu, Yiwen 168

Y
Yousefi, Niloofar 401

Z
Zhang, Shuyuan 438