Philosophical Studies Series
Francesca Mazzi Luciano Floridi Editors
The Ethics of Artificial Intelligence for the Sustainable Development Goals
Philosophical Studies Series Volume 152
Editor-in-Chief
Mariarosaria Taddeo, Oxford Internet Institute, University of Oxford, Oxford, UK

Advisory Editors
Lynne Baker, Department of Philosophy, University of Massachusetts, Amherst, USA
Stewart Cohen, Arizona State University, Tempe, AZ, USA
Radu Bogdan, Department of Philosophy, Tulane University, New Orleans, LA, USA
Marian David, Karl-Franzens-Universität, Graz, Austria
John Fischer, University of California, Riverside, Riverside, CA, USA
Keith Lehrer, University of Arizona, Tucson, AZ, USA
Denise Meyerson, Macquarie University, Sydney, Australia
Francois Recanati, Ecole Normale Supérieure, Institut Jean Nicod, Paris, France
Mark Sainsbury, University of Texas at Austin, Austin, TX, USA
Barry Smith, State University of New York at Buffalo, Buffalo, NY, USA
Linda Zagzebski, Department of Philosophy, University of Oklahoma, Norman, OK, USA
Philosophical Studies Series aims to provide a forum for the best current research in contemporary philosophy broadly conceived, its methodologies, and applications. Since Wilfrid Sellars and Keith Lehrer founded the series in 1974, the book series has welcomed a wide variety of different approaches, and every effort is made to maintain this pluralism, not for its own sake, but in order to represent the many fruitful and illuminating ways of addressing philosophical questions and investigating related applications and disciplines. The book series is interested in classical topics of all branches of philosophy including, but not limited to:
• Ethics
• Epistemology
• Logic
• Philosophy of language
• Philosophy of logic
• Philosophy of mind
• Philosophy of religion
• Philosophy of science
Special attention is paid to studies that focus on:
• the interplay of empirical and philosophical viewpoints
• the implications and consequences of conceptual phenomena for research as well as for society
• philosophies of specific sciences, such as philosophy of biology, philosophy of chemistry, philosophy of computer science, philosophy of information, philosophy of neuroscience, philosophy of physics, or philosophy of technology; and
• contributions to the formal (logical, set-theoretical, mathematical, information-theoretical, decision-theoretical, etc.) methodology of sciences.
Likewise, the applications of conceptual and methodological investigations to applied sciences as well as social and technological phenomena are strongly encouraged. Philosophical Studies Series welcomes historically informed research, but privileges philosophical theories and the discussion of contemporary issues rather than purely scholarly investigations into the history of ideas or authors. Besides monographs, Philosophical Studies Series publishes thematically unified anthologies, selected papers from relevant conferences, and edited volumes with a well-defined topical focus inside the aim and scope of the book series. The contributions in the volumes are expected to be focused and structurally organized in accordance with the central theme(s), and are tied together by an editorial introduction. Volumes are completed by extensive bibliographies. The series discourages the submission of manuscripts that contain reprints of previously published material and/or manuscripts that are below 160 pages/88,000 words. For inquiries and submission of proposals authors can contact the editor-in-chief Mariarosaria Taddeo via: [email protected]
Francesca Mazzi • Luciano Floridi Editors
The Ethics of Artificial Intelligence for the Sustainable Development Goals
Editors Francesca Mazzi Saïd Business School University of Oxford Oxford, UK
Luciano Floridi Oxford Internet Institute University of Oxford Oxford, UK
ISSN 0921-8599 ISSN 2542-8349 (electronic) Philosophical Studies Series ISBN 978-3-031-21146-1 ISBN 978-3-031-21147-8 (eBook) https://doi.org/10.1007/978-3-031-21147-8 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents
Part I  AIxSDGs: Theory and Governance
Introduction: Understanding the Ethics of Artificial Intelligence for the Sustainable Development Goals (Francesca Mazzi and Luciano Floridi) ... 3
AI in Support of the SDGs: Six Recurring Challenges and Related Opportunities Identified Through Use Cases (Francesca Mazzi, Mariarosaria Taddeo, and Luciano Floridi) ... 9
Joined Up Thinking on How AI Can Contribute to the SDGs (Geoff Mulgan) ... 35
A Realist's Account of AI for SDGs: Power, Inequality and AI in Community (Li Min Ong and Mark Findlay) ... 43
The Potential of Artificial Intelligence for Achieving Healthy and Sustainable Societies (B. Sirmacek, S. Gupta, F. Mallor, H. Azizpour, Y. Ban, H. Eivazi, H. Fang, F. Golzar, I. Leite, G. I. Melsion, K. Smith, F. Fuso Nerini, and R. Vinuesa) ... 65
Artificial Intelligence: Poverty Alleviation, Healthcare, Education, and Reduced Inequalities in a Post-COVID World (Margaret A. Goralski and Tay Keong Tan) ... 97
Missing Circles: A Dignitarian Approach to Doughnut Economics Through AI Applications (Kostina Prifti) ... 115
The Role of AI in SDG: An African Perspective (Steve A. Adeshina and Oluwatomisin Aina) ... 133
Artificial Intelligence for Advancing Sustainable Development Goals (SDGs): An Inclusive Democratized Low-Code Approach (Meng-Leong How, Sin-Mei Cheah, Yong Jiet Chan, Aik Cheow Khor, and Eunice Mei Ping Say) ... 145
Ethical AI: The European Approach to Achieving the SDGs Through AI (Valeria Benedetti del Rio) ... 167
AI as a SusTech Solution: Enabling AI and Other 4IR Technologies to Drive Sustainable Development Through Value Chains (Matthew Stephenson, Iza Lejarraga, Kira Matus, Yacob Mulugetta, Masaru Yarime, and James Zhan) ... 183
AI for Sustainable Finance: Governance Mechanisms for Institutional and Societal Approaches (Sep Pashang and Olaf Weber) ... 203
Big Tech Corporations and AI: A Social License to Operate and Multi-Stakeholder Partnerships in the Digital Age (Marianna Capasso and Steven Umbrello) ... 231
Part II  AIxSDGs: Existing and Potential Use Cases
A Legal Identity for All Through Artificial Intelligence: Benefits and Drawbacks in Using AI Algorithms to Accomplish SDG 16.9 (Mirko Forti) ... 253
Socially Good AI Contributions for the Implementation of Sustainable Development in Mountain Communities Through an Inclusive Student-Engaged Learning Model (Tyler Lance Jaynes, Baktybek Abdrisaev, and Linda MacDonald Glenn) ... 269
Gender, Health, and AI: How Using AI to Empower Women Could Positively Impact the Sustainable Development Goals (Tomás Gabriel García-Micó and Migle Laukyte) ... 291
Smart Control of Drinking Water Grids Using IoT (Jalal Dziri and Tahar Ezzedine) ... 305
Algorithmic Art and Cultural Sustainability in the Museum Sector (Giulia Taurino) ... 327
The Impact of Artificial Intelligence on Circular Value Creation for Sustainable Development Goals (Malahat Ghoreishi, Luke Treves, Roman Teplov, and Mikko Pynnönen) ... 347
Computer-Aided Corporate Sense-Making and Prioritization for SDGs (Innar Liiv, Erkki Karo, and Ralf-Martin Soe) ... 365
Role of Artificial Intelligence in Advancing Sustainable Development Goals in the Agriculture Sector (Soenke Ziesche, Swati Agarwal, Uday Nagaraju, Edson Prestes, and Naman Singha) ... 379
AI for Sustainable Agriculture and Rangeland Monitoring (Natalia Efremova, James Conrad Foley, Alexey Unagaev, and Rebekah Karimi) ... 399
Artificial Neural Networks Predict Sustainable Development Goals Index (Seyed-Hadi Mirghaderi) ... 423
Sailing the Data Sea to Advance Research on the Sustainable Development Goals (Andy Spezzatti, Elham Kheradmand, Kartik Gupta, Marie Peras, and Roxaneh Zaminpeyma) ... 441
An Empirical Analysis of AI Contributions to Sustainable Cities (SDG 11) (Shivam Gupta and Auriol Degbelo) ... 461
Index ... 485
Contributors
Baktybek Abdrisaev Department of History and Political Science, College of Humanities and Social Sciences, Utah Valley University, Orem, UT, USA
Steve A. Adeshina Nile University of Nigeria, Abuja, Nigeria
Swati Agarwal AI Policy Labs, London, UK CU-92, Pitampura, New Delhi, India
Oluwatomisin Aina Nile University of Nigeria, Abuja, Nigeria
H. Azizpour Division of Robotics, Perception, and Learning, KTH Royal Institute of Technology, Stockholm, Sweden
Y. Ban Division of Geoinformatics, KTH Royal Institute of Technology, Stockholm, Sweden
Marianna Capasso Scuola Superiore Sant'Anna, Pisa, Italy
Yong Jiet Chan Monash University, Melbourne, Australia
Sin-Mei Cheah Singapore Management University, Singapore, Singapore
Auriol Degbelo Institute of Geoinformatics, University of Münster, Münster, Germany
Valeria Benedetti del Rio Baker McKenzie, Chicago, IL, USA
Jalal Dziri Communication System Laboratory Sys'Com, National Engineering School of Tunis, University Tunis El Manar, BP, Tunis, Tunisia
Natalia Efremova Queen Mary University London, London, UK
H. Eivazi FLOW, Engineering Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden
Tahar Ezzedine Communication System Laboratory Sys'Com, National Engineering School of Tunis, University Tunis El Manar, BP, Tunis, Tunisia
H. Fang Division of Robotics, Perception, and Learning, KTH Royal Institute of Technology, Stockholm, Sweden
Mark Findlay Centre for AI & Data Governance, Yong Pung How School of Law, Singapore Management University, Singapore, Singapore
Luciano Floridi Oxford Internet Institute, University of Oxford, Oxford, UK Department of Legal Studies, University of Bologna, Bologna, Italy
James Conrad Foley DeepPlanet, Oxford, UK
Mirko Forti Sant'Anna School of Advanced Studies, Pisa, Italy
Tomás Gabriel García-Micó Pompeu Fabra University, Barcelona, Spain
Malahat Ghoreishi LUT School of Business and Management, LUT University, Lappeenranta, Finland
Linda MacDonald Glenn Alden March Bioethics Institute at Albany Medical College, Albany, NY, USA Center for Applied Values and Ethics in Advanced Technologies (CAVEAT), Crown College, University of California Santa Cruz, Santa Cruz, CA, USA
F. Golzar Division of Energy Systems, Department of Energy Technology, KTH Royal Institute of Technology, Stockholm, Sweden Climate Action Centre, KTH Royal Institute of Technology, Stockholm, Sweden
Margaret A. Goralski Quinnipiac University, Hamden, CT, USA
Kartik Gupta University of Western Ontario, London, Canada
Shivam Gupta Bonn Alliance for Sustainability Research, University of Bonn, Bonn, Germany
Meng-Leong How The University of Newcastle, Australia, Callaghan, Australia
Tyler Lance Jaynes Alden March Bioethics Institute at Albany Medical College, Albany, NY, USA Department of Philosophy & Humanities, College of Humanities and Social Sciences, Utah Valley University, Orem, UT, USA
Rebekah Karimi Enonkishu Conservancy, Lemek, Kenya
Erkki Karo Ragnar Nurkse Department of Innovation and Governance, Tallinn University of Technology, Tallinn, Estonia
Elham Kheradmand University of Montreal, Montreal, Canada
Aik Cheow Khor Monash University, Melbourne, Australia
Migle Laukyte Pompeu Fabra University, Barcelona, Spain
I. Leite Division of Robotics, Perception, and Learning, KTH Royal Institute of Technology, Stockholm, Sweden
Iza Lejarraga Economic Counsellor, Development Centre, Organisation for Economic Co-operation and Development (OECD), Paris, France
Innar Liiv School of Information Technology, Tallinn University of Technology, Tallinn, Estonia
F. Mallor FLOW, Engineering Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden
Kira Matus The Hong Kong University of Science and Technology (HKUST), New Territories, Hong Kong
Francesca Mazzi Saïd Business School, University of Oxford, Oxford, UK
G. I. Melsion Division of Robotics, Perception, and Learning, KTH Royal Institute of Technology, Stockholm, Sweden
Seyed-Hadi Mirghaderi Department of Management, School of Economics, Management, and Social Sciences, Shiraz University, Shiraz, Iran
Geoff Mulgan UCL, London, UK
Yacob Mulugetta Energy and Development Policy, University College London (UCL), UK
Uday Nagaraju AI Policy Labs, London, UK
F. Fuso Nerini Division of Energy Systems, Department of Energy Technology, KTH Royal Institute of Technology, Stockholm, Sweden Climate Action Centre, KTH Royal Institute of Technology, Stockholm, Sweden
Li Min Ong Centre for AI & Data Governance, Yong Pung How School of Law, Singapore Management University, Singapore, Singapore
Sep Pashang School of Environment, Resources and Sustainability, University of Waterloo, Waterloo, Canada
Marie Peras AgroParisTech, Paris, France
Edson Prestes AI Policy Labs, London, UK Informatics Institute, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil
Kostina Prifti Erasmus School of Law; Jean Monet Centre of Excellence on Digital Governance, Erasmus University Rotterdam, Rotterdam, The Netherlands
Mikko Pynnönen LUT School of Business and Management, LUT University, Lappeenranta, Finland
Eunice Mei Ping Say Monash University, Melbourne, Australia
Naman Singha AI Policy Labs, London, UK Greater Noida, India
B. Sirmacek Smart Cities, School of Creative Technologies, Saxion University of Applied Sciences, Enschede, The Netherlands
K. Smith Division of Robotics, Perception, and Learning, KTH Royal Institute of Technology, Stockholm, Sweden
Ralf-Martin Soe FinEst Centre for Smart Cities, Tallinn University of Technology, Tallinn, Estonia
Andy Spezzatti AI for Good Foundation, Geneva, Switzerland
Matthew Stephenson Policy and Community Lead, International Trade and Investment, World Economic Forum (WEF), Geneva, Switzerland
Mariarosaria Taddeo Oxford Internet Institute, University of Oxford, Oxford, UK Alan Turing Institute, British Library, London, UK
Tay Keong Tan Radford University, Radford, VA, USA
Giulia Taurino Institute for Experiential AI, Northeastern University, Boston, MA, USA
Roman Teplov LUT School of Business and Management, LUT University, Lappeenranta, Finland
Luke Treves LUT School of Business and Management, LUT University, Lappeenranta, Finland
Steven Umbrello Delft University of Technology, Delft, the Netherlands
Alexey Unagaev DeepPlanet, Oxford, UK
R. Vinuesa FLOW, Engineering Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden Climate Action Centre, KTH Royal Institute of Technology, Stockholm, Sweden
Olaf Weber School of Environment, Resources and Sustainability, University of Waterloo, Waterloo, Canada
Masaru Yarime The Hong Kong University of Science and Technology (HKUST), Hong Kong
Roxaneh Zaminpeyma McGill University, Montreal, Canada
James Zhan Investment and Enterprise, United Nations Conference on Trade and Development (UNCTAD), Geneva, Switzerland
Soenke Ziesche AI Policy Labs, London, UK B20 Malcha Marg, Delhi, India
Part I
AIxSDGs: Theory and Governance
Introduction: Understanding the Ethics of Artificial Intelligence for the Sustainable Development Goals Francesca Mazzi and Luciano Floridi
Abstract  Artificial intelligence (AI) as a general-purpose technology has great potential for advancing the United Nations Sustainable Development Goals (SDGs). However, the AI×SDGs phenomenon is still in its infancy in terms of diffusion, analysis, and empirical evidence. Moreover, a scalable adoption of AI solutions to advance the achievement of the SDGs requires private and public actors to engage in coordinated actions that have been analysed only partially so far. This volume provides the first overview of the AI×SDGs phenomenon and its related challenges and opportunities. The first part of the book adopts a programmatic approach, discussing AI×SDGs at a theoretical level and from the perspectives of different stakeholders. The second part illustrates existing projects and potential new applications.

Keywords  Artificial intelligence · Sustainable Development Goals · AI for social good · AI for climate change · Ethical AI

The idea of the present volume, The Ethics of Artificial Intelligence for the Sustainable Development Goals, emerged in the context of the homonymous Oxford Initiative "AIxSDG". The urge for the Initiative derived from the acknowledgement that projects that use AI to deliver socially beneficial outcomes are on the rise (Cowls et al. 2021), but they are not sufficiently studied, nor are their implications fully understood (Vinuesa et al. 2020). The goal of the Initiative was to advance knowledge of the AIxSDGs phenomenon to help policymaking in the area of sustainability by identifying global challenges that artificial intelligence (AI) can help tackle, and
developing best practices and lessons learned from empirical evidence. In such a framework, the book’s contributors agreed to participate by submitting their research concerning AI and a socially oriented, human project (Floridi 2020). The volume focuses on three points: hope as a starting point, a vision as a goal to fulfil, and a process, as what makes it possible to implement the vision starting from the hope. The hope is that the development and use of AI may positively impact individuals, societies, and environments (Floridi 2019). This is what lies behind the idea of “AI for social good” (AI4SG). Such an idea has become popular within the AI community (Floridi et al. 2020). As a general-purpose technology, AI can solve many problems and perform various tasks. There are many applications of AI for social good. They encompass all sectors, and more become available daily (Floridi et al. 2020). However, it is well known that AI can also be overused or used for unethical purposes (King et al. 2020). This is why an ethical analysis is a critical element of AI4SG. Different stakeholders are promoting the integration of ethical requirements into AI applications: from private companies to governments of countries that are including AI in their national strategies. AI for social good cannot be inconsistent with the ethical framework guiding the design and evaluation of AI in general (Floridi et al. 2020). In particular, the application of the principle of beneficence is essential. It states that AI should benefit people and the natural world. Indeed, AI for social good should aim to deliver environmentally and socially sustainable outcomes. To adopt a programmatic approach, one must define those outcomes. Here, we come to the vision, which is to choose the Sustainable Development Goals (SDGs) as a benchmark to evaluate the social goodness of AI applications. The United Nations General Assembly set the SDGs in 2015 to integrate the economic, social, and environmental dimensions of sustainable development. They are priorities for socially beneficial action on which there is international consensus. Thus, they offer a sufficiently empirical and reasonably uncontroversial benchmark to evaluate the positive social impact of AI for social good globally. Using the SDGs to assess AI4SG applications (AI×SDGs) means equating AI4SG with AI that supports the SDGs (Cowls et al. 2021). Such an equation does not disregard that examples of socially good uses of AI are not limited to the realm of the SDGs (Cowls et al. 2021). However, the SDGs offer clear, well-defined, and shareable boundaries to identify positively what is socially good. Being internationally agreed goals for development, they represent the closest thing available to a humanity-wide consensus on what ought to be done to promote positive social change and the conservation of the natural environment (Cowls et al. 2021). The existing body of research on SDGs already includes studies and metrics on how to measure progress in attaining each of the 17 SDGs, and the 169 associated targets defined in the 2030 Agenda for Sustainable Development. These metrics can be applied to measure the impact of AI use cases to achieve the SDGs (Cowls et al. 2021). Moreover, AI projects across different SDGs can improve existing synergies and lead to new ones between projects addressing different SDGs. AI×SDGs enables better planning and resource allocation, once it becomes clear which SDGs are
under-addressed and why (Cowls et al. 2021). And using the SDGs as a benchmark for AI applications creates a potential precedent for future priorities' planning after and beyond 2030, a methodology for a dialogue between different countries. Having a hope and a vision, the question is how to move from the former to the latter. This is the third point covered by this volume: the actual processes implemented to deliver AI×SDGs. There are many ways to deliver AI applications that are socially and environmentally good. The choice of which routes to follow is crucial also because different alternatives could be littered with unanticipated failures, missed opportunities, or unwarranted interventions (Cowls et al. 2021). Finding the best approaches requires designing AI systems that consider many variables, including the supporting and surrounding environments (such as regulations, business models, and indexes) that maximise the benefits deriving from AI×SDGs, all of which require concerted actions. To this end, the book provides multiple perspectives on AI×SDGs, aiming to move from "what" to "how" concerning some of the ideas delineated by Floridi (2020) to favour the marriage between the green of the environment and the blue of the digital. The book is divided into two parts. Part I has a programmatic approach, discussing AI×SDGs at a theoretical level and in terms of governance. Chapter 1 provides a critical analysis of the topics analysed in the volume. Chapter 3 (Mulgan) and Chapter 4 (Ong and Findlay) introduce the topic by providing different perspectives on the concept of AI×SDGs. Mulgan discusses its potential, the problem of R&D misalignments, and the need for concerted actions to stimulate the adoption of AI×SDGs solutions. Findlay proposes a critical approach to the techno-optimistic narrative of AI for social good, highlighting the risks of it becoming a new type of green/ethics washing. Chapter 5 (Sirmacek et al.) illustrates the potential of AI for achieving healthy and sustainable societies, underlining the relationship between sustainable and smart cities and the achievements of other goals, such as addressing climate change. Chapter 6 (Goralski and Tan) highlights the need for policies and partnerships that foster AI to tackle the SDGs, in light of the positive impact that AI can have in the fight against the unsolved, interconnected challenges of poverty, healthcare, education, and inequalities. Chapter 7 (Prifti) provides an economic analysis of the phenomenon: it uses the doughnut theory to evaluate to what extent AI can foster fair prosperity through a green (environmental and ecological) use of resources, in line with the principle of solidarity understood as the mutual care of relations with others, with the world, and with future generations (Floridi 2020). Chapter 8 (Adeshina and Aina) brings a regional perspective on the topic, describing the actual and the potential role of AI×SDGs in Africa, with a specific focus on SDGs 3 (good health and well-being) and 16 (peace, justice, and strong institutions). Chapter 9 (How et al.) advocates a user-friendly, low-code, and human-centric probabilistic strategy to achieve a democratic approach to AI, representing an opportunity in terms of education, awareness, and engagement, also in connection with more data exploration and human-centric insights. 
Chapter 10 (Benedetti del Rio) discusses the proposal of a European regulation on AI (AI Act), given the crucial role it may have in directing AI investments, and in functioning as an infraethics, i.e., an infrastructure of rules that facilitate or hinder the moral or immoral
behaviour of the agents involved (Floridi 2020). As a result of a public entity's action, regulation is essential to create a framework that facilitates coordination of the efforts of private entities, which can further advance the maximisation of AI×SDGs at a sectorial level. Chapter 11 (Stephenson et al.) contributes to such debate by delineating a three-part solution encompassing international cooperation, governmental policies, and opportunities for firms. Chapter 12 (Pashang and Weber) illustrates governance mechanisms in the financial sector. It focuses on ESG parameters and how AI can help make capitalism sustainable and fair, i.e., to produce wealth in a sustainable way (in terms of environmental impact) and to distribute it fairly (in terms of social equality) (Floridi 2020). Chapter 13 (Capasso and Umbrello) focuses on sustainable business models for big tech companies, illustrating social licenses as another potential mechanism of infraethics oriented towards facilitating the occurrence of what is morally good. Part II of the book focuses on existing and potential AI use cases to advance the SDGs. Chapter 14 (Forti) describes how AI can be used to provide legal identity, as the human project for the digital age and a mature information society must include the "silent world" of those left out (Floridi 2020). Chapter 15 (Jaynes et al.) focuses on providing AI-powered learning tools to mountain communities to foster students' participation and remove geographical barriers to education. Chapter 16 (García-Micó and Laukyte) discusses the use of AI for gender equality in relation to medical data, as a way of increasing the prosperity of the whole society and all the people who belong to it, independently of their gender and of the environments in which they live. Chapter 17 (Dziri and Ezzedine) focuses on smart control of drinkable water to advance less mature information societies where potable water is not ordinary (Floridi 2020). Chapter 18 (Taurino) focuses on computational art and cultural sustainability in museums, highlighting how AI can help foster the symbiotic relationship of mutual benefit between environmental, artificial, cultural, and digital environments (Floridi 2020). Chapter 19 (Ghoreishi et al.) explores the use of AI to create circular value, in line with the idea that the link between capitalism and linear consumerism "can and must be severed, in favour of a new coordination between capitalism and the economy of caring for the world (that is, circular fostering)" (Floridi 2020). Chapter 20 (Liiv et al.) investigates the use of AI to integrate an SDG-related assessment in the corporate strategy, arguing that all stakeholders, including the corporate world, should share the responsibility to take care of the SDGs. Chapter 21 (Ziesche et al.) and Chapter 22 (Efremova et al.) investigate AI for agriculture. The former provides an overview of the SDGs targets related to agriculture and the challenges and opportunities concerning the adoption of cost-effective digital solutions through multi-stakeholder collaborations. The latter describes a case study that uses AI with Earth observation data for rangeland monitoring. Chapter 23 (Mirghaderi) illustrates the use of artificial neural networks to predict a Sustainable Development Goals index. Chapter 24 (Spezzatti et al.) describes the idea of building a Sustainable Development Data Catalog. 
Both provide examples of how AI can contribute to and benefit from the creation of information, i.e., constitutive and constituent elements of the infosphere, where the SDGs as part of a human project can be achieved. Finally, Chapter 25 (Gupta and Degbelo)
discusses the use of AI for Sustainable Cities (SDG11), providing an overview of the state of the art that includes gaps and areas for further research. Overall, the book seeks to provide the reader with an intellectually stimulating collection of perspectives and a wealth of information about AI×SDGs, while highlighting some of the many areas needing further research and action. The hope is that it may contribute to a robust foundation for further much-needed studies on AI×SDGs.
References

Cowls, Josh, Andreas Tsamados, Mariarosaria Taddeo, and Luciano Floridi. 2021. A Definition, Benchmark and Database of AI for Social Good Initiatives. Nature Machine Intelligence 3 (2): 111–115. https://doi.org/10.1038/s42256-021-00296-0.
Floridi, Luciano. 2019. What the Near Future of Artificial Intelligence Could Be. Philosophy & Technology 32 (1): 1–15. https://doi.org/10.1007/s13347-019-00345-y.
———. 2020. The Green and the Blue: A New Political Ontology for a Mature Information Society, SSRN Scholarly Paper ID 3831094. Rochester: Social Science Research Network. https://doi.org/10.2139/ssrn.3831094.
Floridi, Luciano, Josh Cowls, Thomas C. King, and Mariarosaria Taddeo. 2020. How to Design AI for Social Good: Seven Essential Factors. Science and Engineering Ethics 26 (3): 1771–1796. https://doi.org/10.1007/s11948-020-00213-5.
King, Thomas C., Nikita Aggarwal, Mariarosaria Taddeo, and Luciano Floridi. 2020. Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions. Science and Engineering Ethics 26 (1): 89–120. https://doi.org/10.1007/s11948-018-00081-0.
Vinuesa, Ricardo, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature Communications 11 (1): 233. https://doi.org/10.1038/s41467-019-14108-y.
AI in Support of the SDGs: Six Recurring Challenges and Related Opportunities Identified Through Use Cases Francesca Mazzi, Mariarosaria Taddeo, and Luciano Floridi
Abstract  This chapter provides an overview of six topics related to governance, ethical, legal, and social implications of artificial intelligence (AI) for sustainable development goals (SDGs) initiatives. We identified six common challenges and related opportunities to mitigate such challenges, as referred to by the authors analysing the chapters provided in the book The Ethics of Artificial Intelligence for the Sustainable Development Goals. They are (1) governance and collaboration, (2) private investments and the role of big tech companies, (3) AI and communities, (4) AI and individuals, (5) jobs and skills, and (6) impact assessment.

Keywords  Artificial intelligence · Sustainability · AI for SDGs · Ethics · Good AI society
1 Introduction

Artificial intelligence (AI) has a great potential to advance the United Nations Sustainable Development Goals (SDGs) (Cowls et al. 2021a). As a general-purpose technology, AI has many possible applications to the SDGs: broadly speaking, AI can be used for understanding problems, solution seeking, and decision-making (Ong and Findlay 2023). In many fields, and regarding specific SDGs targets, it can
be argued that the use of AI represents the best practice, for AI methods and techniques can produce results quantitatively and/or qualitatively superior to those achieved by other means (Cowls et al. 2021a).1 Predictive modelling algorithms, for instance, are useful to deal with energy- and climate-related challenges: hybrid models based on support vector regression (SVR) and particle swarm optimisation (PSO) can be used to predict precision energy usage from supplied data (Goudarzi et al. 2019). Overall, predictive algorithms with long short-term memory (LSTM, an artificial recurrent neural network architecture used in predictive modelling) are instrumental when dealing with time-series data to make future predictions (Sirmacek et al. 2023) that can help with climate change. They have a memory capacity for both long- and short-term data periods and behave more robustly than the earlier mathematical models (Sirmacek et al. 2023). Similarly, generative adversarial networks (GANs) that learn deep representations without extensively annotated training data are one of the best options for generalisation capabilities and are widely used in smart cities (Sirmacek et al. 2023). For example, traffic event detection is an important and complex task in smart transportation modelling and management, and researchers have developed GANs to perform such detection (Chen et al. 2021). Another example is the Bayesian network technique for statistical data analysis, which allows visualising the relationships between data variables, educing AI-augmented thinking that is useful when discussing AI and sustainability (Sirmacek et al. 2023). For example, Sierra et al. (2018) used a Bayesian approach to optimise social sustainability in infrastructure projects, for supporting sustainability-related decision-making. The topic of AI for SDGs comes from using AI for social good (Taddeo and Floridi 2021). The AI for social good movement aims to establish interdisciplinary partnerships centred around using AI applications to support the achievement of SDGs targets (Tomašev et al. 2020). This area of research aims to harness the potential for good of AI while mitigating associated ethical challenges (Taddeo and Floridi 2018). The "potential" is, as described above, vast, with these technologies capable of supporting multiple SDGs across various sectors. It interests the public and the private sectors. Relevant literature discusses governments' readiness to employ AI for SDGs (Liengpunsakul 2021), existing AI for SDGs projects (Cowls et al. 2021a), conceptual and normative approaches to AI governance for a global digital ecosystem supportive of the UN Sustainable Development Goals (SDGs) (Gill and Germann 2021), and the role of AI in the construction of sustainable business models (Di Vaio et al. 2020) and in typical business challenges that might require conversion to meet SDGs-related standards, such as production and supply-chain disruption, inventory management, budget planning, and workforce management (Visvizi 2022), to name a few. However, the challenges accompanying AI development and deployment are similarly complex. As shown by Vinuesa et al. (2020), AI
Footnote 1: Such superiority in terms of, for example, data processing shall be benchmarked against the environmental impact of using AI.
can be an enabler and an inhibitor of the SDGs. The use of AI is intimately linked to nonuniversal access to increasingly large data sets and the computing infrastructure required to use them (Visvizi 2022). Unethical outcomes may derive from the design, development, and deployment of AI (Cowls et al. 2020). The lack of a comprehensive regulation of AI aimed at mitigating unethical outcomes might pose risks to the achievement of the UN SDGs, for example, in relation to developing countries. The goal of zero poverty is threatened by the imperfect design and implementation of decision-making algorithms that have displayed evidence of bias, lack ethical governance, and limit transparency on the basis of their decisions, causing unfair outcomes and amplifying unequal access to finance (Truby 2020). The challenges and opportunities around AI are many, and they require debates around ethics, for AI must be treated as a normal technology, and the questions concerning ethics are and will always remain a human matter (Floridi 2021a). Scholars have called for all stakeholders, including governments, policymakers, industry, and academia, to contribute towards the development of AI to avoid such potential threats to ensure that ethical principles are embedded in AI applications that affect our everyday lives (Holzinger et al. 2021). Against this backdrop, the present paper aims to provide an overview of six recurring challenges (and related opportunities to mitigate them) of using AI in support of the SDGs. They were extrapolated from the volume: The Ethics of Artificial Intelligence for the Sustainable Development Goals (Springer, 2023). The categories are the following: (1) governance and collaboration, (2) private investments and the role of big tech companies, (3) AI and communities, (4) AI and individuals, (5) jobs and skills, and (6) impact assessment. Categories (1) and (2) focus on public and private actors, as the use of AI to advance the SDGs gained (and further requires) the attention of both stakeholders. Categories (3) and (4) concern the effects of adopting AI solutions from the perspectives of communities and individuals, respectively. (5) and (6) focus on two pragmatic aspects needed for the large-scale implementation of AI solutions. The paper describes and illustrates such categories as challenges and related opportunities to mitigate such challenges, providing examples. It identifies a “fil rouge” of such categorisation and discusses its limitations. It concludes by highlighting areas for future research.
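To make the time-series forecasting mentioned in this section more concrete, the following minimal Python sketch trains a small long short-term memory (LSTM) network on a synthetic monthly demand series and forecasts the next value. The data, window length, and network size are illustrative assumptions only, not the configurations used in the studies cited above.

import numpy as np
import tensorflow as tf

# Synthetic monthly energy-demand series: a seasonal cycle plus a slow upward trend.
series = np.sin(np.linspace(0.0, 12 * np.pi, 240)) + np.linspace(0.0, 2.0, 240)

def make_windows(data, width=12):
    # Turn the series into (samples, time steps, features) windows and next-step targets.
    x = np.array([data[i:i + width] for i in range(len(data) - width)])
    return x[..., None], data[width:]

x_train, y_train = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(12, 1)),  # retains both short- and long-range patterns
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=10, verbose=0)

next_step = model.predict(x_train[-1:], verbose=0)  # forecast for the step after the last window

In a real application the same pipeline would be fed observed data (for example, metered energy usage or climate indicators), and the hyperparameters would be tuned, for instance with the particle swarm optimisation mentioned above for SVR-based models.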
2 Governance and Collaboration

2.1 The Challenges

The implementation of AI for SDGs requires complex and coordinated actions. It is complex because the use of AI for SDGs influences and is influenced by multiple factors. The 17 SDGs and 169 associated targets are interconnected; therefore,
fulfilling the 2030 Agenda will require coordination, measurement, and management concerning, among others, financial resources, knowledge, and technology (Pashang and Weber 2023). It is coordinated because government, industry, academia, and society must work together to reach the SDGs (Goralski and Keong Tan 2023). Actions also need to be coordinated at different levels. For example, different stakeholders should complement each other’s actions, for example, industry can provide innovative technology. At the same time, the public sector can direct such technology for a public interest purpose, such as poverty alleviation (Goralski and Keong Tan 2023). Coordination is also needed at an international level, as countries should cooperate to avoid duplication of R&D efforts in developing AI, and companies might need a system of incentives to do so, for absent incentives, only a few of them deliver solutions to promote peace, justice, and strong institution (Adeshina and Aina 2023). Therefore, national, sectoral, regional, or even global governance plays an essential role in fostering collaboration between different stakeholders. Regulation of AI can also represent a critical milestone to incentivise the private sector’s investments in AI. Regulating AI means establishing rules that can either be neutral, foster, or hinder the development of AI for SDGs. The draft EU legislation is the first of its kind, and it represents an example of the pros and cons of an AI regulation in relation to the SDGs. As underlined by Benedetti del Rio (2023), the draft EU legislation presents positive aspects (that will be exposed in the next paragraph, as opportunities) and negative aspects that might require further research and work to incentivise AI for SDGs. The author identified negative aspects as the auditability of the system, the missed topics, and a potentially paradoxical interpretation of human-centricity. The auditability of the system refers to the improbable event of achieving logs and descriptions of the reasoning that led from information to the inference of the following fact. This is because auditable AI means explicable and reverse-engineerable AI, which conflicts with the protection of proprietary rights over the same AI system (Benedetti del Rio 2023). However, it should be acknowledged that some of the suggested forms of auditing for AI do not necessarily infringe on proprietary rights (Mökander and Floridi 2021). The term “missed topics” refers to the absence of issues that would have been desirable in the legislation. For example, the lack of reference to energy efficiency or carbon emission budgets. It would have been advisable to include limitation on the emissions that can be put into the atmosphere in the whole process of projecting, designing, and realising an AI system, considering that attention to the climate crisis also requires collaboration and governance efforts (Cowls et al. 2021b; Benedetti del Rio 2023). However, one could question whether the AI legislation is the appropriate forum for such considerations. Indeed, one of the main challenges of the AI act is also the limitation of its scope. Considering that AI can be applied in all industries, sectoral legislation might compensate for the missed topics. However, this
creates further fragmentation of relevant law and hinders the desired “Brussels effect”2 that can facilitate international coordination. Finally, the interpretation of human-centricity can hinder the SDGs, for human- centricity establishes a prioritisation of humans over wildlife and other living entities which nonetheless represent 75% of the living surface of our planet (Benedetti del Rio 2023). The interpretation of concepts such as “human-centricity” implies theoretical justifications and, most of all, political willingness, and consensus, that require agreement on the philosophical ground at the international level (in Europe in relation to the AI Act, but possibly a more inclusive level of international agreement). Such a lack of a common understanding of human-centricity might represent a further obstacle in adopting AI for SDGs solutions.3
2.2 The Opportunities

It is desirable to identify and discuss ideas for policies and regulations that point towards multi-stakeholder collaboration. This sub-section aims to provide an overview of three macro-opportunities highlighted by scholars in the field. The first opportunity focuses on instruments of international governance that could be created, for example, in the context of existing international organisations. Stephenson et al. (2023) envisages the creation of a Sustainable Technology Board that could be established in the context of the G20 as a mechanism for coordination, cooperation, and scaling of sustainable technology solutions. Such a Board would be, for example, responsible for the development of standards and guidelines concerning new technologies, to facilitate their sustainable adoption. This idea presents two main challenges: the complexity of international agreement on creating such a board, especially without political stability, and the risk of concentration of power absent an appropriate structure that guarantees a division of powers. He also hypothesises the development of a platform for cooperation, where policymakers, firms, experts, and civil society can identify needs, share both concerns and opportunities, and transparently discuss ways to implement sustainable technology solutions. The author hypothesises the creation of data trusts, and/or the adoption of a typology for data, aimed at facilitating management and sharing. He advocates for the use of
Footnote 2: The "Brussels effect" refers to the impact of the European regulation on other jurisdictions and the likelihood that they adopt similar norms. Such effect relates to the chronological anteriority of the European legislator in filling one legislative vacuum related to the digital space, for example, concerning data protection law in 2016 with the General Data Protection Regulation.

Footnote 3: For example, the prevalent European views on human-centricity and AI do not necessarily coincide with the Chinese ones. None of the three dominant schools of Chinese philosophical thinking place human beings in a supreme position within the universe. On the Chinese interpretation of human-centricity and anthropomorphism: "Applying Ancient Philosophy to Artificial Intelligence", available at https://www.noemamag.com/applying-ancient-chinese-philosophy-to-artificial-intelligence/ accessed 4.6.2022.
homomorphic encryption4 to share data safely and securely, either as a complement or an alternative to data trusts (Stephenson et al. 2023). Finally, he suggests the public sector should (i) orient investment incentives to encourage the uptake of sustainable technology solutions; (ii) use performance-based regulation to balance flexibility with oversight, to protect societies from unfavourable outcomes; and (iii) ensure equivalency agreements on standards and certifications to create coordination between jurisdictions. The second opportunity concerns the regulation of AI. Benedetti del Rio (2023) identified aspects of the draft EU regulation that can positively impact the development of AI for SDGs. Specifically, the adoption of regulation is desirable because (1) it is a regional piece of legislation directly applicable in the legislative framework of all member states without additional adequacy measures and because (2) it could have an extraterritorial scope thanks to the "Brussels effect" (Benedetti del Rio 2023). This creates greater legal certainty at the international level that is necessary for AI for SDGs to flourish. Also, the prohibition of all AI systems and services that create an unacceptable risk for the rights and freedoms of the individuals involved contributes to creating international minimum standards (Benedetti del Rio 2023), and it responds to the ethical principle of non-maleficence, coherent with the seven essential factors for AI for social good (Cowls et al. 2020). Finally, the auditability of AI reasoning, the equity of potential outcomes, the human-centricity, the requirements for human-machine interface tools that allow human oversight of high-risk AI systems, and the centrality of human rights are other elements that respond to principles of AI ethics, a necessary requirement for AI for SDGs (Benedetti del Rio 2023). Therefore, a "Brussels effect" of such regulation might be desirable for those jurisdictions that agree with and share European values, for it can facilitate international coordination towards the development of AI for SDGs solutions. The third opportunity addresses the use of AI in governance mechanisms to promote the SDGs. For example, Adeshina and Aina (2023) describe the use of AI to achieve a high rule of law index, based on the following principles: accountability, just law, open government, and accessible and impartial justice. He reports the example of Rwanda, which had topped the list as the country in Africa with the best rule of law, and argues that digitising the court systems helped Rwanda achieve this result. Digitising court systems is connected to improving accessibility to civil justice, transparency, accountability, and reduction of delays in resolving cases and of corruption. He argues that using AI will help improve these factors, advancing SDG 16, peace, justice, and strong institutions, as well as other SDGs' targets and indicators consequently. It should be noted that such a solution, like all those that concern the digitisation of the public sector, comes with two main challenges: the risk of surveillance, absent a solid data protection law that ensures citizens' privacy, and a rise in the level of unemployment that could be compensated with adequate
Footnote 4: Homomorphic encryption makes it possible to analyse encrypted data without revealing the data's content (Stephenson et al. 2023).
policies and with the increase in demand for AI-related jobs, as discussed further in Sect. 6.
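As a simplified illustration of the encrypted data sharing discussed in this sub-section, the sketch below uses the python-paillier library (one possible choice, assumed here for illustration). The Paillier scheme is only additively homomorphic, so it supports a narrower class of computations than the fully homomorphic schemes alluded to above, but it already allows an aggregator to sum confidential figures without seeing the individual values.

from phe import paillier

# The key holder (e.g., a data trust) generates a keypair and shares only the public key.
public_key, private_key = paillier.generate_paillier_keypair()

# Hypothetical confidential figures reported by three data providers.
reports = [120.5, 98.2, 143.7]
encrypted_reports = [public_key.encrypt(value) for value in reports]

# An aggregator adds the ciphertexts without ever decrypting the individual reports.
encrypted_total = encrypted_reports[0] + encrypted_reports[1] + encrypted_reports[2]

# Only the private-key holder can recover the aggregate.
total = private_key.decrypt(encrypted_total)
print(total)  # 362.4, up to floating-point rounding

The individual contributions remain opaque to the aggregator throughout, which is the property that makes this family of techniques attractive as a complement or alternative to data trusts.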
3 Private Investments and the Role of Big Tech Companies

3.1 The Challenges

This section will focus on the role of big tech companies in relation to the SDGs, for their availability of data and their ability to influence political agenda, and on how the problem of investment misalignment, as defined by Mulgan (2023), affects such role. Although big tech companies have invested in initiatives and solutions for social good, the efforts are rather fragmented (Mulgan 2023). The position of big tech companies is increasingly predominant in terms of data availability, which raises multiple concerns. Their economic and political influence is likely to follow market benefits, which do not necessarily point towards the SDGs. This can be described as the "private investment misalignment": global R&D directed towards the SDGs is currently of relatively low impact, non-systemic, and marginal. Contrarily, consistent investments are made in developing AI for commercial purposes and in relation to the military and security sectors (Mulgan 2023). As shown by the STRINGS project (Steering Research and Innovation for Global Goals), the degree of misalignment between global research and the SDGs is visible within nations and globally (Mulgan 2023). The geographical allocation of R&D towards the SDGs does not reflect the geographical areas with the highest needs for SDGs actions, considering that most of the investment is in the middle- and high-income countries, with 90% of the SDGs related to science, technology, and innovation work being published/patented in high- and upper-middle-income countries (Mulgan 2023). Such investments are also biased towards certain SDGs because, without correctors, the market favours investments towards the needs of people that can afford to buy final products or services, with little or no incentives to invest in non-profitable goals, like Goal 1, no poverty, and Goal 10, reduced inequalities (Mulgan 2023). AI is implemented to render more efficient linear models of production and consumption, which are not sustainable.5 Overall, it can be argued that existing R&D and business models do not invite the development of effective solutions for the developing world.

Footnote 5: A linear model of production and consumption has been dominating over the past one and a half century in the globe. "In the supply chain in this one-way model, the goods are manufactured from raw materials in production processes, sold, used, and subsequently at the end of its lifetime as the specific product is discarded as waste to landfill or incinerated. The raw materials are once extracted from the nature, usually discarded at the end of the use of a particular product. This model simply runs on a linear path and hence sometimes termed as linear model. Linear model does not support environmental sustainability and resource efficiency" (Ghosh 2020).
This represents a challenge and, among others, a risk of further exploitation of low- and middle-income countries through an expansion of economic or technological dependencies (Ong and Findlay 2023). As argued by Capasso and Umbrello (2023), market-driven big tech corporations, with their availability of data sets and a vast number of resources, are able to influence the political agenda, and they provide services and entire infrastructures on which several actors and the whole economic ecosystem depend (Capasso and Umbrello 2023). The political relevance of big tech companies can be exemplified by the case reported by Adeshina and Aina (2023), concerning the use of digital technologies for elections in Africa. Digital infrastructures are used for registration, drone satellite images for monitoring electoral violence, and natural language processing (NLP) to analyse radio and social media data. These data can be used to identify trending electoral topics, fake news, social tensions, and misconceptions that cause conflicts among citizens (Adeshina and Aina 2023). The concerns regarding control and power of the digital means were worsened in the discussed case by the arguments between political parties on biometric identification systems and by the belief that the accreditation system was sabotaged before voting (Adeshina and Aina 2023). The issues of ownership and security of both platforms and data are crucial to achieve fairness of elections (Adeshina and Aina 2023), for, absent transparent rules, companies might be economically incentivised to adopt a certain policy or not. Also, the availability of such means could play a crucial role in communicating SDGs-related information to the public. At the same time, the governing mechanisms of big tech companies are the object of debate. For example, Chomanski (2021) doubts that regulating the private sector is the optimal solution, and proposes action-guiding principles that could steer policy, such as the principle of Presumption of Liberty (PoL). According to this principle, the burden should be on the proponents of regulation to demonstrate that their proposed political solution will be an improvement over the status quo in relevant respects (Chomanski 2021). Such debate is outside the scope of the present chapter.
3.2 The Opportunities Different solutions and governance mechanisms could play a role in favouring an alignment of incentives for funders to invest in AI for SDGs (Mulgan 2023). We provide and discuss here some examples. Sustainable finance can be defined as finance for sustainability, for it makes explicit reference to the sustainability dimensions (in particular in line with the Sustainable Development Goals and the Paris Agreement) and to the sectors or activities that contribute positively to these dimensions (Migliorelli 2021). Specific instruments could include blended finance, government-backed incubators and accelerators, patient or concessional capital, funds and prizes, and public procurement (Stephenson et al. 2023). The opportunity of developing innovative
technology finance instruments and updating regulatory frameworks to create “digital-friendly investment climates” (Stephenson 2020) is a necessary first step to orient investment incentives and encourage the uptake of sustainable finance solutions. Over the past few years, AI has been increasingly employed in sustainable finance to address the SDGs, with two major approaches: institutional and societal AI for sustainable finance (Pashang and Weber 2023). Broadly described, institutional AI for sustainable finance is used for activities such as environmental, social, and governance (ESG) investing, while societal AI for sustainable finance supports underbanked and unbanked individuals through financial inclusion initiatives (Pashang and Weber 2023). AI can support sustainable finance at the regulatory level: it can be used to (re)design enhanced financial governance systems (Arner et al. 2020 as cited by Pashang and Weber 2023). However, AI for sustainable finance, including the use of ESG, comes with at least two challenges. The first is the lack of uniformity of reports and KPIs, which results in the fragmentation of types of impact assessments and will be further analysed in Sect. 7. The second is that the transition towards an environmentally sustainable economy has inevitable costs and faces opposing interests. To this end, it is necessary to implement policies, incentives, and correctors to accompany such a transition6 (Arent 2017). Capasso and Umbrello (2023) describe the concept of a social licence to operate for big tech companies as a tool to gain legitimacy from internal stakeholders, outside stakeholders, and the greater community.7 A social licence allows for identifying a business model as a social entity that goes beyond economic and market considerations, and thus it is subject to public accountability and public control (Capasso and Umbrello 2023). Social licences would also favour a proactive approach to the SDGs, for they would favour a competition between big tech companies to go beyond laws and regulations positioned within the legal system, to gain credibility and social permission (Capasso and Umbrello 2023). Such a model would guarantee more transparency from a societal perspective, as it would build on structuring trust and consent of the people and communities affected by the business model’s actions. Also, social licences can be an effective tool for digital business models to ensure sustainable business growth (Capasso and Umbrello 2023). Big tech companies have their users and consumer groups at their core, and social licences would aim to create bilateral processes, through an ongoing dialogue with users’ communities and relevant stakeholders. Liiv et al. (2023) propose the idea of a computer-aided method for corporate sense-making and prioritisation of SDGs. This would overcome the current SDGs assessment tools and methods, which are rather fragmented, and accompany the incorporation of SDGs into business models. Liiv et al. (2023) theorise that novel technologies and data analytics can be used to support the assessment process. The research presents a customised version of Thomas Saaty’s Analytic Hierarchy Process, tailored for SDGs assessment, to structure and organise decision processes and facilitate group decision-making (Liiv et al. 2023).

6 On the topic, see ‘Towards a Green Energy Economy? The EU Energy Union’s Transition to a Low-Carbon Zero Subsidy Electricity System – Lessons from the UK’s Electricity Market Reform’ (2016).

7 The concept of a social licence to operate relates to organisational studies and corporate social responsibility aiming to integrate legitimacy in corporate strategy (Morrison 2014).
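To make the prioritisation step more concrete, the following minimal Python sketch shows how Analytic Hierarchy Process priority weights can be derived from a pairwise-comparison matrix via the principal eigenvector, together with the standard consistency check. The chosen goals, judgement values, and implementation details are illustrative assumptions only; this is not the customised version developed by Liiv et al. (2023).

```python
# Minimal illustration of Analytic Hierarchy Process (AHP) priority weights.
# Hypothetical pairwise-comparison matrix: how strongly a company judges each
# SDG to matter relative to the others (Saaty's 1-9 scale, reciprocal matrix).
import numpy as np

goals = ["SDG 7 (energy)", "SDG 12 (consumption)", "SDG 13 (climate)"]

# A[i][j] > 1 means goal i is judged more important than goal j.
A = np.array([
    [1.0, 3.0, 0.5],
    [1 / 3, 1.0, 0.25],
    [2.0, 4.0, 1.0],
])

# The principal eigenvector of A gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency check: a ratio below ~0.1 is conventionally considered acceptable.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random consistency index
cr = ci / ri

for goal, weight in zip(goals, w):
    print(f"{goal}: {weight:.3f}")
print(f"Consistency ratio: {cr:.3f}")
```

In this illustration the priority weights would then feed the group decision-making step described above, with each participant's comparison matrix aggregated before deriving company-level priorities.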
The research shows that the proposed process supports better SDGs-related internal communication and allows for identifying new business opportunities and more efficient solutions for the goals that are perceived as a priority by the company (Liiv et al. 2023). Finally, AI can enable a circular economy by helping companies adopt and innovate circular business models (Ghoreishi et al. 2023). A circular economy requires a strong integration and connection of the value chain, which puts the data economy at the centre: data on resource flows, location tracking, monitoring of condition and quality, real-time data gathering, processing of input-output flows, precise prediction, lower production downtime, and optimisation of energy consumption are essential (Hughes et al. 2021). AI and other technologies of Industry 4.0, such as the Internet of things, enable the collection, storage, analysis, and processing of these data, favouring resource and energy efficiency towards a sustainable circular economy (Ghoreishi et al. 2023). In general, AI-enhanced products and services can tackle environmental problems through independent interactions with their surroundings and self-learning capabilities, which results in improved environmental performance characteristics (Ghoreishi et al. 2023). If adopted in the context of a circular economy, they can help with circular value creation, in line with the SDGs (Circular Economy and Sustainable Development 2019). However, as mentioned above in relation to sustainable finance, such a transition from a linear to a circular business model involves costs and, in some cases, industrial conversion (Sharma et al. 2021). As identified by Sharma et al. (2021), some of the main impediments are capital requirements, higher initial costs for updating facilities, risk and uncertainty, and lack of institutional and legal support. The impediments are greater for developing countries, due to a lack of public awareness, ambiguous policy frameworks, and insufficient knowledge (Sharma et al. 2021).
4 AI and Communities

4.1 The Challenges

One of the areas of risk when developing AI for SDGs concerns its impact on vulnerable and marginalised people, who are at higher risk of harm from AI deployment (Ong and Findlay 2023). To protect local communities, countries should be able to define development and progress for themselves, according to what they value and wish to retain and conserve in their domestic sphere (Ong and Findlay 2023). However, the power asymmetry outlined above threatens to challenge any locally engaged 2030 Agenda, for countries’ socio-economic structures and information become reliant on AI technologies and consequently influenced by big tech decisions (Ong and Findlay 2023).
AI can either empower or disempower community-relevant meanings. Ong and Findlay (2023) argue that, because of a revenue-based system, big tech models might over-moderate or under-moderate content, impacting the space left for community meanings and magnifying adverse effects in the Global South – for instance, in Afghanistan and Myanmar, Facebook’s systemic lack of language support has allowed extremist language to flourish (Ong and Findlay 2023). Moreover, data is not readily accessible and processable in all countries equally. For example, Adeshina and Aina (2023) report that in underserved African communities, some organisations, e.g. health centres, do not understand the usefulness of the data received and do not have the organised infrastructure and data management resources to utilise the available data (Adeshina and Aina 2023). This situation leads most African researchers to use foreign-based data sets, which may not be a true reflection of the particularities common to such communities (Adeshina and Aina 2023). Even within the same country, there are substantial differences between rural and urban areas (Goralski and Keong Tan 2023). Unequal access to technology and the Internet exacerbates discrimination in areas central to various SDGs, such as quality education, particularly in developing countries. The digital divide exists between and within countries, across different social groups, and represents a problem in various sectors (Ong and Findlay 2023). For example, over-reliance on AI for sustainable finance may lead to unintentional harm, fostering exclusive inclusion (Pashang and Weber 2023). Due to the online-only nature of digital services, the digital divide can be an obstacle to some groups’ access to financial resources (Pashang and Weber 2023). AI can also create new realms of discrimination through data capture and claims of ownership (Ong and Findlay 2023). Discrimination towards certain groups can be perpetuated at different stages of the AI lifecycle. For example, at the design stage, social inequalities can exclude social groups and communities. At the application and deployment stage, AI-assisted technology could exacerbate existing discrimination, for example, through colonial relationships of dependency (Ong and Findlay 2023).
4.2 The Opportunities

AI can help serve otherwise unserved communities and represent otherwise unrepresented groups. We report a proposed governance structure for AI in communities (Ong and Findlay 2023), and two use cases of AI for communities, in education and agriculture. Ong and Findlay (2023) argue that governance for AI in communities can allow communities to bargain for the responsible use of data and the sustainable application of AI technologies. They identify two structural elements, i.e. digital self-determination and AI in community, as essential to achieving such governance (Ong and Findlay 2023). Digital self-determination is possible in a safe digital space where data subjects and their communities can easily and freely decide on the use
of AI, access, visualisation, and management of their data, and where market players adopt practices towards data that preserve data subjects’ dignity (Remolina and Findlay 2021). AI in a community represents a contextual method of deployment, achievable by prioritising human recipients and creating relationships of trust between AI deployers and users, so that the negative consequences of tech rollout in vulnerable economies can be minimised (Ong and Findlay 2023). AI as a partner in community relationships sustains equitable social bonds through relationships of trust. Individuals within these communities are recipients and active participants empowered by digital self-determination. In this way, AI can create relationships through the embodiment of the intentions of those who design and deploy the technology (Findlay and Wong 2021); and such communities and relationships are chosen by their members (Ong and Findlay 2023). However, understanding digital self-determination requires careful consideration of the boundaries between public good and self-determination, balancing the risk of surveillance and a “forced” representation on one side against exclusion and local ideology on the other. Such a tension is not unlike the one that arose in relation to contact tracing applications, where public health concerns had to be balanced with individuals’ privacy (Kolasa et al. 2021). We propose a non-exhaustive list of examples of how AI can help communities. AI can be used to reduce the digital divide in relation to education. Education has high costs, and some social groups cannot afford it in countries where it is not public (Goralski and Keong Tan 2023). Moreover, the digital divide impacts education: India, for example, is home to 430 million children between the ages of 0 and 18, the largest population of children in the world (Goralski and Keong Tan 2023). AI could help by identifying the areas where education tools are most needed, providing remote-learning solutions, and delivering interactive learning facilitated by digitised devices, such as smart-boards, LCD screens, and multimedia videos, to make the classroom interesting and engaging for students (Goralski and Keong Tan 2023). Even in such use cases, when AI can help provide quality education, the ethical debate concerning the power of those who hold the information in the infosphere (Floridi 2014) and the need to preserve local culture and history to avoid a colonial approach remain quintessentially human. Another example is shown by Jaynes et al. (2023), who argue that community-based education on and with AI positively impacts the ability of mountain communities to attain the 2030 Agenda’s Goals. The authors mainly focus on the use of AI to adopt a “Student Engaged Learning Model” for mountainous and rural populations, which have unique concerns and challenges that often prevent them from being fully engaged in technological adoption and development (Jaynes et al. 2023). Balancing the concerns of these communities is not a simple issue to address in the face of urban economic disparities and mentalities that divide “developed” and “rural” areas in politics and economics (Jaynes et al. 2023). The second use case concerns AI in agriculture. Agriculture is often essential to rural communities, and sustainable farming solutions, for example, are of great importance for the development of such communities. A multitude of
interconnected challenges – scarce and stressed resources (land, water, soil, etc.), fluctuating outputs and increasing demand, changing weather and rainfall patterns, and environmental pollution – can all benefit from innovative solutions. AI and digital technologies can help communities address the issues of food insecurity, agricultural productivity, and higher yields for the future (Ziesche et al. 2023). “Smart agriculture”, understood as the integration of technologies like the Internet of things (IoT), AI, robots, drones, etc., into agricultural production and management, has the potential to narrow the supply-demand gap and optimise the use of natural and human resources while allowing the maximisation of quality output (Ziesche et al. 2023). A use case of AI for sustainable agriculture is represented by Deep Planet, which utilises satellite imagery to monitor farmland and allocate resources accordingly (Efremova et al. 2023). The proposed tool made it possible to evaluate grassland, shrubland, and forest biomass and to estimate the vegetation carbon stock over the conservancy and the larger Masai Mara Region (Efremova et al. 2023). This aligns with different goals, targets, and indicators, including indicator 2.4.1, “proportion of agricultural area under productive and sustainable agriculture” (Efremova et al. 2023).
5 AI and Individuals

5.1 The Challenges

Among the various threats that individuals may perceive from AI, we focus on privacy and accountability concerns, as subfields of AI ethics, and on trust, understood as the level of reliance on AI. The choice of these factors reflects the limited scope of the present chapter, which focuses on the main themes emerging from the book chapters. The topics of AI ethics, ethical principles governing AI, and ethical auditing have been widely debated in recent years and are precursors of AI for SDGs, a sort of necessary condition for the existence of AI for social good (Cowls et al. 2020). Currently, neo-liberal individualism ties AI to economic growth (Ong and Findlay 2023), and the lack of uniform regulation risks exacerbating forms of ethics washing (Wagner 2018). Among various risks, individuals can be threatened by personal data processing and related privacy concerns. Similarly, it is not clear how to determine who should be held responsible for the recommendations and decisions made by AI systems, raising concerns over accountability and interpretability (Hickok 2021; Morley et al. 2021). These concerns in relation to AI are partly governed in Europe by the General Data Protection Regulation, for example,
through Art. 22, the right not to be subject to solely automated decisions.8 However, due to its formulation, and to the lack of immediately applicable legal precedents, the level of protection provided in relation to AI might still be vague from an individual’s perspective.9 Therefore, individuals are confronted with ethical risks, and they might be left with no definitive answers. Such ethical concerns can, in turn, influence the second challenge of AI and individuals, i.e. trust. Indeed, a lack of legal certainty in case of unjust AI outcomes might hinder the acceptance of AI for social good in society. However, trust in AI does not derive only from unaddressed ethical concerns. AI changes traditional relationship paradigms. For example, in healthcare, the relationships between patients and healthcare professionals are different from the relationship between patients and AI (Sirmacek et al. 2023). The acceptance of AI in healthcare settings might depend on healthcare workers educating the patient about the complexities of AI and its possible shortcomings (Sirmacek et al. 2023). This could come with an additional burden for health professionals, who may be required to undergo additional training on the latest advances in AI, increasing their workload (Sirmacek et al. 2023). Another challenge may lie in patients’ acceptance of AI solutions and in healthcare professionals’ confidence in delegating to AI, since their perception of AI is crucial for the successful implementation and deployment of new systems (Sirmacek et al. 2023).

8 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

9 Article 22 refers to a solely automated decision-making process, which renders the interpretation of “solely” crucial to the determination of the scope of a right of explanation (Bayamlıoğlu n.d.).
5.2 The Opportunities

Governance and regulation can play an irreplaceable role in addressing AI-related ethical concerns. Although self-regulation and codes of conduct issued by AI and tech companies could have played a significant role in setting standards and gaining people’s trust, the efforts were insufficient and resulted in a missed opportunity (Floridi 2021b). However, pending policy action, existing frameworks can guide designers towards the best practices for designing AI for social good (Capasso and Umbrello 2023; Cowls et al. 2020). As mentioned above, the draft EU regulation will fill a gap, for it contains substantial references to ethical principles and puts the protection of the individual at the core (Benedetti del Rio 2023). Therefore, a Brussels effect might be desirable to harmonise legal certainty and set a high ethical standard for AI development across jurisdictions that share similar underlying values. Nonetheless, the AI Act represents a first step, far from being exhaustive, as discussed in Sect. 2. The ethical challenges deriving from AI are likely to require other regulations, such as sector-specific legislation, to protect workers (Ponce 2020).
A gradual introduction of AI solutions, and evidence of the benefits they can bring to individuals, can help gain trust in AI. Here we report some use cases from the chapters of the book. AI can be used to provide legal identity for every individual (Forti 2023). The recognition of identity represents a fundamental right that is not available to everyone. Those who do not enjoy it are excluded from socio-economic life and from benefitting from public services (Forti 2023). Therefore, the implementation of AI to provide legal identity to those who do not have it would constitute a benefit of AI for individuals. Forti (2023) underlines that if lawmakers and regulators provide appropriate human rights safeguards, AI could help accomplish SDG 16.9. At the same time, we should not underestimate the ethical concerns deriving from the use of AI to provide legal identities, for a large “amount of big data such as people’s life history, health, behaviours, and interconnected networking will be exponentially collected and operated to reveal each uniqueness and identity in the context of data assimilation for commercial and governmental solutions. There are rich data on their privacy which should be kept by strict regulation and legal backgrounds” (Shibuya 2020). Moreover, it poses philosophical questions, for we are living through an information revolution that may have radical consequences for our self-understanding and the construction of our own identities (Floridi 2011). AI in healthcare is essentially related to trust. The accuracy of AI can be tested and proved gradually, from less to more invasive applications. Goralski and Keong Tan provide a few examples, such as prioritising care for patients under limited resources such as medical equipment or hospital beds; estimating the probability of having, or the risk of developing, a medical condition given a patient’s family history or own historical data and examinations; and monitoring patients and suggesting possible follow-ups, treatments, or patient outcomes based on the patient’s condition, its severity, its risk of degradation, and available alternative actions (Goralski and Keong Tan 2023). Moreover, human review represents an additional safeguard for patients familiarising themselves with AI-assisted practices (Goralski and Keong Tan 2023).
6 Jobs and Skills

6.1 The Challenges

The positive outcomes of developing and deploying AI come with challenges concerning human resources, and the case of AI for SDGs is no exception. We identify a twofold challenge. On one side, AI has long been perceived as a threat to human jobs, stimulating research into AI’s prospective impact on employment. On the other side, the increasing deployment of AI solutions requires human capital with digital skills and familiarity with AI. Such skills are not structurally provided to new generations by education systems.
As stressed by Benedetti del Rio (2023), while regulating AI is an important step towards the development of AI for social good, governments should also consider the risks that the development of AI may have on the job market. Governments’ role will be essential in managing the impact of AI on employment and delivering digital skills through education systems at different levels. Both the public and private sectors should invest in AI education and programmes (Adeshina and Aina 2023). The challenge here relates to providing an education that should not be limited to individuals who work as AI engineers or researchers, for a basic level of familiarisation with AI and data will be increasingly needed. This creates a debate concerning the potential need for new programmes in school curricula from the primary level, so that the next generation will have some basic understanding of AI and its applications (Adeshina and Aina 2023). Implementing AI-related education is also needed to address some of the challenges listed above, such as those concerning AI and communities. Digital skills should also be delivered in areas that would otherwise risk being left behind, and where the digital divide could spread (Jaynes et al. 2023). However, a challenge often remains: workers trained according to non-mountainous city policies cannot always be adequately matched to roles in their local communities (Jaynes et al. 2023). Moreover, it would be desirable to develop AI-related education programmes in remote areas that create a new pathway for local businesses and universities to secure emerging talent through direct-hire programmes via educational training and other related projects (Jaynes et al. 2023).
6.2 The Opportunities

The twofold challenge delineated above is of primary importance to limit unwanted outcomes of deploying AI for SDGs. Therefore, solutions need to ensure that the use of AI for SDGs does not (i) impact employment negatively, and consequently the level of poverty and inequalities, and (ii) underdeliver because of a lack of digital skills in the labour market. As for the first challenge, studies on the impact of AI on employment present different outcomes and predictions depending on the sector, the skills, and the parameters identified (Bessen 2018; Acemoglu et al. 2020). The issue of AI replacing humans in specific jobs has long been perceived as a threat (Rajnai and Kocsis 2017),10 but the demand for AI-related labour has increased dramatically in recent years (Alekseeva et al. 2021), not necessarily matched by an increase in the supply of AI-skilled workers. Also, if the population’s occupation and sources of income are managed, AI replacing jobs can improve the quality of life. It can replace us in jobs that we are not willing to do, and it is creating and will increasingly create new occupations (Floridi 2017).

10 On the topic, see ‘Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work and Future of Humanity: A Review and Research Agenda: Computer Science & IT Journal Article | IGI Global’ (n.d.).
The situation is not black or white; rather, there is an opportunity to implement governance mechanisms that favour a transition towards AI integration in different areas without exacerbating inequalities. As mentioned, the role of governments will be crucial in managing employment (Benedetti del Rio 2023). As for the second aspect, governance is also crucial in implementing AI-oriented education policies and ensuring that right-skilling programmes match skills supply to skills demand (Stephenson et al. 2023). Moreover, there are different ways in which AI can be used to facilitate the acquisition of digital skills. We provide two examples. The first, suggested by How et al. (2023), starts from the fact that it is difficult for educators or social scientists who are not trained in computer science to code, implement, or understand AI algorithms. Therefore, they suggest a user-friendly, low-code, human-centric probabilistic strategy that can democratise AI usage, thus allowing analysts who are not computer scientists to use AI for social good. The second concerns the increasing inclusion of AI (specifically, AI for SDGs) in cultural spaces, such as museums (Taurino 2023). Taurino (2023) illustrates how algorithmic art can help frame sustainable futures, arguing that promoting algorithmic design diversity might positively impact inclusive innovations in ethical AI. This, in turn, can stimulate people’s willingness to acquire digital skills.
7 Impact Assessment

7.1 The Challenges

Impact assessments are evidence-based procedures that assess a given factor’s economic, social, and environmental effects. Since they provide a structure that allows for monitoring and measuring the impact of specific actions, they can be crucial tools to achieve the SDGs. AI- and data-based impact assessment can deliver accurate results. However, developing SDGs-related impact assessment is not straightforward. The interconnection between different SDGs at the indicator and target levels creates a high level of complexity (Efremova et al. 2023; Mirghaderi 2023). It is difficult to incorporate all the relevant considerations in algorithms: for example, in finance (and not only), governance mechanisms must also confront the duality of what is considered “good” (Pashang and Weber 2023). AI-driven solutions towards ESG (environmental, social, and governance) factors can be useful to investors when evaluating a firm’s sustainability activities. However, it should be noted that the array of ethical, inclusion, and environmental factors is difficult to integrate and could potentially compromise progress towards the SDGs (Pashang and Weber 2023). Also, accurate SDGs-related impact assessments require a vast amount of data of different kinds that might not be retrievable. And even if retrievable, they
need to be integrated and connected according to the connections between the SDGs, through many layers. Finally, sustainability-related impact assessments have been developed by different stakeholders and in different contexts, but they lack homogeneity, for there is no shared standard. The absence of shared ways of measuring impact hinders the coordination efforts described in Sect. 2. Sustainability impact assessments should ideally ensure equivalency through agreements on standards and certifications (Stephenson et al. 2023).
7.2 The Opportunities

Despite the objective difficulties in achieving a functioning and homogeneous method to perform SDGs-related impact assessments, some solutions can help improve the situation. For example, AI can be used to design a space of action through the Doughnut economics model. The Doughnut consists of two concentric rings: a social foundation, to ensure that no one is left falling short on life’s essentials, and an ecological ceiling, to ensure that humanity does not collectively overshoot the planetary boundaries that protect Earth’s life-supporting systems. Prifti (2023) adds a third dimension to include freedom of determination and choice. Using such a model can help identify AI applications that may violate the ecological ceiling or the social foundation, AI applications that support one threshold but violate the other, and AI applications that support both thresholds but may violate human dignity (Prifti 2023). The Doughnut model, as a conceptual representation of the idea that the outcome of our activities should be subject to these constraints, is useful in the context of impact assessment to design desirable spaces of action. Efforts at measuring the use of AI for one SDG are promising. Gupta and Degbelo (2023) provide an example through the analysis of the contribution of AI to supporting the progress of SDG 11 (sustainable cities and communities). They address the knowledge gap by empirically analysing the AI systems (N = 29) from the AI×SDG database and the Community Research and Development Information Service (CORDIS) database (Gupta and Degbelo 2023). The analysis reveals that AI systems have indeed contributed to advancing sustainable cities in several ways (e.g. waste management, air quality monitoring, disaster response management, transportation management), providing a snapshot of AI’s impact on SDG 11 that is inherently partial yet useful to advance the overall understanding of the impact of AI systems for social good (Gupta and Degbelo 2023). The required combination of data from different sources is a challenge, but solutions can improve the situation. Sirmacek et al. (2023), for example, describe how Earth observation satellites are now acquiring a massive amount of satellite imagery with higher spatial resolution and frequent temporal coverage. These types of big data represent an excellent opportunity to develop innovative methodologies, among others, for urban mapping, for understanding climate change (Sirmacek et al. 2023),
and for sustainable agriculture, as mentioned in Sect. 3 (Efremova et al. 2023). Moreover, creating a data catalogue (Spezzati et al. 2023) and an SDGs index (Mirghaderi 2023) can help advance the status of the data structure quantitatively and qualitatively for future impact assessments. We should not perceive the acknowledgement of the interconnection between SDGs only as a limit to achieving a perfect impact assessment, for it can help to theorise positive connections and monitor them. For example, García-Micó and Laukyte (2023) focus on use cases that evidence the lack of female health data in the development of AI-based medical solutions, explain the link between gender-balanced AI tools in medicine and the SDGs, and show how more gender-balanced and inclusive AI-based medical tools can improve female health and advance other SDGs, such as those related to good health, economic growth, innovation, and reduced inequalities. Finally, monitoring the impact of AI use cases in relation to the SDGs can be done at different levels of granularity. The role of AI and IoT in monitoring single applications contributes to creating a substrate of data that can be useful for higher levels of analysis later. An example is provided by Dziri and Ezzedine (2023), who describe a detailed architecture of a smart grid system aimed at preventing any intrusion into water distribution systems and detecting pollution promptly. Such grids work with a monitoring system for water quality analysis based on machine learning (Dziri and Ezzedine 2023). The data gained through such a system potentially allow us to evaluate the energy consumption of the AI and the level of advancement of the indicators concerning potable water.
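By way of illustration only, the following Python sketch shows one generic way a machine-learning component could flag anomalous water-quality readings from grid sensors using an Isolation Forest; the sensor features, simulated values, and scikit-learn usage are assumptions made for this sketch and do not reproduce the architecture described by Dziri and Ezzedine (2023).

```python
# Generic sketch of ML-based water-quality anomaly detection on sensor data
# (hypothetical features and values; not the Dziri and Ezzedine architecture).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" readings: pH, turbidity (NTU), conductivity (uS/cm).
normal = np.column_stack([
    rng.normal(7.2, 0.2, 500),   # pH
    rng.normal(1.0, 0.3, 500),   # turbidity
    rng.normal(300, 20, 500),    # conductivity
])

# Fit an anomaly detector on readings assumed to reflect clean water.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New batch of readings; the last row mimics a possible pollution event.
new_readings = np.array([
    [7.1, 1.1, 305],
    [7.3, 0.8, 290],
    [5.9, 6.5, 520],
])
flags = model.predict(new_readings)  # -1 = anomalous, 1 = normal
for reading, flag in zip(new_readings, flags):
    status = "ALERT" if flag == -1 else "ok"
    print(reading, status)
```

In a deployed grid, such flags would feed the monitoring layer described above, with the underlying readings retained for later aggregate analysis of the potable-water indicators.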
8 Evaluation and Limitations

This section highlights an underlying “fil rouge” that connects the different topics, acting as an inhibitor of opportunities and an enabler of challenges. This obstacle is the lack of agreement at the international level on shared principles, specifically concerning the recognition of human rights. There are strong connections between human rights and the objectives of the 2030 Agenda for Sustainable Development and the SDGs (Kaltenborn et al. 2020). However, there was considerable disappointment that the SDGs had not reflected the advice provided by global leaders and grassroots activists to keep human rights central to the new development era (Winkler and Williams 2017). While debates may persist regarding the merits of the SDGs approach, the SDGs and human rights share a common centre in their concern for human happiness and well-being (Collins 2018). Human rights norms, standards, and tools can help to inform and guide actions towards these commitments, including how human rights monitoring mechanisms can play a role in tracking progress and providing a space for accountability (Saiz and Donald 2017). However, the phrasing around the rights-related
terms avoids recognition of the obligations of state and non-state duty-bearers and fails to address rights as (legal) entitlements (Williams and Blaiklock 2016).11 Substantial international agreement on human rights, together with recognition of applicable international law, would facilitate both AI governance and SDGs implementation. It could foster effective cooperation between countries and stakeholders and ensure individuals’ entitlements. It would generate protection of communities by default and encourage individuals’ trust in AI governance and political action. A substantial human-rights-first approach (as opposed to a superficial approach to human rights that fails to empower the participation of those already left behind to claim their rights, as described by Williams and Blaiklock (2016)) would facilitate the introduction and acceptance of new AI-related programmes and employment solutions. It would stimulate the adoption of shared practices based on common principles, including reports and impact assessments, that would, in turn, generate more reliable, quality data. The lack of substantial recognition of applicable human rights law can have multiple causes: the lack of political willingness (Vivero Pol and Schuftan 2016), difficulties in communication (Khan and Mishra 2022), cultural barriers (Izugbara et al. 2022), and the soft-law nature of some international conventions on human rights (‘A New Dawn for the Human Rights of International Migrants? Protection of Migrants’ Rights in Light of the UN’s SDGs and Global Compact for Migration | International Journal of Law in Context | Cambridge Core’ n.d.), to name a few. Analysing the root of the problem is outside the scope of this chapter. Our contribution does not aim to provide a conclusive answer as to what governance, ethical, legal, and social challenges and opportunities AI for SDGs initiatives pose; its goal is to offer an overview of priorities to maximise the positive impact and minimise the risks of AI for SDGs initiatives according to the authors of the book. We underline that further substantial recognition of international human rights law by the relevant stakeholders is a key element in working towards the achievement of the SDGs.

11 “Using the discourse of human rights but without reflecting the full intent of human rights promotes a customary usage of the terms that undermines the meaning of ‘human rights.’ While this cannot effect actual State obligations, it can have serious implications for people’s and duty-bearers’ understanding of human rights entitlements, as well as for accountability and civil society monitoring of human rights situations” (Williams and Blaiklock 2016).
9 Concluding Remarks

We provided an overview of six recurring challenges that, if left unaddressed, can hinder the adoption of AI for SDGs solutions, their scalability, and the ethical outcome of their deployment. The enthusiasm towards AI for sustainability should be balanced with a cautious attitude and a realist account. Nonetheless, identifying and acknowledging difficulties should foster the search for feasible solutions. Thus, we
presented and analysed each challenge with related opportunities and AI use cases that can help solve the problem or, at least, improve the situation. However, our categorisation leaves open questions that require further research, such as how to find a balance between group involvement and privacy preservation, how to interpret self-determination, and how to envisage the creation of a Board to maximise the benefit of AI for SDGs, to name a few. Finally, the chapter identified an obstacle to the maximisation of AI for SDGs implementations that is common to the six areas identified, i.e. the lack of agreement on human rights at the international level. Many more steps are needed to maximise the benefits of AI for SDGs.
References ‘A New Dawn for the Human Rights of International Migrants? Protection of Migrants’ Rights in Light of the UN’s SDGs and Global Compact for Migration | International Journal of Law in Context | Cambridge Core’. n.d.. https://ezproxyprd.bodleian.ox.ac.uk:2117/core/journals/ international-journal-of-law-in-context/article/new-dawn-for-the-humanrightsof-international-migrants-protection-of-migrants-rights-in-light-of-the-uns-sdgs-and-global-compactformigration/ 2C4F58FA0E47AB747CDE0BA855A6E4B5. Accessed 26 Apr 2022. Acemoglu, Daron, David Autor, Jonathon Hazell, and Pascual Restrepo. 2020. AI and Jobs: Evidence from Online Vacancies. Working Paper 28257. Working Paper Series. National Bureau of Economic Research. https://doi.org/10.3386/w28257. Adeshina, S.A., and O. Aina. 2023. The Role of Artificial Intelligence in SDG: An African Perspective. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Alekseeva, Liudmila, José Azar, Mireia Giné, Sampsa Samila, and Bledi Taska. 2021. The Demand for AI Skills in the Labor Market. Labour Economics 71 (August): 102002. https:// doi.org/10.1016/j.labeco.2021.102002. Arent, Douglas Jay, ed. 2017. The Political Economy of Clean Energy Transitions. Oxford: Oxford University Press. Arner, Douglas W., Ross P. Buckley, Dirk A. Zetzsche, and Robin Veidt. 2020. Sustainability, FinTech and Financial Inclusion. European Business Organization Law Review 21 (1): 7–35. https://doi.org/10.1007/s40804-020-00183-y. Bayamlıoğlu, Emre. n.d. The Right to Contest Automated Decisions under the General Data Protection Regulation: Beyond the so-Called “Right to Explanation”. Regulation & Governance n/a (n/a). https://doi.org/10.1111/rego.12391. Accessed 17 Feb 2022. Benedetti del Rio, V. 2023. Ethical AI – The European Approach to Achieving the SDGs Through Artificial Intelligence. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Bessen, James. 2018. AI and Jobs: The Role of Demand. Working Paper 24235. Working Paper Series. National Bureau of Economic Research. https://doi.org/10.3386/w24235. Capasso, M., and S. Umbrello. 2023. Big Tech Corporations and Artificial Intelligence: A Social License to Operate and Multi-Stakeholder Partnerships in the Digital Age. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Chen, Qi, Wei Wang, Kaizhu Huang, Suparna De, and Frans Coenen. 2021. Multi-Modal Generative Adversarial Networks for Traffic Event Detection in Smart Cities. Expert Systems with Applications 177 (September): 114939. https://doi.org/10.1016/j.eswa.2021.114939. Chomanski, Bartlomiej. 2021. The Missing Ingredient in the Case for Regulating Big Tech. Minds and Machines 31 (2): 257–275. https://doi.org/10.1007/s11023-021-09562-x.
Circular Economy and Sustainable Development. 2019. Academic. https://doi.org/10.1016/ B978-0-12-815267-6.00006-2. Collins, Lynda M. 2018. ‘Sustainable Development Goals and Human Rights: Challenges and Opportunities’. Sustainable Development Goals, June, 66–90. Cowls, Josh, et al. 2020. How to Design AI for Social Good: Seven Essential Factors. SpringerLink. 2020. https://link.springer.com/article/10.1007/s11948-020-00213-5. Cowls, Josh, Andreas Tsamados, Mariarosaria Taddeo, and Luciano Floridi. 2021a. A Definition, Benchmark and Database of AI for Social Good Initiatives. Nature Machine Intelligence 3 (2): 111–115. https://doi.org/10.1038/s42256-021-00296-0. ———. 2021b. ‘The AI Gambit: Leveraging Artificial Intelligence to Combat Climate Change— Opportunities, Challenges, and Recommendations’. AI & SOCIETY, October. https://doi. org/10.1007/s00146-021-01294-x. Di Vaio, Assunta, Rosa Palladino, Rohail Hassan, and Octavio Escobar. 2020. Artificial Intelligence and Business Models in the Sustainable Development Goals Perspective: A Systematic Literature Review. Journal of Business Research 121 (December): 283–314. https://doi. org/10.1016/j.jbusres.2020.08.019. Dziri, J., and T. Ezzedine. 2023. Smart Control of Drinking-Water Grids Using IoT. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Efremova, N., J. ConradFoley, A. Unagaev, and R. Karimi. 2023. Artificial Intelligence for Sustainable Agriculture and Rangeland Monitoring. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Findlay, Mark, and Willow Wong. 2021. Trust and Regulation: An Analysis of Emotion. SSRN Scholarly Paper. Rochester. https://doi.org/10.2139/ssrn.3857447. Floridi, Luciano. 2011. The Informational Nature of Personal Identity. Minds and Machines 21 (4): 549. https://doi.org/10.1007/s11023-011-9259-6. ———. 2014. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. Oxford University Press UK. ———. 2017. Robots, Jobs, Taxes, and Responsibilities. Philosophy & Technology 30 (1): 1–4. https://doi.org/10.1007/s13347-017-0257-3. ———. 2021a. Introduction – The Importance of an Ethics-First Approach to the Development of AI. In Ethics, Governance, and Policies in Artificial Intelligence, Philosophical Studies Series, ed. Luciano Floridi, 1–4. Cham: Springer. https://doi.org/10.1007/978-3-030-81907-1_1. ———. 2021b. The End of an Era: From Self-Regulation to Hard Law for the Digital Industry. Philosophy & Technology 34 (4): 619–622. https://doi.org/10.1007/s13347-021-00493-0. Forti, M. 2023. A Legal Identity for All Through Artificial Intelligence: Benefits and Drawbacks in Using Artificial Intelligence Algorithms to Accomplish SDG 16.9. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. García-Micó T. G. and Laukyte M. 2023. Gender, Health, and AI: How Using AI to Empower Women Could Positively Impact the Sustainable Development Goals. In The Ethics of Artificial Intelligence for the Sustainable Development Goals, Springer. Ghoreishi, M., L. Treves, R. Teplov, and M. PynnoÃànen. 2023. The Impact of Artificial Intelligence on Circular Value Creation for Sustainable Development Goals. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Ghosh, Sadhan Kumar. 2020. Introduction to Circular Economy and Summary Analysis of Chapters. In Circular Economy: Global Perspective, ed. Sadhan Kumar Ghosh, 1–23. Singapore: Springer. 
https://doi.org/10.1007/978-981-15-1052-6_1. Gill, Amandeep S., and Stefan Germann. 2021. Conceptual and Normative Approaches to AI Governance for a Global Digital Ecosystem Supportive of the UN Sustainable Development Goals (SDGs). AI and Ethics 1–9. https://doi.org/10.1007/s43681-021-00058-z. Goralski, M., and T. Keong Tan. 2023. Artificial Intelligence: Poverty Alleviation, Healthcare, Education, and Reduced Inequalities in a Post-covid World. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer.
Goudarzi, Shidrokh, Mohammad Hossein Anisi, Nazri Kama, Faiyaz Doctor, Seyed Ahmad Soleymani, and Arun Kumar Sangaiah. 2019. Predictive Modelling of Building Energy Consumption Based on a Hybrid Nature-Inspired Optimization Algorithm. Energy and Buildings 196 (August): 83–93. https://doi.org/10.1016/j.enbuild.2019.05.031. Gupta, S., and A. Degbelo. 2023. An Empirical Analysis of AI Contributions to Sustainable Cities (SDG11). In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Hickok, Merve. 2021. Lessons Learned from AI Ethics Principles for Future Actions. AI and Ethics 1 (1): 41–47. https://doi.org/10.1007/s43681-020-00008-1. Holzinger, Andreas, Edgar Weippl, A. Min Tjoa, and Peter Kieseberg. 2021. Digital Transformation for Sustainable Development Goals (SDGs) – A Security, Safety and Privacy Perspective on AI. In Machine Learning and Knowledge Extraction, Lecture Notes in Computer Science, ed. Andreas Holzinger, Peter Kieseberg, A. Min Tjoa, and Edgar Weippl, 1–20. Cham: Springer. https://doi.org/10.1007/978-3-030-84060-0_1. How, M. L., Cheah S.M., Chan Y. J., Khor A. C., and Say E. M. P. 2023. Artificial Intelligence for Advancing Sustainable Development Goals (SDGs): An Inclusive Democratized Low-Code Approach. In The Ethics of Artificial Intelligence for the Sustainable Development Goals, Springer. Hughes, Maria, Reetta Kohonen, Antti Lehtinen, Anu Mänty, Mika Sulkinoja Lauri Byckling, Nina Ahola, Ella Tolonen, et al. 2021. ‘The Winning Recipe for a Circular Economy-What Can Inspiring Examples Show Us?’ Izugbara, Chimaraoke, Meroji Sebany, Frederick Wekesah, and Boniface Ushie. 2022. “The SDGs Are Not God”: Policy-Makers and the Queering of the Sustainable Development Goals in Africa. Development Policy Review 40 (2): e12558. https://doi.org/10.1111/dpr.12558. Jaynes, T.L., B. Abdrisaev, and L. MacDonald Glenn. 2023. Socially Good AI Contributions for the Implementation of Sustainable Development in Mountain Communities Through an Inclusive Student Engaged Learning Model. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Kaltenborn, Markus, Markus Krajewski, and Heike Kuhn, eds. 2020. Sustainable Development Goals and Human Rights. Springer Nature. https://doi.org/10.1007/978-3-030-30469-0. Khan, Shabana, and Jyoti Mishra. 2022. Critical Gaps and Implications of Risk Communication in the Global Agreements—SFDRR, SDGs, and UNFCCC: 3 Select Case Studies from Urban Areas of Tropics in South Asia. Natural Hazards 111 (3): 2559–2577. https://doi.org/10.1007/ s11069-021-05148-z. Kolasa, Katarzyna, Francesca Mazzi, Ewa Leszczuk- Czubkowska, Zsombor Zrubka, and Márta Péntek. 2021. State of the Art in Adoption of Contact Tracing Apps and Recommendations Regarding Privacy Protection and Public Health: Systematic Review. JMIR MHealth and UHealth 9 (6): e23250. https://doi.org/10.2196/23250. Liengpunsakul, Subin. 2021. Artificial Intelligence and Sustainable Development in China. The Chinese Economy 54 (4): 235–248. https://doi.org/10.1080/10971475.2020.1857062. Liiv, I., E. Karo, and R.-M. Soe. 2023. Computer Aided Corporate Sense-Making and Prioritization for SDGs. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Migliorelli, Marco. 2021. What Do We Mean by Sustainable Finance? Assessing Existing Frameworks and Policy Risks. Sustainability 13 (2): 975. https://doi.org/10.3390/su13020975. Mirghaderi, S.H. 2023. 
Artificial Neural Networks Predict Sustainable Development Goals Index. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Mökander, Jakob, and Luciano Floridi. 2021. Ethics-Based Auditing to Develop Trustworthy AI. Minds and Machines 31 (2): 323–327. https://doi.org/10.1007/s11023-021-09557-8. Morley, Jessica, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander, and Luciano Floridi. 2021. Ethics as a Service: A Pragmatic Operationalisation of AI Ethics. Minds and Machines 31 (2): 239–256. https://doi.org/10.1007/s11023-021-09563-w.
Morrison, John. 2014. The Social License. In The Social License: How to Keep Your Organization Legitimate, ed. John Morrison, 12–28. London: Palgrave Macmillan. https://doi. org/10.1057/9781137370723_2. Mulgan, G. 2023. Joined Up Thinking on How Artificial Intelligence Can Contribute to the Sustainable Development Goals. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Ong, L.M., and M. Findlay. 2023. A Realist’s Account of AI For SDGs: Power, Inequality and Artificial Intelligence in Community. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Pashang, S., and O. Weber. 2023. Artificial Intelligence for Sustainable Finance: Governance Mechanisms for Institutional and Societal Approaches. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Pol, Vivero, Jose Luis, and Claudio Schuftan. 2016. No Right to Food and Nutrition in the SDGs: Mistake or Success? BMJ Global Health 1 (1): e000040. https://doi.org/10.1136/ bmjgh-2016-000040. Ponce, Aida. 2020. Labour in the Age of AI: Why Regulation Is Needed to Protect Workers, SSRN Scholarly Paper 3541002. Rochester: Social Science Research Network. https://doi. org/10.2139/ssrn.3541002. Prifti, K. 2023. Missing Circles: A Dignitarian Approach to Doughnut Economics through Artifical Intelligence Applications. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Rajnai, Zoltán, and István Kocsis. 2017. Labor Market Risks of Industry 4.0, Digitization, Robots and AI’. In 2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY), 000343–46. https://doi.org/10.1109/SISY.2017.8080580. Remolina, Nydia, and Mark Findlay. 2021. The Paths to Digital Self-Determination – A Foundational Theoretical Framework. SSRN Scholarly Paper. Rochester. https://doi. org/10.2139/ssrn.3831726. Saiz, Ignacio, and Kate Donald. 2017. Tackling Inequality through the Sustainable Development Goals: Human Rights in Practice. The International Journal of Human Rights 21 (8): 1029–1049. https://doi.org/10.1080/13642987.2017.1348696. Sharma, Nagendra Kumar, Kannan Govindan, Kuei Kuei Lai, Wen Kuo Chen, and Vimal Kumar. 2021. The Transition from Linear Economy to Circular Economy for Sustainability among SMEs: A Study on Prospects, Impediments, and Prerequisites. Business Strategy and the Environment 30 (4): 1803–1822. https://doi.org/10.1002/bse.2717. Shibuya, Kazuhiko. 2020. Conclusion. Digital Transformation of Identity in the Age of Artificial Intelligence: 273–276. https://doi.org/10.1007/978-981-15-2248-2_14. Sierra, Leonardo A., Víctor Yepes, Tatiana García-Segura, and Eugenio Pellicer. 2018. Bayesian Network Method for Decision-Making about the Social Sustainability of Infrastructure Projects. Journal of Cleaner Production 176 (March): 521–534. https://doi.org/10.1016/j. jclepro.2017.12.140. Sirmacek, B., S. Gupta, F. Mallor, H. Azizpour, Y. Ban, H. Eivazi, H. Fang, F. Golzar, I. Leite, G.I. Melsion, K. Smith, F. Fuso Nerini, and R. Vinuesa. 2023. The Potential of Artificial Intelligence for Achieving Healthy and Sustainable Societies. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Spezzati, A., E. Kheradmand, K. Gupta, M. Peras, and R. Zaminpeyma. 2023. Leveraging Artificial Intelligence to Shrink the Data Gap to Advance Research on the Sustainable Development Goals. In The ethics of Artificial Intelligence for the Sustainable Development Goals. 
Springer. Stephenson, Matthew. 2020. How to Attract “Digital FDI” and Sustainable FDI for COVID-19 Recovery? SSRN Scholarly Paper. Rochester. https://doi.org/10.2139/ssrn.3621464. Stephenson, M., I. Lejarraga, K. Matus, Y. Mulugetta, M. Yarime, and J. Zhan. 2023. AI as a SusTech Solution: Enabling Artificial Intelligence and Other 4IR Technologies to Drive Sustainable Development Through Value Chains. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer.
Taddeo, Mariarosaria, and Luciano Floridi. 2018. How AI Can Be a Force for Good. Science (New York, N.Y.) 361 (6404): 751–752. https://doi.org/10.1126/science.aat5991. ———. 2021. How AI Can Be a Force for Good – An Ethical Framework to Harness the Potential of AI While Keeping Humans in Control. In Ethics, Governance, and Policies in Artificial Intelligence, Philosophical Studies Series, ed. Luciano Floridi, 91–96. Cham: Springer. https:// doi.org/10.1007/978-3-030-81907-1_7. Taurino, G. 2023. Algorithmic Art and Cultural Sustainability in the Museum Sector. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer. Tomašev, Nenad, Julien Cornebise, Frank Hutter, Shakir Mohamed, Angela Picciariello, Bec Connelly, Danielle C.M. Belgrave, et al. 2020. AI for Social Good: Unlocking the Opportunity for Positive Impact. Nature Communications 11 (1): 2468. https://doi.org/10.1038/ s41467-020-15871-z. Towards a Green Energy Economy? The EU Energy Union’s Transition to a Low-Carbon Zero Subsidy Electricity System – Lessons from the UK’s Electricity Market Reform. 2016. Applied Energy 179 (October): 1321–1330. https://doi.org/10.1016/j.apenergy.2016.01.046. Truby, Jon. 2020. Governing Artificial Intelligence to Benefit the UN Sustainable Development Goals. Sustainable Development (Bradford, West Yorkshire, England) 28 (4): 946–959. https:// doi.org/10.1002/sd.2048. Vinuesa, Ricardo, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature Communications 11 (1): 233. https://doi.org/10.1038/s41467-019-14108-y. Visvizi, Anna. 2022. Artificial Intelligence (AI) and Sustainable Development Goals (SDGs): Exploring the Impact of AI on Politics and Society. Sustainability (Basel, Switzerland) 14 (3): 1730. https://doi.org/10.3390/su14031730. Wagner, Ben. 2018. ‘Ethics as an Escape from Regulation.: From “Ethics-Washing” to EthicsShopping?’ in Being Profiled, edited by Emre Bayamlioğlu, Irina Baraliuc, Liisa Janssens, And Mireille Hildebrandt, 84–89. Cogitas Ergo Sum: 10 Years of Profiling the European Citizen. Amsterdam University Press. https://doi.org/10.2307/j.ctvhrd092.18. Williams, Carmel, and Alison Blaiklock. 2016. Human Rights Discourse in the Sustainable Development Agenda Avoids Obligations and Entitlements. International Journal of Health Policy and Management 5 (6): 387–390. https://doi.org/10.15171/ijhpm.2016.29. Winkler, Inga T., and Carmel Williams. 2017. The Sustainable Development Goals and Human Rights: A Critical Early Review. The International Journal of Human Rights 21 (8): 1023–1028. https://doi.org/10.1080/13642987.2017.1348695. Ziesche, S., S. Agarwal, U. Nagaraju, E. Prestes, and N. Singha. 2023. Role of Artificial Intelligence in Advancing Sustainable Development Goals in the Agriculture Sector. In The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer.
Joined Up Thinking on How AI Can Contribute to the SDGs

Geoff Mulgan
Abstract This piece provides two frameworks for thinking about the relationship between AI and the global goals. It argues that while AI in all its forms is likely to play an important role in initiatives for achieving the SDGs, the focus in recent years on individual AI applications risks leading to disappointment. First, it situates the question within the broader issue of aligning global R&D to the SDGs. It shares recent data on degrees of alignment and misalignment and the scope for new arrangements to develop clearer pathways. These are emerging in other fields – from food to energy – but AI, and the digital world more generally, are behind. Second, it situates individual AI tools within a framework for mobilizing intelligence to address the SDGs in particular contexts – cities, regions and nations – and shows the multiple useful roles different forms of AI can play.

Keywords AI · SDGs · Collective intelligence · Public policy
1 Background on AI for the SDGs and Changing Landscapes of R&D
For many decades, AI was dominated by military research, surveillance (NSA and equivalents), university research and some commercial investment. In the 2010s, the scale of commercial research exploded. In 1960, a third of all global R&D was funded by the US defence department. This helped it drive through a series of technologies which later had other uses – microprocessors, GPS, touch screens, space launches and satellites. The equivalent figure in 2016 was 3.6%. In the USA, the top five tech firms’ R&D investment is now ten times bigger than that of the top five defence firms.
G. Mulgan (*) UCL, London, UK e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_3
Indeed, in 2019, the USA’s top five tech companies spent $106bn on R&D – more than all of the EU’s governments combined. These have, de facto, become decisive in the global governance of many areas of technology, increasingly joined by a small number of Chinese firms, notably Alibaba, Tencent and Huawei. By the mid-2010s, there was growing interest in AI for good and AI for the SDGs, with a series of conferences, programmes and funds. These looked to the role of AI for pest control, matching refugees to job offers, personalised education, health and many other fields. Microsoft, for example, committed several hundred million dollars a year to ‘AI for Good’ projects. A recent survey concluded that there were potential benefits from AI on 42 of the SDG targets (70%) while negative impacts were reported in 20 targets (33%). However, this was a study of potential impacts rather than existing ones, and current ‘AI for Good’ projects are easily criticised as relatively low impact, non-systemic and marginal to the continued drive to develop AI for commercial purposes or military/security ones. Moreover, they often miss the more strategic issues around data.
2 Misalignment of R&D/STI and the SDGs
One reason for this is a bigger disconnect between global research and the SDGs. Jeff Hammerbacher, former head of data at Facebook, once commented that ‘The best minds of my generation are thinking about how to make people click ads’. But this is part of a much broader pattern. New research from the STRINGS project (Steering Research and Innovation for Global Goals) shows the degree of misalignment within nations and globally. There are skews in terms of where R&D is done (with the vast majority in middle and high-income countries – 90% of SDG-related science, technology and innovation work is published/patented in high- and upper-middle-income countries), where it is directed (with big skews within each field, such as the well-documented skew in pharmaceuticals towards drugs that require repeat prescriptions and are sold in rich countries, a bias that has been partly remedied in recent decades) and skews in how research is done (with a continued bias towards R&D in big firms, universities, etc. and a relative disregard for more frugal and grassroots models, of the kind that have grown up in Shenzhen, India and East Africa, for example). Using STRINGS data, these misalignments can now be analysed at the country level. We also know that investment is heavily concentrated. The five companies with the largest IP portfolios that involve AI together own 14% of the total IP portfolio related to AI, mainly through patents, and these same companies are also the top investors in R&D. In some sectors, there have been some attempts to redress these imbalances through partnerships, alliances and pooled budgets. CGIAR in agriculture is a striking example, in operation for over 50 years. GAVI – and offshoots like COVAX – attempted a similar shift in pharmaceuticals, recognizing that existing R&D and business models impeded the development of effective solutions for the developing world.
Digital industries have been slow to create anything comparable, defaulting to cosmetic initiatives. This has also been true of AI and has encouraged the focus on spot solutions rather than systematic shifts in the direction of R&D. However, for AI, as for other fields, we lack even rough data on how well investment aligns with the SDGs, and commercial activity is becoming more rather than less opaque. In other fields there has been a growing interest in alternative pathways and directions – options for shifting the whole direction of STI, e.g. away from reliance on fossil fuels, or mass-scale agribusiness. Again, there has been much less equivalent work on alternative directions for the fourth industrial revolution (4IR) and AI, with a focus instead on regulations and restraints on cross-border data traffic rather than pathways, e.g. towards either more government-controlled systems (using social credit systems); US commercial models based on data harvesting; or models involving more citizen ownership and control of data and more transparency over algorithms. To address these problems, a minimum set of requirements includes:
• Better data and analysis of current trends to document where investment is happening, what tasks it is being directed to and where the key gaps are.
• Constellations of funders more deliberately aligning funding and support in fields such as AI for public health, AI for education or AI for agriculture. Here, there are useful models to build on which allow for better cooperation between funders and practitioners, including shared data.
• Pooled budgets – in relation to both food and vaccines, the world has learned that pooling budgets can greatly increase impact, particularly if public, philanthropic and commercial funding can be integrated. Again, AI is behind best practice.
• Shared governance – finally, there will be a growing need for shared rules and governance arrangements either at regional or global levels. Despite some progress with initiatives such as the Global Partnership on AI (GPAI), there has been very little serious action in this respect so far, beyond tentative discussion of global charters or rights.
3 AI in the Context of Intelligence for the SDGs
The second reframing proposed is to look at how AI can contribute to greater intelligence for the SDGs, rather than focusing too narrowly on individual AI tools.1 Over the last few years, the UNDP and others have developed a way of thinking about how to mobilise multiple forms of intelligence to aid the SDGs, including use of data from sensors, satellites or mobile phones and open innovation methods to
1 For a serious attempt at mapping the links between AI and the SDGs, using expert consultation, see R. Vinuesa, H. Azizpour, I. Leite et al. ‘The role of artificial intelligence in achieving the Sustainable Development Goals’. Nature Communications 11, 233, 2020.
tap into new ideas. This approach breaks down the task of innovating around SDGs in a particular place into four main elements:
Understanding problems – here the key is to draw on a wide range of sources, from evidence to data of all kinds. AI has a significant role to play in pattern recognition (e.g. pests, disease, mobility), and there are interesting models combining collective intelligence, AI and data. These include Action Insight Data in Uganda, the LICCI project on mobilizing tacit knowledge from farmers to improve climate change models2 and others coping with the broader NLP challenge of dialects and minority languages, or indigenous languages in non-literate cultures (e.g. adjusting BERT and BART for different languages). Another example around human-wildlife conflict is using AI in the modelling of remote sensing data and spatial and temporal characteristics of crop raiding, to predict and map risk (a minimal illustrative sketch of this kind of risk model is given below).
Solution seeking – here the key idea is to look for ideas and answers from a much wider range of sources, whether from business startups or communities affected by problems, inventors or other sectors, making use of tools such as open innovation platforms, challenge prizes, search and recommendation functions, and new forms of citizen science to find a wider range of solutions. Wefarm is a good example that links over a million farmers in East Africa, allows them to post problems by SMS, uses AI to find potential problem-solvers from within the community and then shares this back. In this way, collective intelligence and artificial intelligence support each other.
Decision-making – next comes more use of collective intelligence to guide decisions and then help in implementation of policies. One example is the use of the Polis AI tools in democracy to guide debates towards consensus.
Learning – finally, there is continuous learning to make sense of patterns, using evidence sources such as Microsoft Graph and ‘what works centres’, and platforms to provide feedback and peer learning. A good example of this field is action on corruption, arguably vital for achievement of many other SDGs. There are interesting applications of AI in use, for example, in Mexico, and in some cases collaborations with banks to use AI to spot suspicious behaviour involving officials or politicians. India and South Africa have interesting examples of using AI in tax offices.
Seen through this lens, AI has many roles to play. But the great majority of serious tasks require combinations of human intelligence and AI and explicit frameworks for weaving different types of intelligence together. This quickly becomes apparent when detailed analysis is done of the potential of AI to contribute to specific SDGs.3
2 See https://licci.eu/ and https://thrish.org/
3 See, for example, R. Kwok, AI empowers conservation biology. Nature 567, 133–134 (2019).
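To make the pattern-recognition role concrete, the sketch below shows how crop-raiding risk of the kind described above might be modelled from spatial and temporal features. It is a minimal sketch only, using synthetic data: the feature names, the model choice and the numbers are assumptions for illustration, not a description of any of the projects cited in the text.

```python
# Minimal sketch: predicting crop-raiding risk from spatial/temporal features.
# All data is synthetic; feature names and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features for each field/grid cell and night: distance to forest
# edge (km), crop greenness (NDVI), month, rainfall (mm), raids in the last 30 days.
X = np.column_stack([
    rng.uniform(0, 10, n),      # dist_forest_km
    rng.uniform(0.2, 0.9, n),   # ndvi
    rng.integers(1, 13, n),     # month
    rng.uniform(0, 200, n),     # rainfall_mm
    rng.poisson(0.5, n),        # recent_raids
])

# Synthetic label: raids more likely near the forest, with ripe crops,
# and where raids happened recently.
logit = -1.5 - 0.4 * X[:, 0] + 3.0 * X[:, 1] + 0.8 * X[:, 4]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predicted probabilities can be mapped back onto grid cells to produce a risk map.
risk = model.predict_proba(X_test)[:, 1]
print("AUC on held-out cells:", round(roc_auc_score(y_test, risk), 3))
```

In practice such a model would be trained on remote-sensing layers and reported raid events rather than synthetic data, and the predicted probabilities would drive the risk maps mentioned above.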
4 From Supply Push to Demand Pull
The aim of these methods is to flip on its head the normal approach to technology. Often new technologies – like ML or blockchain – seek out uses. Their designers tend to become fixated on the method. But an alternative approach starts with what intelligence is needed by actors – whether governments, communities or businesses – and then works backwards to bring together what they need, which is likely to include not only data, interpretation and prediction but also evidence on what works, peer knowledge and so on. This leads to what have been called intelligence assemblies or collective intelligence methods. Here the challenge is that few institutions systematically curate intelligence of this kind. But early work is underway applying this thinking, for example, to oceans, combining digital twins, forecasts and citizen engagement in designing and implementing solutions. Key resources such as Copernicus are beginning to move in this direction too. In all of these cases, the key mindset step is to switch from asking ‘How can AI contribute to the SDGs?’ to asking ‘How can we best mobilise intelligence of all kinds to contribute to the SDGs, and what role can AI play within that broader project?’
5 Climate Change, Data and AI
Many of the issues discussed so far become very apparent in relation to climate change. Data and modelling have allowed us to know just how much our climate is changing. For decades, the careful collection of weather data and temperatures in the sea has fed models to analyse, predict and explain the effects of human activities on our climate.4 But it remains unclear what role data and models of all kinds will play in solving the crisis. They could play a big role – but only if we achieve some big shifts in how data is managed away from the commercial proprietary models that currently dominate our economies. Digital things often appear good for the climate: if you Zoom to work rather than commuting, that saves on emissions. But that’s only half the story. Overall digital and Internet activity accounts for around 3.7% of emissions, about the same as air travel. In the USA, data centres account for around 2% of total electricity use. The figures for AI are much worse. According to one estimate, training a single large machine learning model emits a staggering 626,000 lbs of CO2, five times the lifetime fuel use of a car and 60 times more than a transatlantic flight. Some forecasts expect these levels of emissions to rise sharply.5
4 See, for example, Jackie Snow, How Artificial Intelligence Can Tackle Climate Change, NAT’L GEOGRAPHIC (July 18, 2019), https://www.nationalgeographic.com/environment/2019/07/artificialintelligence-climate-change [https://perma.cc/4PJ2-HWPR].
Blockchain, the technology behind bitcoin, is perhaps the worst offender of all, with extraordinary energy use and climate impacts: Bitcoin alone has a carbon footprint roughly equivalent to that of New Zealand. Yet AI can be used to cut carbon emissions, with the biggest opportunities in buildings, electricity, transport and farming. Electricity is probably the most advanced. It accounts for around a quarter of greenhouse gas emissions and is controlled by relatively few companies with big networks. They’ve learned that AI is particularly good for optimising things like electricity grids that have complex inputs – including the intermittent contribution of renewables like wind power – and complex usage patterns.6 One of Google DeepMind’s projects, for example, aimed to better predict wind patterns, and thus generation of electricity, in the USA. AI can also help with traffic flows or bring much greater precision to the management of agriculture, through monitoring and predicting weather or pest patterns. But the digital industries have been slow to engage seriously. Apple was for years notoriously uninterested in environmental issues – and, like the other hardware firms, contributed to the mountains of e-waste that result from the pressure to keep up with the latest iPhone, iPad or Galaxy. Facebook has essentially been silent on environmental issues apart from belated actions like its recent support for a Climate Science Information Center. Amazon’s Jeff Bezos announced $10bn for environmental groups in 2020, having previously been deafeningly silent on the issue. Bill Gates has been more engaged – though with the typical tech view that innovation alone will solve the problems. Microsoft has a somewhat better record, including serious action to handle its historic carbon emissions. The comparison with the world of investment is striking. After years of pressure, big investors reluctantly started to measure environmental impacts and carbon and to shift how capital is deployed, pushed by prominent figures like BlackRock’s Larry Fink and former Bank of England Governor Mark Carney. Investors are still pumping huge sums into carbon-intensive industries: but the debate has shifted. So what would it take for data and AI to play a more central role in getting to net zero? The fuel of AI is data, and here is the first problem. Most of the data that shows what’s happening in energy grids, buildings or transport systems is proprietary, and jealously guarded within companies. To make the most of it and train new generations of AI, it will need to be opened up, standardised and shared. A lot of work is underway on this – including dashboards,7 projects like Carbon Tracker using satellite data to map coal emissions, and the Icebreaker One project8 that aims to enable investors to track the full carbon impact of their decisions.
5 AI the Next Big Climate-Change Threat? We Haven’t a Clue, MIT TECH. REV. (July 29, 2019), https://www.technologyreview.com/2019/07/29/663/ai-computing-cloud-computing-microchips [https://perma.cc/5GMR-TQ6R].
6 David G. Victor, How Artificial Intelligence Will Affect the Future of Energy and Climate, BROOKINGS INST. (Jan. 10, 2019), https://www.brookings.edu/research/how-artificialintelligence-will-affect-the-future-of-energy-and-climate [https://perma.cc/AM3J-DTN8].
7 https://www.c40knowledgehub.org/s/article/C40-cities-greenhouse-gas-emissions-interactivedashboard?language=en_US
8 https://icebreakerone.org/mission/
But these are still small-scale and fragmented, and it will ultimately be political will that opens this data up. If it were organized more as a commons, then it could be used to commission AI that would help whole cities or countries cut their emissions. There’s no shortage of ideas, covered, for example, in a recent overview of uses of machine learning to tackle climate change,9 which gives a flavour of what might be possible, and in sources like the climate change and AI wiki.10 But that just gets us to the next challenge: who will own or govern the data or algorithms? Here there is still a glaring gap. Over the next decade, we may need new and different kinds of data trust11 to curate and share data, sometimes as public-private partnerships in fields like transport and energy (e.g. gathering smart meter data) and sometimes as purely public bodies focused on research. The lack of such institutions is one reason why so many smart city projects, like Google’s Sidewalk Labs in Toronto and Replica in Portland, fail, unable to persuade the public that they’re trustworthy. New rules will also be required. As indicated earlier, there is growing interest in global charters of rights around AI. The EU is working on a comprehensive framework for regulating AI, based on assessments of risk and including bans on some uses such as facial recognition or credit scoring, and China is introducing parallel rules. One option will be to require data sharing – and powers for consumers to share their data with a third party – as a default. Any private entity securing a public license (like provision of a 5G network, Uber or electricity supply, or a supermarket getting local planning permission) would be required, as a condition of that license, to provide relevant data in a suitably standardised, anonymised and machine-readable form. These are just a few of the structural changes now badly needed to build up the digital side of plans to get to net zero and achieve other environmental SDGs.
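As a concrete illustration of what ‘standardised, anonymised and machine-readable’ could mean in practice, the sketch below shows a hypothetical record format that a licensed utility might be required to publish. The field names, the hash-based pseudonymisation and the JSON serialisation are illustrative assumptions only; they are not drawn from any existing standard, including the Icebreaker One work mentioned above.

```python
# Illustrative sketch of a standardised, anonymised, machine-readable data record.
# The schema and pseudonymisation scheme are assumptions for the example, not a real standard.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class MeterReading:
    meter_pseudonym: str     # salted hash of the real meter ID, not reversible by recipients
    region_code: str         # coarse geography only, to limit re-identification risk
    interval_start_utc: str  # ISO 8601 timestamp
    interval_minutes: int
    kwh: float

def pseudonymise(meter_id: str, salt: str) -> str:
    """Replace a raw meter ID with a stable but non-identifying token."""
    return hashlib.sha256((salt + meter_id).encode("utf-8")).hexdigest()[:16]

reading = MeterReading(
    meter_pseudonym=pseudonymise("METER-000123", salt="per-publisher-secret"),
    region_code="UKI3",
    interval_start_utc="2022-03-01T14:00:00Z",
    interval_minutes=30,
    kwh=0.42,
)

# Machine-readable output that third parties (or a consumer's chosen agent)
# could ingest without bespoke parsing.
print(json.dumps(asdict(reading), indent=2))
```

A real scheme would also need governance of the salt, aggregation thresholds and access rules, which is exactly the role this section assigns to data trusts.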
6 Conclusion
There has been a flurry of activity around AI and the SDGs but only limited impact so far. One reason is that most of this is organized in separate and often small-scale projects. We lack good data or analysis of the patterns and gaps. The key lesson of the big platforms is that systematic organization of data – the underlying plumbing – is vital for generating the greatest value from machine learning and other types of AI. We need comparable orchestration of intelligence for the SDGs. And we need increasingly to focus on the broader intelligence needs of the SDGs and to work
9 Tackling Climate Change with Machine Learning, https://arxiv.org/pdf/1906.05433.pdf
10 https://wiki.climatechange.ai/wiki/Welcome_to_the_Climate_Change_AI_Wiki
11 G. Mulgan and V. Straub, The Ecosystem of Trust, https://www.nesta.org.uk/blog/new-ecosystem-trust/
backwards from these, rather than solely working forwards from available technologies. This more strategic approach is fairly mainstream in business but oddly missing in public and philanthropic efforts. Yet it’s usually wise to focus on the outcome you wish to achieve as well as the potential of a particular technology or tool. This is likely to be key to maximising the contribution of AI to the SDGs during the 2020s.
A Realist’s Account of AI for SDGs: Power, Inequality and AI in Community Li Min Ong and Mark Findlay
Abstract Responsible AI design and deployment in alliance with the Sustainable Development Goals needs to be understood in terms of power positioning and vested interests that precede and predetermine the sincerity of ‘AI for social good’. A power analysis is employed to chart the asymmetries of knowledge/information and control enabled by tech companies’ cyberpower, revealing the risks associated with AI technology as another economic dependency regime disproportionately falling on marginalised communities and populations in the Global South. Where the values of tech are misaligned with societies’, this threatens the social and cultural fabric that is vital for resilient societies. The authors introduce the enabling vision of AI in community, proposing to disperse power through the application of AI to contextualise technological sustainability. Power held by Big Tech companies should be dispersed within recipient communities through information sharing and sustainable engagement, so that communities can determine what technology they need for the indigenous purposes they value and prioritise. The notion of safe digital spaces through digital self-determination provides the mechanism for community empowerment. With trusted social bonding at the AI-human interface, AI in community offers a repositioning of tech to serve communities and assist the achievement of the 2030 Agenda for Sustainable Development.
This research is supported by the National Research Foundation, Singapore, under its Emerging Areas Research Projects (EARP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore. L. M. Ong (*) · M. Findlay Centre for AI & Data Governance, Yong Pung How School of Law, Singapore Management University, Singapore, Singapore e-mail: [email protected]; [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_4
Keywords AI in community · Big Tech · Digital self-determination · Inequality · Power · Techno-colonialism
1 Introduction
Our objective in this chapter is to provide a realist’s account of the use of artificial intelligence (AI) technologies for social and economic development. We critically appraise the techno-optimistic narrative that has dominated the global imaginary – with buzz terms like ‘digital transformation’, ‘Industry 4.0’, ‘pro-innovation’ and ‘big data’ symbolising economic and consequently social progress. Our approach is to employ a power analysis to chart the asymmetries of knowledge/information and control (particularly over data access and transaction) enabled by tech companies’ cyberpower when directed towards vulnerable economies or societies, and to challenge the dominant narrative of progress by injecting the essence of AI in community. As has often been raised by decolonial AI commentators, the analysis to follow implicates who designs the technology, who participates in the process, who deploys the technology and obtains access to the valuable ‘big data’ and who is affected by AI decision-making devoid of recipient engagement. Responsible AI design and deployment in alliance with the Sustainable Development Goals (SDGs) needs to be understood in terms of power positioning and vested interests that precede and predetermine the sincerity of AI for social good. The emerging picture reveals the risks associated with AI technology as another economic dependency regime disproportionately falling on marginalised communities and populations in the Global South. The nature of tech power creates and embeds such relationships of dependency; where the values of tech are misaligned with societies’, this threatens the social and cultural fabric that is vital for resilient societies. For the policymaker reading this chapter, our message is that while AI and data have tremendous potential for humanity, sustainable development and structural inequalities are fundamentally social issues and should be addressed as such. We caution against buying into the techno-solutionist approach promoted by companies without first robustly questioning its implications. Otherwise, ‘AI for social good’, like AI ethics, becomes a mask akin to greenwashing in climate politics. We argue that power held by Big Tech companies, through information sharing and sustainable engagement, should be dispersed within recipient communities, so that communities can determine what technology they need for the indigenous purposes they value and prioritise. The enabling vision of AI in community and concomitant pathways to sustainability are the original contribution offered in this chapter that qualifies the AI and SDGs productive alliance. In conclusion, the analysis offers communitarian options that will enable AI to be part of the solution to SDGs achievement and not, through neoliberal global economic infiltration, a covert impediment.
2 AI for Sustainable Development: The Hope, the Hype and the Narrative
AI technologies are being deployed across many sectors, bringing about tremendous benefits particularly in areas such as healthcare (diagnostics and disease surveillance) and public administration (e-government service delivery) (Smith and Neupane 2018). AI can even be used in the agriculture sector (crop disease monitoring) and have a role in fighting climate change through climate modelling and measuring carbon emissions (Kaack et al. 2020). Given its potential widespread application, there is a lot of hope invested in AI for social good, including in helping to achieve the UN SDGs. Early studies, however, have already shown that this optimism should be tempered as the potential impacts of AI on sustainable development can be both positive and negative (Vinuesa et al. 2020). Of concern in this chapter is the discriminatory frame in which AI is situated, wherein vulnerable and marginalised communities are at higher risk of the negative impact of AI deployment (Loo et al. 2021). This has also been recognised by the World Bank, which in its 2021 development report emphasised that ‘Major inequities in the ability to produce, utilize, and profit from data can be found across both rich and poor countries and among the rich and poor people within them’ and that the ‘voice of low-income countries needs to be heard in the global debate on data governance’ (World Bank 2021). Yet, ‘AI for social good’ is a banner held up by tech companies: AI as the means for tackling world challenges of pandemics, climate change and humanitarian crises.1 ‘Pro-innovation’ is the response by governments worldwide as a global AI race is underway, but while governments may acknowledge the potential risks and harms of AI technologies alongside their benefits, they have failed to elucidate a tight understanding of a ‘good AI society’ that connects human responsibility, cooperation and values (Cath et al. 2018). This consequence is symptomatic of techno-optimism, a hype or blind faith placed in technology, espoused by the powerful players in the discourse, crowding out alternative innovation pathways that might meet the needs of communities (STEPS Centre 2010). This analysis is not a blanket social critique of AI technology nor its potential for achieving good. Data that services and is managed through AI can be immensely useful and critical for identifying inequalities in society. Rather, the analysis targets a growing power asymmetry underpinning AI global expansion, what some might describe more critically as a hegemonic project of Big Tech companies (Couldry and Mejias 2021b; Whittaker 2021). If power accumulates and is concentrated in the few tech giants (typically in Silicon Valley but also in China), which are multinational organisations, it is hard not to imagine that they would assert their economic and political influence and priorities internationally primarily for market benefit.
1 The Big Tech companies have their own programmes: see, for example, Facebook (https://dataforgood.facebook.com/), Google (https://ai.google/social-good), Microsoft (https://www.microsoft.com/en-us/ai/ai-for-good) and Intel (https://www.intel.com/content/www/us/en/artificial-intelligence/ai4socialgood.html).
This directly challenges the sustainable development goals, particularly when low- and middle-income countries are in a relatively vulnerable position for exploitation through an expansion of economic or technological dependencies. It is indisputable that the Global South is lagging behind the Global North in its readiness for AI implementation (Oxford Insights 2020), and with global AI expansionism, developing countries arguably value affordability and accessibility over any AI provider’s social ideology (Unver 2021). Eradicating inequality and fostering global cooperation are priorities of the SDGs, especially as the world has struggled to cope with Covid-19 and climate catastrophe looms. To achieve the SDGs in a sustainable way, a highly contextual approach is required – countries need to be able to define for themselves what development and progress are, aligned with what they value to retain and conserve in their domestic sphere. The power asymmetry we have outlined, however, threatens to challenge any locally engaged and owned 2030 Agenda.
if we continue blindly forward, we should expect to see increased inequality alongside economic disruption, social unrest, and in some cases, political instability, with the technologically disadvantaged and underrepresented faring the worst. (Smith and Neupane 2018)
3 Power, Inequality and Techno-solutionism
3.1 Power
It is impossible within this space to examine at length the dimensions of power held by tech companies when compared with vulnerable recipient economies or societies. Tim Jordan’s theory of cyberpower provides a useful conceptual shorthand, propagating cyberpower as the form of power structuring culture and politics in cyberspace and the Internet (Jordan 1999). Within this framework Jordan examined three dimensions of power: (i) power over the individual, (ii) power over the social and (iii) power over the virtual imaginary. In this chapter, we focus on the second. Important for our purposes is the nature of cyberpower (or technopower) and how it creates relationships of dependency in the social. This was described by Jordan in the following terms (Jordan 1999):
• First, technologies supporting cyberspace are constructed according to certain social values but appear as things for use. The oscillation between social values and things creates a power force.
• The structure of technopower is an ongoing spiral: As more information is generated in cyberspace, leading to information overload, this stimulates demand for more tech tools. Offline societies then increasingly depend on and are affected by these tools (cyberspace is the informational space of flows providing essential services to informational socio-economies).
• This technopower spiral means greater freedom of action is afforded to tech people in the context of increasing tech complexity, creating a cyber-elite that dominates individuals’ choices.
In short, ‘Cyberpower of the social is structured by the technopower spiral and the informational space of flows and results in the virtual elite’ (Jordan 1999). This informational space is also profoundly unbalanced in power terms. Those stakeholders operating with information deficit yield up power to the sources and proliferators of data and information power. In the same vein, power accumulates in Big Tech as countries’ information socio-economies mature and become reliant on AI technologies; ‘as the scale of such reliance increases, so will the impact of AI technologies on our shared values’ (Cath et al. 2018). For instance, those developing predictive models are ‘bestowed with the power to decide what “correct” is’ in relation to social constructs like ‘good health’ and ‘good eating habits’ (Birhane 2020), and leaked documents have shown the adverse impact of social media on teenagers’ mental health and wellbeing (Chappell 2021). The global pandemic has shed light on the extent of this reliance: countries worldwide depended on Google-Apple’s contact-tracing technology (Sharon 2020), while consumers turned to Amazon for hand sanitizer, face masks and disinfectants (Palmer 2020) and governments like the UK and Canada relied on Amazon to distribute home testing kits and medical equipment (Liu 2020). Through the saturation and reach of massive social media platforms, AI-assisted information tech can recreate meaning and priorities in the information transfer it enables, overriding communities’ values (such as what amounts to offensive content) – values which often imbue a complex and nuanced understanding informed by the respective culture of communities. As explained in the discussion of AI in community below, AI-assisted information platforms and technologies can either empower or disempower community-cherished meanings. Social media’s profit incentive plays out on at least two levels: at a more granular level, over-moderating or under-moderating content based on what generates revenue; at the next level, magnifying its negative effects in the Global South – for instance, in Afghanistan and Myanmar, Facebook’s systemic lack of language support has allowed extremist language to flourish (Ortutay 2021).
3.2 Design and Deployment of AI and Social Inequality
It is already largely recognised that digital divides and information asymmetries exist between and within countries, and across various social groups (e.g. UN Secretary-General 2020). This has entered the mainstream discourse, particularly as the pandemic has also meant increasing reliance on the Internet for online education. However, unequal access to technology and the Internet has exacerbated discrimination in fundamental areas that are the concern of the SDGs. AI potentially exacerbates pre-existing vulnerabilities and social inequities founded on stark differentials in access to technology.
AI also creates new realms of discrimination through data capture and claims of ownership and sovereignty. AI-assisted tech surveillance in Covid-19 control policies, for instance, has advantaged technologically empowered communities, while the most vulnerable have borne the brunt of oppressive personal restrictions and invasive AI applications (Loo et al. 2021). At the initial design stage, social inequalities can be exacerbated when certain social groups and communities are ignored and excluded from determining the purposes and priorities for designing technology. For instance, children with disabilities may face several barriers to taking advantage of educational opportunities enabled by information communication technologies; such technologies and the content they service and manage may need to be adapted for their specific use (UN Secretary-General 2020). Even if certain technologies were designed with more universal and tailored access and inclusion in mind, pre-existing social inequalities around affordability could mean that technological systems instead perpetuate exclusion, if contextual impediments to access are also not factored into design and deployment. Another example of potential exacerbation of structural discrimination is with digitised identification systems. While the systems might recognise the importance of enhancing the inclusion of marginalised people, cost barriers and complex paperwork could prevent the poor from getting registered, and women facing legal or customary barriers to obtaining identification may fall through the registration cracks (UN Secretary-General 2020). At the application and deployment stage, AI-assisted technology could consciously or inadvertently worsen social challenges. In the humanitarian sector, it has been argued that digital technology and data practices facilitate power asymmetries and recreate colonial relationships of dependency (Madianou 2019); the use of AI-assisted technology has also created additional barriers to claiming asylum (Ong 2021). Where ageism is an issue, deployment contexts raise challenging social questions. For instance, while the use of robots in healthcare can create efficiencies and address manpower shortages, over-reliance on care robots for the elderly may instead create conditions of seclusion and undermine their dignity (UN Secretary-General 2020). The social lens is therefore crucial for understanding AI technology’s potential for sustainable development, particularly when focused on data and classification, so essential for the operation of machine learning in AI. AI is a computing tool which heavily depends on data, but where data is essentially about human dignity and life experience, such data also often represent and build on structural inequalities related to gender, race or class (MacKenzie and Wajcman 1999) (cited in Joyce et al. 2021). Consequently, AI technology could be understood ‘as a mirror for social structures’, one which has reproducing and amplifying qualities (Larsson 2019). Further, ‘big data’ (and associated mass data sharing potentials) also masks disparities in power among social groups and regions of the world (Sapignoli 2021). Conversely, data that are missing, incomplete or prone to error are not always sufficiently factored into AI-based solutions and predictions (Sapignoli 2021). Recognising that data classification is not a neutral exercise, it has been argued to be a ‘deeply moral project often implicated in social stratification’ (Bowker and Star 1999; Fourcade and Healy 2013; Thompson 2016) (cited in Joyce et al. 2021).
Examining the market environment in which Big Tech companies operate, their business models suggest that these companies are not incentivised for human development, as illustrated by Zuboff’s concept of surveillance capitalism (Zuboff 2019). Translating this into addressing equality and non-discrimination, it could be argued that profit-driven logics by their nature target the masses and therefore, by its market trajectory, Big Tech cannot be expected to address inequities (e.g. Birhane 2020), through, say, customised solutions addressing the needs of the minoritised poor who exist outside profitable market predictions. Bearing this in mind, as Whittaker has said, by paying attention to racial capitalism and structural racism, tech critique can move beyond shallow notions of bias to an examination of the centralising power of Big Tech (Whittaker 2021). An examination of this AI divide, the ‘gap between those who have the ability to design and deploy AI applications, and those who do not’ (Smith and Neupane 2018), is useful because power asymmetries are at the heart of discriminatory inequalities. Power is relative, as is equality – achieving equality therefore requires power dispersal and balance. Consequently, when Big Tech companies propound techno-solutionism (Katzenbach 2021), claiming that tech can solve complex social problems deeply embedded in history and traditional neo-colonial contexts, this needs to be robustly questioned. To achieve equality, data is needed to analyse and monitor the differentiated impacts of new technologies; AI can catalyse this process. More importantly, as stated in the UN Secretary-General’s report, access to new technologies ‘needs to be accompanied by measures to promote and protect economic, social and cultural rights, with a specific emphasis on poor and marginalized people to empower them and build their capacity to take full advantage of those technologies’ (UN Secretary-General 2020). Concepts such as participatory design, co-design and ‘design at the margins’ are useful illustrations of communitarian engagement which AI in community embraces. As will be revealed later, AI in community is a levelling-up project. If it is recognised (and actioned upon) that, firstly, community priorities must motivate AI design and deployment, and that, secondly, the social bonds essential for sustainable communities can incorporate AI, the logical conclusion is that technopower needs to be dispersed across human recipients within those community relationships. In the next section, we look at the growing concerns in the scholarship of digital and techno-colonialism, before setting out the enabling vision of AI in community and concomitant pathways to sustainability.
4 Tech Companies Heralding a New Digital Colonialism?
The scholarship on digital and techno-colonialism is useful to this chapter’s power analysis as it illustrates not only how technopower can create and exacerbate inequalities, but it also informs through a historical lens how a new global social
order might be created and shaped by the techno-elite. It has been warned that developing countries are in a precarious position from specific risks in the deployment of AI – they are more vulnerable to disinformation, inequalities and human rights violations (Pisa and Polcari 2019) (cited in Muñoz et al. 2021). Within this frame, initiatives by tech giants to provide Internet access in the Global South may be considered not so much benevolent initiatives for sustainable development as techno-colonialism through creating relationships of dependency over the ‘Next Billion Users’ (Birhane 2020). Digital colonialism has been postulated by Kwet as enabled at the architecture level of the digital ecosystem, through the centralised ownership and control of its key pillars: software (code is law (e.g. Susskind 2018)), hardware and network connectivity (Kwet 2019). Consequently, Big Tech corporations also control computer-mediated experiences, ‘giving them direct power over political, economic, and cultural domains of life’ (Kwet 2019). Facebook’s Free Basics service, for example, could be a tool used to undermine local information sovereignty in the Global South as the tech giants, through control of critical information infrastructure, have ‘the power to regulate the press, speech, and association in foreign territories, as they see fit’ (Kwet 2019). Data colonialism has been identified as ‘an emerging order for the appropriation of human life so that data can be continuously extracted from it for profit’ (Couldry and Mejias 2021b). Thereby, human life is ‘annexed’ to capitalism (Couldry and Mejias 2021b; Zuboff 2019) in the most fundamental data transactions, enabling data extractivism in the Global South to fuel Global North economies (Freuler 2019). The general consensus among AI decolonial authors is that the structural legacy of colonialism lives on in terms of power, race and knowledge and that digital or data colonialism has the same function as historical colonialism: to dispossess. The trajectory is that of exploration, expansion, exploitation and, finally, extermination (in the form of race, class violence) (Couldry and Mejias 2021a). Notably, researchers within Big Tech have also raised these concerns (Mohamed et al. 2020). Consequently, the move by Big Tech companies to launch exclusive submarine Internet cables is met with intense criticism (Freuler 2019). Content providers like Microsoft, Google, Facebook and Amazon now own or lease more than half of the undersea bandwidth (Satariano et al. 2019). Further, the political and social impacts of this digital transformation or tech colonialism are likely to occur quickly, given the rapidly evolving nature of digital technologies (Sahbaz 2019). From a sustainable development standpoint, the import of technology by techno-elites into the Global South without contextualisation can be particularly harmful for technologically fragile communities as it means that the values and ideals of the techno-elites are enforced through the mysticism of technological superiority (Birhane 2020). Arthur Gwagwa illustrates how this could threaten the social fabric of African communities: traditional social gatherings during the harvest could be disrupted by the advent of automation in agriculture or food delivery apps (Gwagwa et al. 2021).
4.1 Relationships of Dependency Affecting Vulnerable Communities
The scholarship on techno-colonialism also sheds light on the relationships of dependency created in vulnerable communities where AI is deployed. For example, with Facebook (now Meta) providing free access to a number of websites including job boards and healthcare portals in Africa through its Free Basics initiative, Facebook has now become the Internet for many Africans (Ajayi 2021). Despite reactive campaigns such as #DeleteFacebook, arguably only those in privileged positions in society can afford to bypass the free access services – for people particularly in marginalised groups such as the disabled, ‘social media is a lifeline – a bridge to a new community, a route to employment, a way to tackle isolation’ (Ryan 2018). In this way, power creates inequitable relationships of dominion over dependency and obligation. AI deployment into ill-prepared and under-resourced social and economic community locations, without conscious countermeasures to ensure power displacement, will perpetuate a currently existing technological divide. Domination of the digital ecosystem allows tech companies to maintain ownership and control of the data society and build dependency into vulnerable communities (Kwet 2019). It could simultaneously impoverish the development of local products in the Global South (Birhane 2020), for most developing countries have limited, if any, leverage over large Internet companies, given their small size and low income per capita (Pisa 2019). Differential data collection capacity among countries can also exacerbate the AI divide, as it makes developing countries less competitive and leads to potential monopolisation of AI technologies (Muñoz et al. 2021). It has even been argued that reliance on Big Tech can be seen in academia and research (Whittaker 2021). Big data also enables power centralisation: it includes decision-making protocols favouring the techno-elite and is as such implicated in global economic and social inequality (Noble 2018) (cited in Joyce et al. 2021). What the relationships of dependency enabled by technopower also show is a state of embeddedness of power. Its ‘capture of the commons’, argued Kate Crawford, has been enabled by myths such as data collection as a benevolent practice, which obscures Big Tech’s operations of power and their consequences (Crawford 2021). This means that certain technologies could be embedded in societies even before questions surrounding who benefits from what sorts of innovation are asked, let alone answered (Stilgoe 2019). The global AI regulatory infrastructure is poorly equipped to address insidious discrimination and the exacerbation of power divides. Based currently as it is on ‘top-down’ ethical compliance models, whose self-regulatory reach is in the hands of Big Tech, it would be difficult even for the most egalitarian and aware sponsors of AI and data expansionism to factor in countermeasures that negatively influence their profitability. A more impactful regulatory frame, as is argued later, involves a recognition of the value of data generated in vulnerable economies and, from this
position, a conscious repositioning of data management into the hands of communities.
4.2 Problem with AI Ethics
Top-down, ungoverned self-regulation is the current AI regulatory model, operating in societies and economies where neoliberal individualism tags AI to economic growth (Bughin et al. 2018; Szczepański 2019). This style of ungoverned self-regulation complements the imperial intentions of Big Tech by enabling an appearance of responsible design and deployment while at the same time avoiding genuine external accountability and explainability to vulnerable recipient populations and economies. Further, this allows Big Tech to ‘co-opt and neutralise’ critique against them not only by denigrating research they find threatening, but also by funding their weakest critics, ‘often institutions and coalitions that focus on so-called AI ethics’ (Whittaker 2021). In lower-income economies, where communal interests often prevail, understandings of regulation through ethical principles may vary from those espoused in the current AI ethics model (Findlay and Wong 2022). While Big Tech has recently called for regulation of its sector, this need not necessarily come from altruistic intentions. As Jack Stilgoe puts it:
The optimistic version is that there is a reputational benefit that companies can get from being ahead of the curve, [t]he pessimistic account would be that they are in reputational trouble and that they need to take proactive measures. (Condliffe 2020)
Further, Big Tech can steer the course of regulation by perpetuating the narrative that one cannot set the rules in regulation unless one understands the technology, which Stilgoe says is how regulation is taken ‘out of the democratic domain and put it in the technocratic domain’ (Condliffe 2020). Thus, given the pervasiveness of AI technologies across sectors, some authors have even called for AI to be recognised as a public utility (Canazza 2016; Liu 2020). Having illustrated to this point how the themes of power, inequality and techno-colonialism interconnect, the next section looks at how the questions of innovation are defined and driven, in the exploration of pathways to sustainability.
5 Pathways to Sustainability
5.1 Defining Innovation
The earlier sections on power and techno-colonialism demonstrate that if technopreneurs have control over defining both the problems and the solutions, starting conversations on innovation with technologies rather than the problems they are meant to address (Stilgoe 2019), this could put tech development and deployment
on a divergent trajectory from sustainable development. The already-identified power dynamics shaping the direction of innovation could lead to the closure of alternative deployment and application pathways, excluding marginalised groups in vulnerable societies. Unsurprisingly, the negative impacts of technology tend to fall disproportionately on marginalised communities (STEPS Centre 2017). While the range of pathways for the development of technologies can be wide, commercial players (and others) push for an ‘optimal’ solution that best represents their economic interests, whatever the ancillary objectives might be, obscuring alternatives (STEPS Centre 2010). This underscores the importance of contextual approaches to innovation, ensuring that innovation respects the needs of the recipient community and that progress enabled by tech is aligned with the community’s vision. So understood, ‘high tech’ will not be the best solution in every situation of social and economic need, if unsustainable dependencies accompany such transitions. For instance, in order to include marginalised communities in online social or education initiatives, content creators may need to ensure that their HTML code is built for slower-speed connectivity (Armour 1997), despite the desire for cutting-edge universalism. Therefore, pathways to sustainability involve challenging narratives that ‘frame issues of tech power and dominance as abstract governance questions that take the tech industry’s current form as a given and AI’s proliferation as inevitable’ (Whittaker 2021). As the Social, Technological and Environmental Pathways to Sustainability (STEPS) Centre emphasised, the direction of innovation matters because ‘it shapes the distribution of benefits, costs and risks from innovation’ (STEPS Centre 2010). In this section, we borrow heavily from the research work of the STEPS Centre and draw together the case for power dispersal and grassroots innovation.
5.2 Theories and Systems Change
In Transformative Pathways to Sustainability, a recent work by the STEPS Centre and its partners, the authors concluded that ‘in policy contexts, narratives that appear to reduce uncertainty tend to be favoured and become dominant, even if they are inaccurate, perhaps because they can lead to clearer plans for action (Roe 1994)’ (cited in Marshall et al. 2021). Thus, the framing of sustainability challenges can ‘look entirely different depending on the perspective from which they are viewed, recognising the social interactions and politics of knowledge that impact on that perspective’ (Marshall et al. 2021). Visiting a theory of change can help to influence what is feasible in particular contexts of transition such as technological expansionism, thereby ‘leading to amendments and guiding future interventions and initiatives’ (Marshall et al. 2021). Theories of change in the scholarship can be classified under two broad heads (Ely 2021):
• Literature focussing on socio-technical system transitions, especially those adopting a transition management perspective (Grin et al. 2010), whereby the government is a central actor in bringing about changes in a way that fosters key sustainability objectives – in other words, controlled transitions.
• In contrast, transformations may be seen as ‘more plural, emergent and unruly political re-alignments, involving social and technological innovations driven by diversely incommensurable knowledges, challenging incumbent structures and pursuing contending (even unknown) ends’ (Stirling 2015). Under this perspective, ‘the role of government is less central, and greater agency is attributed to civil society’ (Ely 2021).
The discussion of AI in community, which completes this brief review, advocates community empowerment in such processes of fundamental transition, where trusted social bonds that sustain healthy communities can incorporate technological incursion without sacrificing communal and individual integrity. To achieve this outcome, a more equitable power dispersal must accompany technological expansionism so that any unruly and unpredicted disruptions visited through technological advance can be subsumed within a prevailing consciousness that technology is just another pillar for community inclusion in development agendas. Our concept of AI in community and digital self-determination (both outlined below) aligns with the latter approach to transformation, observed by Ely. Drawing from the UNDP’s Human Development Report 2020, ‘We must critically examine the crucible of human values and institutions – specifically the way power is distributed and wielded – to accelerate implementation of the 2030 Agenda for Sustainable Development for people and planet’ (UNDP 2020). Further, human development is urgently required to navigate challenges like the Anthropocene – ‘humanity can develop the capabilities, agency and values to act by enhancing equity, fostering innovation and instilling a sense of stewardship of nature’ (UNDP 2020). Communities must play a central role in how AI and SDGs align for their benefit. While government involvement has its place, as many states which will be targets for SDG advancement are dysfunctional, focussing on communities is more likely to distribute the benefits of innovation in a more equitable manner (recognising sensitive social strata at the local level) and to act as a bulwark against the risks of technological discrimination which tend to fall disproportionally upon marginalised groups, to ensure safe digital spaces for all in which to learn about how to enable technology for indigenous needs. Further, while individuals may not be ‘the locus of truth, values or culture’ (Sloane and Moss 2019), communities built over time and generations can more readily prioritise values that need embedding, if these communities are offered safe digital spaces in which to congregate and communicate. There is insufficient space here to develop the mechanics of community empowerment and activism beyond model assertions, but ethnographic and anthropological traditions have long revealed the communication and assimilation networks that thrive even in the simpler community frames. What then is the pathway to sustainability where technological transition in vulnerable societies or communities is concerned? How can we integrate innovation, technology and sustainability?
While there is no one way to achieve this, we borrow inspiration from the STEPS Centre’s 3D innovation agenda, which embraces three pillars: directionality (of pathways towards specific sustainability objectives), distribution (a more equitable distribution of benefits, costs and risks associated with innovation) and diversity (in socio-technical systems, in order to mitigate lock-in, build robust and resilient systems and cater for seemingly irreconcilable perspectives on value and sustainability) (STEPS Centre 2010). As has been raised by others, the imaginary is crucial to inspiring communities – we need a positive vision to inspire systems change. Concepts like storytelling, deep listening and knowledge co-production will be relevant for social sustainability in any digital transformation that has community enhancement as a driver.
5.3 The Role of AI in Solutions

Once the risk of power dependency is front and centre, and recognition is given to the commercial imperative that AI development is a market endeavour with wealth creation at its heart, the location of AI in the community can stimulate a profound reconfiguration of this new technological epoch. This chapter argues that through a process of community empowerment employing AI as a tool for economic and social power dispersal, we can achieve SDG 10. While tech is power, so are education and communication (Jordan 1999). Along with these societal essentials for embedding technology is the significance of personal and communal data as an inducement for the promoters of AI to advance community inclusion and sustainability above dependency when it comes to tech advancement for social good. Not only are specific communal considerations essential for sustainable alliances between AI and the SDGs, recognition of the compatible positioning of global community interests is also essential and will require a critical re-appraisal of neoliberal economic imperatives. Globalisation and technology have been captured by the forces of neoliberal exclusionism that have produced an anxious and divided world devoid of promise (Findlay 2021). Populist politics often shields neoliberal excess behind a masked attack on globalisation and internationalism when in fact, without globalised engagement directed against neoliberal exceptionalism, the achievement of the SDGs would be unlikely. Globalised AI2 and community-empowered data management (which will be developed later in the discussion on digital self-determination) can offer responsible engagement between data stakeholders at the local-global interface and span profound divides currently retaining the value and
2. Global engagement provides the possibility to bridge (or maybe close) the AI divide if that engagement is premised on the SDGs, equitable interaction and not dependency relationships which are inherent in a neoliberal exclusionist notion of globalisation. Simply, power imbalances can be exacerbated if globalisation is inextricably promoting wealth for the few. On the other hand, globalisation engagement grounded in the SDG ideology will expose neoliberal imbalance.
valuing of AI as a North world domain, and data as the South world’s bargaining chips. Whether or not AI is a tool of neo-colonialists (the warning is useful), the tools themselves can be used for positive social purposes, bridging inequalities. Accepting the relevance of the metaphor ‘the Master’s tools can be used to break the Master’s house’ (Couldry and Mejias 2021a), the same tools can thus be used for activism, such as on social media platforms and information looping to open up understandings of the scope of secondary data usage. With information asymmetries fundamentally impeding actual and empowered community engagement, the larger context for community repositioning of the technological locus is recognising the limits of knowledge, including knowledge of the challenges (STEPS Centre 2017). This builds the case for power dispersal towards communities: not only will communities have access to data, but accountability and transparency in the operation of AI in specific communities will also be improved, so that once communities possess or have access to the location, use and value of their data, such practical awareness benefits the underrepresented and not just the elites (Gwagwa et al. 2021, 3). Thus, limits of knowledge notwithstanding, building robust and resilient systems of technological application and engagement in this way is key to achieving sustainability objectives for any proposed AI-SDG alliance. With this power dispersal, access to quality data and the use of various technological and AI developments would hold potential benefits for the Global South in general (Birhane 2020). Ideas like the cooperative ownership of AI by users (Scholz 2016) and democratisation of AI3 can stimulate this conversation. Among other things, AI technologies need to be affordable, user-friendly and explicable, robust and resilient, capable of timely employment, and able to provide solutions from which the participants can draw informed choices (Findlay 2020). Having revealed AI’s divisive and reconciliatory potentials, the following sections focus on communities in which AI can be deployed to promote trusted social bonds for tackling inequality through greater access to essential social data. What becomes apparent in our analysis is that AI can be part of the solution as well as the problem when approaching power asymmetries in the global economy.
3. ‘AI democratisation means making it possible for everyone to create artificial intelligence systems… potentially without requiring advanced mathematical and computing skills’ (Mark Riedl, Georgia Tech, quoted in https://eandt.theiet.org/content/articles/2021/11/access-for-all-thedemocratisation-of-ai/).

6 Digital Self-Determination

In the final sections of this brief coverage, the analysis presents a snapshot of two concepts: digital self-determination and AI in community. Both are crucial for the essential power dispersal which will guard against AI design and deployment
proliferating dependency relationships that challenge the SDGs, and for repositioning vulnerable human recipients (data subjects), and their communities, to better bargain for the responsible use of data and the sustainable application of AI technologies. Both concepts are complex but can also be simply understood. Digital self-determination essentially requires the creation of safe digital spaces where (i) data subjects and their communities can be empowered to access and manage their data, while (ii) market players who are interested in the data approach it with genuine respect for data subjects’ dignity (Remolina and Findlay 2021). This environment for data use may sound aspirational but as developments in open finance with data portability at their core have revealed, big data harvesters, and those who otherwise claim data ownership, are becoming more amenable to access regimes that are less contested and conflictual (e.g. Remolina 2019). Digital self-determination does not follow the language of data rights, data ownership nor data sovereignty. Instead, it offers a different contextual mode of data governance that recognises that data are principally messages from humans to humans and as such represent the digitising of life experience. As such, this provides the potential for integrating different types of knowledge, including social and cultural elements, into technology development, so that once again technology respects rather than overrides human values and judgement (Sloane and Moss 2019). Without the space to detail how safe digital spaces offer possibilities for data subjects and their specific communities to manage and transact their data in particular transactional contexts (market or social), it is sufficient to see this as a conscious dispersal over the power that data control offers. From a market perspective, one reason why Big Tech might pragmatically be attracted to an alliance with the SDGs Agenda is the prospect of massive data pools that can be tapped under the guise of social good. However, if digital self- determination is the governing regime which Big Tech (and its market expansion imperative) is subjected to, then the human essence and dignity of data for AI-assisted technologies will be preserved. Data will not be extracted out of its human context and alienated from data subjects and communities during the data gathering and classification stages, for example. Consequently, the power of the data subject and his/her bargaining position in the market is elevated simply through the recognition of their prominent position in data transactions. This vision is realised through the creation of safe data spaces where personal data can be easily and communally managed, empowering data subjects and their data communities. Underpinning digital self-determination is the necessity that data subjects should be made aware of who uses their data, when and how. The commodification of secondary data has massive market potential but unregulated it can deeply disempower data subjects and their communities (Choo and Findlay 2021). In our centre’s earlier work we have proposed regulatory models which are inclusive and participatory, wherein vulnerable stakeholders such as data subjects can be empowered through AI-information looping to better position their interests in the commodification of their data, which after all is a massive profit motivation for AI expansionism (Findlay and Seah 2020). 
Translated into the context of vulnerable economies or societies, if the value of ‘data gold’ is more fairly distributed across data ecosystems, then
benefits can flow to all; the marketising of personal data can proceed but only insofar as it is compatible with the priorities and purposes of data subjects and their communities. Digital self-determination is gaining traction both as an ethical pathway for data emancipation and a responsible process for data access. If it is built in as a precondition for Big Tech participating in AI expansionism, then a more communitarian agenda for AI deployment and data use becomes possible.
7 AI in Community for Communal Empowerment

AI in community is then proposed as the deployment context that minimises negative consequences of tech rollout in vulnerable economies or societies by giving priority to human recipient communities and fostering trustworthy human-AI relationships. Power dispersal is made possible by conceiving AI as a partner in community relationships which sustain equitable social bonds. By providing the resources and capacity, AI can facilitate development without decimating the social fabric which makes communities resilient in times of global crises. The concept of AI in community relies on three assumptions:
• Sustainable communities are bonded through relationships of trust.
• Individuals within these communities, as recipients of AI and as active participants empowered by digital self-determination, can create relationships with the technology through the embodiment of the intentions of those who design and deploy it (Findlay and Wong 2021).
• Such communities and relationships are chosen by their members.
In these straightforward statements reside many complexities (drawing from Cotterrell 1997) which require elaboration if an active understanding of AI in community is to be fully appreciated. That said, it remains possible to employ the concept for the purposes of arguing a communitarian location, purpose and responsibility for AI. AI in community endeavours to bridge the divide between design and deployment using trust. It should not be assumed that AI in community reflects any discourse about trustworthy AI or technology, or the certification of safety standards, even though these may complement the initiation and maintenance of trust, along with other more empathetic variables on which relationships of trust rely. AI in community also encompasses the data on which AI technology depends. Data subjects and their communities require safe digital spaces within those communities to contemplate and foster relationships of trust founded on informed access to and management of their data. Much of the alienation that currently exists between humans and AI technology can be traced back to fundamental information deficits suffered by data subjects when confronting the purposes and priorities of AI applications.
Proposed in this manner there seems little to argue against AI in community, until the hegemony of AI and big data exploitation is confronted. One of the reasons why ethics as a regulatory model is well supported by Big Tech is revealed in the expectation that some form of ethical compliance will produce trust in recipient communities without the sponsors of AI technology and the harvesters of big data having to divest any of their control and power over design, deployment and exploitation. In this model, AI and big data remain attached to market imperatives, removed from communitarian priorities. AI in community is more than a physical repositioning; it is a repositioning of priorities, purposes and values. If it were limited to the former, the power asymmetries currently underpinning AI-community interaction would not necessarily be confronted. As long as individual recipients and their communities are viewed by AI sponsors through a neoliberal market lens, as clients, customers and consumers, and as data raw material for extractive market value, the purposes and priorities of the community will be negotiated in market terms, with profit as an inextricable determinant of deployment. The essence of AI in community therefore requires a reversal of perspective. AI is no longer viewed as something which is transplanted into communities based on some external measure of benefit. Rather, AI in community envisages that individual and collective recipients will determine whether (or not) AI applications fulfil their purposes and priorities. The manner in which such determinations are made will depend on the community members concerned and their need(s) to which AI and big data could be directed. In recognition of knowledge limitations (Sect. 5.3), communities may perhaps seek the assistance of technically competent advice (or other options), but the choice remains with the community. Trust comes into play when a community need has been identified and technical solutions and deployment contexts or applications have been chosen. AI in community requires that the community’s trust be informed, genuine and sustainable, even if an AI application is accompanied by robust compliance to ethics. For such trust to be generated and maintained, the community must be able to comprehend and discriminate the functionality of AI and big data use (although this does not mean that the community should trust technology beyond their understanding). Therefore, the purposes and priorities behind what AI does are set by communities and the responsibility to follow that through then rests with AI designers and deployers to merit trust through the achievement of determined purposes and priorities. AI in community is as such a recognition of the active role of technology and data in the creation and maintenance of trusting social bonds with the community. In other works on smart cities and storytelling, we propose the example of how AI-assisted information technologies and communal, open data access can facilitate, curate, communicate and conserve the oral histories that are at the heart of neighbourhood identity (Findlay and Ong 2022). In this example, by grounding technology in the community, cultural storytelling can be embedded within the urban knowledge infrastructure through communication and information pathways enabled by AI technology and open data. In this way, AI and big data become
partners with the community to help strengthen its social fabric through historical understanding of neighbourhoods as they transition in urban development. We propose that AI in community stimulates a positive imaginary of AI for sustainable development and that this vision is possible with the intrinsic capacity of humans to cooperate through communities. The human impulse to build relationships with others and to form communities is an inherent driver of the success of new digital technologies. (Rheingold 2000)
8 Conclusion

As shown in this chapter, AI poses genuine risks associated with technological dependencies, especially to vulnerable middle- and low-income economies or societies. Global technopower is a domineering force described by some as colonialism. When the benefits of such deployment to these marginalised economies or societies are said to override these debilitating risks, but are in fact differentially tied to profit-driven deployment outcomes redolent of global trading and wealth disparities, sustainability is even more fragile (Smith and Neupane 2018). What needs constant underscoring is that, as technologies affect different people in different ways, the design and application of new technologies will need to take into account individual needs and unique deployment contexts (UN Secretary-General 2020, para. 21). This task is urgent, as it is not just the rights and dignity of individuals at stake but also the social cohesion of communities. Our proposal is to disperse power through the application of AI to contextualise technological sustainability. More specifically, we employ the notion of safe digital spaces through digital self-determination to provide the mechanism for community empowerment. Having demonstrated the potential of digital self-determination and revealed this through a particular application of AI in community, we have shown that AI has the power to assist social sustainability. Although space does not permit a full rehearsal of our work on AI in community and trusted social bonding at the AI-human interface, we emphasise that AI in community offers a repositioning of tech to serve communities and assist in the achievement of the 2030 Agenda for Sustainable Development.
References

Ajayi, Rosemary. 2021. Deleting Facebook: In Africa, It’s Not so Simple. The Continent, November 27. https://media.mg.co.za/wp-media/2021/11/93a15cc3-thecontinentissue66v1.pdf.
Armour, Polly Jean. 1997. Standing Stones in Cyberspace: The Oneida Indian Nation’s Territory on the Web. Cultural Survival Quarterly Magazine, December. https://www.culturalsurvival.org/publications/cultural-survival-quarterly/standing-stones-cyberspace-oneida-indian-nations-territory.
Birhane, Abeba. 2020. Algorithmic Colonization of Africa. SCRIPTed 17 (2): 389–409. https://doi. org/10.2966/scrip.170220.389. Bowker, Geoffrey C., and Susan Leigh Star. 1999. Sorting Things Out: Classification and Its Consequences, Inside Technology. Cambridge: MIT Press. Bughin, Jacques, Jeongmin Seong, James Manyika, Michael Chui, and Raoul Joshi. 2018. Notes from the AI Frontier: Modeling the Impact of AI on the World Economy. McKinsey & Company. https://www.mckinsey.com/featured-insights/artificial-intelligence/ notes-from-the-AI-frontier-modeling-the-impact-of-ai-on-the-world-economy. Canazza, Mario R. 2016. The Internet as a Global Public Good and the Role of Governments and Multilateral Organizations in Internet Governance, July. https://www.academia.edu/36553078/ The_Internet_as_a_Global_Public_Good_and_the_Role_of_Governments_and_Multilateral_ Organizations_in_Internet_Governance. Cath, Corinne, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo, and Luciano Floridi. 2018. Artificial Intelligence and the “Good Society”: The US, EU, and UK Approach. Science and Engineering Ethics 24 (2): 505–528. https://doi.org/10.1007/s11948-017-9901-7. Chappell, Bill. 2021. The Facebook Papers: What You Need to Know About the Trove of Insider Documents. NPR, October 25, sec. Media. https://www.npr.org/2021/10/25/1049015366/ the-facebook-papers-what-you-need-to-know. Choo, Mabel, and Mark Findlay. 2021. Data Reuse and Its Impacts on Digital Labour Platforms. https://doi.org/10.2139/ssrn.3957004. Condliffe, Jamie. 2020. Big Tech Says It Wants Government to Regulate AI. Here’s Why. Protocol – The People, Power and Politics of Tech, February 12. https://www.protocol.com/ ai-amazon-microsoft-ibm-regulation. Cotterrell, Roger. 1997. Law’s Community: Legal Theory in Sociological Perspective, Oxford Socio-Legal Studies. Oxford: Oxford University Press. https://doi.org/10.1093/acprof: oso/9780198264903.001.0001. Couldry, Nick, and Ulises Ali Mejias. 2021a. The Costs of Connection: How Data Is Colonizing Human Life & Appropriates It for Capitalism, February 27. https://www.youtube.com/ watch?v=54_aftTZxWI. ———. 2021b. The Decolonial Turn in Data and Technology Research: What Is at Stake and Where Is It Heading? Information, Communication & Society: 1–17. https://doi.org/10.108 0/1369118X.2021.1986102. Crawford, Kate. 2021. Data. In The Atlas of AI, Power, Politics, and the Planetary Costs of Artificial Intelligence, 89–121. Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t.6. Ely, Adrian. 2021. Transformations: Theory, Research and Action. In Transformative Pathways to Sustainability: Learning Across Disciplines, Cultures and Contexts. London: Routledge. https://doi.org/10.4324/9780429331930. Findlay, Mark. 2020. AI Technologies, Information Capacity, and Sustainable South World Trading. In Artificial Intelligence for Social Good. Keio University and APRU. https://apru. org/apru-releases-ai-for-social-good-report-in-partnership-with-un-escap-and-google-report- calls-for-ai-innovation-to-aid-post-covid-recovery/. ———. 2021. Globalisation, Populism, Pandemics and the Law | The Anarchy and the Ecstasy. Edward Elgar. https://www.e-elgar.com/shop/gbp/globalisation-populism-pandemics-and-the- law-9781788976848.html. Findlay, Mark, and Ong, Li Min. 2022. ‘Reflection on Wise Cities and AI in Community: Sustainable Life Spaces and Kampung Storytelling’. In Vol. 1. ASEAN Law Research Network E-Paper Series. Centre for Commercial Law in Asia, Singapore Management University. https://ccla. 
smu.edu.sg/sites/cebcla.smu.edu.sg/files/asean-perspective/2022-03/SMU%20ASEAN%20 Perspectives%20-%20Paper%2002%3A2022.pdf Findlay, Mark, and Josephine Seah. 2020. Data Imperialism: Disrupting Secondary Data in Platform Economies Through Participatory Regulation. https://doi.org/10.2139/ssrn.3613562.
Findlay, Mark, and Willow Wong. 2021. Trust and Regulation: An Analysis of Emotion. SSRN Scholarly Paper ID 3857447. Social Science Research Network. https://doi.org/10.2139/ ssrn.3857447. Findlay, Mark, and Willow Wong. 2022. ‘Kampong Ethics’. In Reframing AI Governance: Perspectives from Asia. Digital Futures Lab | Konrad-Adenauer-Stiftung. https://www.ai-in- asia.com/06-kampong-ethics Fourcade, Marion, and Kieran Healy. 2013. Classification Situations: Life-Chances in the Neoliberal Era. Accounting, Organizations and Society 38 (8): 559–572. https://doi. org/10.1016/j.aos.2013.11.002. Freuler, Juan Ortiz. 2019.The Shape of the Internet:ATale of Power & Money. Medium (Blog), October 2. https://juanof.medium.com/the-shape-of-the-internet-a-tale-of-power-money-a08d01065bc0. Grin, John, Jan Rotmans, and Johan Schot. 2010. Transitions to Sustainable Development: New Directions in the Study of Long Term Transformative Change. New York: Routledge. https:// doi.org/10.4324/9780203856598. Gwagwa, Arthur, Deb Raji, and Natalie Kerby. 2021. Episode 3: Data & Automation. Public Books 101, May 31. https://www.publicbooks.org/episode-3-data-automation/. Jordan, Tim. 1999. Cyberpower. In Cyberpower. Routledge. Joyce, Kelly, Laurel Smith-Doerr, Sharla Alegria, Susan Bell, Taylor Cruz, Steve G. Hoffman, Safiya Umoja Noble, and Benjamin Shestakofsky. 2021. Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and Structural Change. Socius 7 (January): 2378023121999581. https://doi.org/10.1177/2378023121999581. Kaack, Lynn H., Priya L. Donti, Emma Strubell, and David Rolnick. 2020. Artificial Intelligence and Climate Change | Opportunities, Considerations, and Policy Levers to Align AI with Climate Change Goals, December 16. Katzenbach, Christian. 2021. “AI Will Fix This” – The Technical, Discursive, and Political Turn to AI in Governing Communication. Big Data & Society 8 (2): 20539517211046184. https://doi. org/10.1177/20539517211046182. Kwet, Michael. 2019. Digital Colonialism: US Empire and the New Imperialism in the Global South. Race & Class 60 (4). https://doi.org/10.1177/0306396818823172. Larsson, Stefan. 2019. The Socio-Legal Relevance of Artificial Intelligence. Droit et Societe 103 (3): 573–593. https://www.cairn-int.info/journal-droit-et-societe-2019-3-page-573.htm?WT. tsrc=cairnPdf. Liu, Wendy. 2020. Coronavirus Has Made Amazon a Public Utility – So We Should Treat It Like One. The Guardian, April 17, sec. Opinion. https://www.theguardian.com/commentisfree/2020/ apr/17/amazon-coronavirus-public-utility-workers. Loo, Jane, Josephine Seah, and Mark Findlay. 2021. The Vulnerability Project: Migrant Workers in Singapore. SSRN Scholarly Paper ID 3770485. Social Science Research Network. https:// doi.org/10.2139/ssrn.3770485. MacKenzie, Donald A., and Judy Wajcman. 1999. The Social Shaping of Technology. Open University Press. Madianou, Mirca. 2019. Technocolonialism: Digital Innovation and Data Practices in the Humanitarian Response to Refugee Crises. Social Media + Society 5 (3): 2056305119863146. https://doi.org/10.1177/2056305119863146. Marshall, Fiona, Patrick Van Zwanenberg, Hallie Eakin, Lakshmi Charli-Joseph, Adrian Ely, Anabel Marin, and J. Mario Siqueiros-García. 2021. Reframing Sustainability Challenges. In Transformative Pathways to Sustainability: Learning Across Disciplines, Cultures and Contexts. London: Routledge. https://doi.org/10.4324/9780429331930. Mohamed, Shakir, Marie-Therese Png, and William Isaac. 2020. 
Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology 33 (4): 659–684. https://doi.org/10.1007/s13347-020-00405-8. Muñoz, Victor Manuel, Elena Tamayo Uribe, and Armando Guio Español. 2021. The Colombian Case: A New Path for Developing Countries Addressing the Risks of Artificial Intelligence.
Global Policy Journal. https://www.globalpolicyjournal.com/articles/science-and-technology/ colombian-case-new-path-developing-countries-addressing-risks. Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press. https://nyupress.org/9781479837243/algorithms-of-oppression. Ong, Li Min. 2021. Digital Borders: The Impact of Artificial Intelligence on Refugees. Medium (Blog), July 22. https://medium.com/@caidg/ digital-borders-the-impact-of-artificial-intelligence-on-refugees-fae0a1353811. Ortutay, Barbara. 2021. People or Profit? Facebook Papers Show Deep Conflict Within. AP NEWS, October 25. https://apnews.com/article/ the-facebook-papers-whistleblower-misinfo-trafficking-64f11ccae637cdfb7a89e049c5095dca. Oxford Insights. 2020. Government AI Readiness Index 2020. https://www.oxfordinsights.com/ government-ai-readiness-index-2020. Palmer, Annie. 2020. How Amazon Managed the Coronavirus Crisis and Came Out Stronger. CNBC, September 29. https://www.cnbc.com/2020/09/29/how-amazon-managed-the- coronavirus-crisis-and-came-out-stronger.html. Pisa, Michael. 2019. Developing Countries Seek Greater Control as Tech Giants Woo the “Next Billion Users”. Center For Global Development, February 5. https://www.cgdev.org/blog/ developing-countries-seek-greater-control-tech-giants-woo-next-billion-users. Pisa, Michael, and John Polcari. 2019. Governing Big Tech’s Pursuit of the “Next Billion Users”. Center For Global Development, February. https://www.cgdev.org/publication/ governing-big-techs-pursuit-next-billion-users. Remolina, Nydia. 2019. Open Banking: Regulatory Challenges for a New Form of Financial Intermediation in a Data-Driven World. SSRN Scholarly Paper ID 3475019. Social Science Research Network. https://doi.org/10.2139/ssrn.3475019. Remolina, Nydia, and Mark Findlay. 2021. The Paths to Digital Self-Determination – A Foundational Theoretical Framework. SSRN Scholarly Paper ID 3831726. Social Science Research Network. https://doi.org/10.2139/ssrn.3831726. Rheingold, Howard. 2000. The Virtual Community: Homesteading on the Electronic Frontier. Revised edition. Cambridge, MA: The MIT Press. Roe, Emery. 1994. Narrative Policy Analysis: Theory and Practice. https://doi. org/10.1215/9780822381891. Ryan, Frances. 2018. The Missing Link: Why Disabled People Can’t Afford to #DeleteFacebook. The Guardian, April 4, sec. Media. https://www.theguardian.com/media/2018/apr/04/ missing-link-why-disabled-people-cant-afford-delete-facebook-social-media. Sahbaz, Ussal. 2019. Artificial Intelligence and the Risk of New Colonialism. Horizons | CIRSD. http://www.cirsd.org/en/horizons/horizons-summer-2019-issue-no-14/ artificial-intelligence-and-the-risk-of-new-colonialism. Sapignoli, Maria. 2021. The Mismeasure of the Human: Big Data and the “AI Turn” in Global Governance. Anthropology Today 37 (1): 4–8. https://doi.org/10.1111/1467-8322.12627. Satariano, Adam, Karl Russell, Troy Griggs, Blacki Migliozzi, and Chang W. Lee. 2019. How the Internet Travels Across Oceans. The New York Times, March 10, sec. Technology. https://www. nytimes.com/interactive/2019/03/10/technology/internet-cables-oceans.html, https://www. nytimes.com/interactive/2019/03/10/technology/internet-cables-oceans.html. Scholz, Trebor. 2016. Platform Cooperativism: Challenging the Corporate Sharing Economy. RLS- NYC, January. https://rosalux.nyc/wp-content/uploads/2020/11/RLS-NYC_platformcoop.pdf. Sharon, Tamar. 2020. Blind-Sided by Privacy? 
Digital Contact Tracing, the Apple/Google API and Big Tech’s Newfound Role as Global Health Policy Makers. Ethics and Information Technology. https://doi.org/10.1007/s10676-020-09547-x. Sloane, Mona, and Emanuel Moss. 2019. AI’s Social Sciences Deficit. Nature Machine Intelligence 1 (8): 330–331. https://doi.org/10.1038/s42256-019-0084-6. Smith, Matthew, and Sujaya Neupane. 2018. Artificial Intelligence and Human Development: Toward a Research Agenda, April. https://idl-bnc-idrc.dspacedirect.org/handle/10625/56949.
STEPS Centre. 2010. Innovation, Sustainability, Development: A New Manifesto. https://steps- centre.org/anewmanifesto/manifesto_2010/. ———. 2017. Innovation and Sustainability – The 3D Agenda. https://steps-centre. org/4-technology/. Stilgoe, J. 2019. Who’s Driving Innovation? New Technologies and the Collaborative State. Cham: Palgrave Macmillan. https://doi.org/10.1007/978-3-030-32320-2. Stirling, Andy. 2015. Emancipating Transformations: From Controlling “The Transition” to Culturing Plural Radical Progress. In The Politics of Green Transformations, 54–67. Routledge. Susskind, Jamie. 2018. Future Politics: Living Together in a World Transformed by Tech. Oxford/ New York: OUP Oxford. Szczepański, Marcin. 2019. Economic Impacts of Artificial Intelligence (AI). European Parliament | European Parliamentary Research Service. https://www.europarl.europa.eu/thinktank/en/ document/EPRS_BRI(2019)637967. Thompson, Debra Elizabeth. 2016. The Schematic State: Race, Transnationalism, and the Politics of the Census. Cambridge: Cambridge University Press. UN Secretary-General. 2020. Question of the Realization of Economic, Social and Cultural Rights in All Countries: UN. https://digitallibrary.un.org/record/3870748. UNDP. 2020. Human Development Report 2020. http://report.hdr.undp.org. Unver, Akin. 2021. Motivations for the Adoption and Use of Authoritarian AI Technology – Issues on the Frontlines of Technology and Politics. Carnegie Endowment for International Peace, October 19. https://carnegieendowment.org/2021/10/19/ motivations-for-adoption-and-use-of-authoritarian-ai-technology-pub-85510. Vinuesa, Ricardo, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature Communications 11 (1): 233. https://doi.org/10.1038/s41467-019-14108-y. Whittaker, Meredith. 2021. The Steep Cost of Capture. Interactions 28 (6): 50–55. https://doi. org/10.1145/3488666. World Bank. 2021. World Development Report 2021: Data for Better Lives. https://www.worldbank.org/en/publication/wdr2021. Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. 1st ed. New York: PublicAffairs.
The Potential of Artificial Intelligence for Achieving Healthy and Sustainable Societies B. Sirmacek, S. Gupta, F. Mallor, H. Azizpour, Y. Ban, H. Eivazi, H. Fang, F. Golzar, I. Leite, G. I. Melsion, K. Smith, F. Fuso Nerini, and R. Vinuesa
Abstract In this chapter we extend earlier work (Vinuesa et al., Nat Commun 11, 2020) on the potential of artificial intelligence (AI) to achieve the 17 Sustainable Development Goals (SDGs) proposed by the United Nations (UN) for the 2030 Agenda. The present contribution focuses on three SDGs related to healthy and sustainable societies, i.e., SDG 3 (on good health), SDG 11 (on sustainable cities), and SDG 13 (on climate action). This chapter extends the previous study within B. Sirmacek Smart Cities, School of Creative Technologies, Saxion University of Applied Sciences, Enschede, The Netherlands S. Gupta Bonn Alliance for Sustainability Research, University of Bonn, Bonn, Germany e-mail: [email protected] F. Mallor · H. Eivazi FLOW, Engineering Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden e-mail: [email protected]; [email protected] H. Azizpour · H. Fang · I. Leite · G. I. Melsion · K. Smith Division of Robotics, Perception, and Learning, KTH Royal Institute of Technology, Stockholm, Sweden e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected] Y. Ban Division of Geoinformatics, KTH Royal Institute of Technology, Stockholm, Sweden e-mail: [email protected] F. Golzar · F. Fuso Nerini Division of Energy Systems, Department of Energy Technology, KTH Royal Institute of Technology, Stockholm, Sweden Climate Action Centre, KTH Royal Institute of Technology, Stockholm, Sweden e-mail: [email protected]; [email protected] R. Vinuesa (*) FLOW, Engineering Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden Climate Action Centre, KTH Royal Institute of Technology, Stockholm, Sweden e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_5
those three goals and goes beyond the 2030 targets. These SDGs are selected because they are closely related to the coronavirus disease 2019 (COVID-19) pandemic and also to crises like climate change, which constitute important challenges to our society.

Keywords AI · SDGs
1 Introduction

In recent years, driven by the increased capacity in acquisition, storage, and processing of data, artificial intelligence (AI) has emerged as a disruptive technology, affecting a broad range of fields. As these capacities are only increasing, AI has cemented its impact on society as a whole, and it is therefore expected to play a crucial role in the achievement of the Sustainable Development Goals (SDGs) proposed by the United Nations (UN) (UN General Assembly (UNGA) 2015). As pointed out by Vinuesa et al. (2020a), AI can enable the achievement of 134 out of the 169 targets accompanying the SDGs. Nonetheless, AI can also act as an inhibitor of 59 of the SDG targets. Therefore, special care should be taken when deploying AI solutions at a large scale, and their impact (positive or negative) on society, the economy, and the environment should be carefully assessed. The ongoing coronavirus disease (COVID-19) pandemic has shown the dangers that a major crisis poses to urban population health. Moreover, it has been a prime example of the use of AI and big data, e.g., through the use of contact-tracing apps, which have evidenced both the positives (effectiveness) and negatives (privacy, ethical issues) derived from the use of AI (Shahroz et al. 2021; Vinuesa et al. 2020b). In this regard, the current climate emergency presents itself as the next major crisis to be faced by our species. In this chapter, we focus our analysis on the impact of AI on the SDGs related to healthy and sustainable societies, i.e., SDG 3 (on good health), SDG 11 (on sustainable cities), and SDG 13 (on climate action). The chapter is structured as follows: firstly, impacts of AI adoption on health are assessed in Sect. 2. Secondly, in Sect. 3 we look at the role of AI in the achievement of sustainable cities. Then, we turn our attention to the possibilities enabled by AI when it comes to climate-action targets in Sect. 4.1. Lastly, general conclusions regarding the effect of AI on achieving healthy and sustainable societies are drawn, and an outlook is presented in Sect. 5.
2 Improved Health Through AI

2.1 Shortage of Healthcare Workforce

One main challenge within the health sector is the shortage of care staff, especially in developing countries. In 2006, the World Health Organization (WHO) estimated a global shortage of 4.3 million health workers, identifying it as a crisis
(W. H. Organization 2006). Later, in 2016, a WHO report projected a shortage of 18 million health workers by 2030 (W. H. Organization 2016a). While a large part of the shortage concerns the lack of nursing staff, the shortage of physicians and tertiary-care staff is also notable, even within developed countries (I. M. Ltd 2020; O. Publishing 2018), and is exacerbated when considering the capacity required for training specialists. Such a shortage falls disproportionately on low- and middle-income (LAMIC) countries globally, and on rural areas within individual countries. Furthermore, a recent Lancet report estimated 5.7 million deaths per year in LAMIC countries due to poor healthcare or lack thereof (Kruk et al. 2018). In some countries, mitigating this shortage of staff would require hundreds of years given the current medical-training infrastructure. The recent report by the UN High Commission on health employment and economic growth (W. H. Organization 2016b) puts forward recommendations to mitigate these issues, one of which is the digital transformation of healthcare services. The AAMS report (I. M. Ltd 2020) points to the promise of artificial intelligence (AI) to address the demand for specialists in various domains. Recent advances in AI techniques, especially deep learning (LeCun et al. 2015), can help alleviate the severity of such a shortage in numerous ways, including (i) prioritizing care to patients under limitations of resources such as care staff, medical equipment, or hospital beds; (ii) estimating the probability of having, or the risk of developing, a medical condition given a patient’s family history or own historical data and examinations; (iii) monitoring patients and suggesting possible follow-ups, treatments, or likely outcomes based on the patient’s condition, its severity, its risk of degradation, and available alternative actions; (iv) more efficient and less costly education and training of additional care staff; and (v) discovering more effective biomarkers and treatments.
One central area is AI-assisted training of healthcare staff (Shorten 2019), which can significantly reduce the cost of education, increase the efficiency of training, and crucially enhance the agility of care-training programs in adapting to emerging needs. The same report (W. H. Organization 2016b) also recommends reducing barriers to education, which AI can facilitate. Finally,
the most notable future applications of AI to help with the shortage of care staff are robotic surgery (Kinross et al. 2020) and the discovery of more efficient and accurate biomarkers (Wang et al. 2019) and treatments (Chen et al. 2018), especially in light of AlphaFold’s recent breakthrough in computational biology (Jumper et al. 2021).
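To make the triaging use case discussed above more concrete, the following is a minimal, illustrative sketch (not a reproduction of any cited system) of a risk model that ranks synthetic patients by predicted probability of deterioration; the features, labels and thresholds are hypothetical.

```python
# Illustrative sketch only: a simple triage model on synthetic, hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features: age, heart rate, oxygen saturation, prior admissions.
X = np.column_stack([
    rng.normal(60, 15, n),   # age (years)
    rng.normal(85, 12, n),   # heart rate (bpm)
    rng.normal(96, 3, n),    # SpO2 (%)
    rng.poisson(1.0, n),     # prior admissions
])
# Synthetic "deterioration" label, loosely driven by the same features.
logit = 0.03 * (X[:, 0] - 60) + 0.04 * (X[:, 1] - 85) - 0.25 * (X[:, 2] - 96) + 0.4 * X[:, 3] - 1.5
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))
# Rank patients so that limited staff and beds can go to the highest-risk cases first.
print("Highest-risk test patients:", np.argsort(-risk)[:10])
```

In practice, such a ranking would be only one input into clinical judgement, and validation, calibration and fairness checks (see Sect. 2.4) would be indispensable.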
2.2 One Health and AI

Pandemics such as COVID-19, Ebola, and cholera have grave consequences for health, economy, and society. Unless we understand comprehensively what causes them, they will emerge again and again. Infectious diseases are often unleashed by microorganisms such as viruses and bacteria with very diverse origins. Changes in land use and surrounding ecosystems bring humans into close proximity with wild species that could transmit unknown pathogens. Thus, one way to prevent epidemics and pandemics is to recognize the interconnection between human, animal, and environmental health, as covered under the One Health domain. Target 3.3 of the 2030 Agenda aims to address concerns related to epidemics and other communicable diseases. However, the real challenge is understanding the dynamics of disease spread and how to better comprehend the vast amount of interdisciplinary data from the interfaces between the health of humans, animals, plants, and the environment, which are fundamental to the One Health approach (Cook et al. 2004; Kim and Cha 2021). AI can support multiple challenges faced by the field of One Health. For instance, antimicrobial resistance (AMR) in relation to infectious diseases was considered one of the three One Health priorities during the tripartite (FAO-OIE-WHO) meeting of 2011 (W. H. Organization 2012). Recently, algorithms helped identify an antibiotic called Halicin from a vast digital collection of pharmaceutical compounds (Stokes et al. 2020). AI is also helping in the management of multidrug resistance by predicting infection risk, identifying the etiology and misuse of antibiotics, and estimating the risk of emergence (Beaudoin et al. 2016; Giacobbe et al. 2020). Researchers are already applying AI capabilities to support clinical decision-making processes, such as in radiology, dermatology, pathology, and ophthalmology, further improving the One Health infrastructure (Garcia-Vidal et al. 2019). AI also supports prognosis-related applications using electronic health record-based clinical decision support (Downing et al. 2019), generating early-warning alerts from AI models. AI models also support predicting deterioration and identifying possible pathogens and antibiotic susceptibility (Alam et al. 2014). At a broader level, AI is helping to link diverse remote-sensing data sources for diverse One Health sub-domains (Chapman et al. 2018; Traore et al. 2017).
2.3 GeoAI for Precision Medicine

AI and data-science techniques support the development of efficient, accurate, and productive knowledge for healthcare and medicine, also known as Health Intelligence (HI) (Shaban-Nejad et al. 2018). AI is aiding multiple aspects of HI, such as syndromic surveillance with social media (Zeng et al. 2021), at-risk population prediction (Rajkomar et al. 2018), enabling mHealth services (Istepanian and Al-Anzi 2018), and medical imaging analysis (Panayides et al. 2020). By integrating multiple sources of health information with spatial context, geospatial artificial intelligence (GeoAI) represents a focused application of AI within health intelligence to extract precise location-relevant information that supports concrete action to improve health and well-being (Hu et al. 2018). GeoAI is helping to integrate mobile health (mHealth) information into precision medicine by consolidating information on exposures to environmental factors such as air, noise, luminescence, etc., with location, improving the spatio-temporal information available for precision medicine (Johnston et al. 2018). GeoAI further supports precision medicine through geomedicine, a sub-domain that deals with individuals’ location history for disease diagnosis and treatment (Boulos and Le Blond 2016). GeoAI capabilities help clinicians assess a patient’s health in light of ambient exposures to environmental risk factors where they have lived, worked, and traveled, enabling tailored prevention and treatment strategies. However, methodological challenges concerning the limited availability of labeled training datasets, scarce standards and protocols for integrating diverse data sources, and data-privacy concerns need to be recognized for sustainable development.
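As a minimal sketch of the exposure-linking idea described above, the snippet below joins hypothetical patient location histories with a gridded air-quality field to build a per-patient exposure feature; the grid, values and column names are all assumptions for illustration, not a cited GeoAI system.

```python
# Illustrative sketch: attach an ambient-exposure feature (e.g. PM2.5) to patient
# location histories. All data and column names here are hypothetical.
import numpy as np
import pandas as pd

# Hypothetical location history: one row per patient per residence period.
visits = pd.DataFrame({
    "patient_id": [1, 1, 2, 2],
    "lat": [59.33, 59.40, 59.30, 59.35],
    "lon": [18.06, 18.00, 18.10, 18.05],
    "months": [8, 4, 10, 2],            # months spent at that location
})

# Hypothetical exposure grid (e.g. satellite- or model-derived PM2.5, ug/m3).
lats = np.linspace(59.25, 59.40, 4)
lons = np.linspace(17.95, 18.15, 5)
grid = pd.DataFrame({"lat": np.repeat(lats, len(lons)), "lon": np.tile(lons, len(lats))})
grid["pm25"] = 8 + 4 * np.random.default_rng(1).random(len(grid))

def nearest_pm25(lat, lon):
    """Look up the exposure value of the nearest grid cell (simple brute force)."""
    d2 = (grid["lat"] - lat) ** 2 + (grid["lon"] - lon) ** 2
    return grid.loc[d2.idxmin(), "pm25"]

visits["pm25"] = [nearest_pm25(r.lat, r.lon) for r in visits.itertuples()]
# Time-weighted mean exposure per patient: a candidate feature for a risk model.
exposure = (visits.assign(w=visits["pm25"] * visits["months"])
            .groupby("patient_id")
            .agg(total_w=("w", "sum"), total_m=("months", "sum")))
exposure["mean_pm25"] = exposure["total_w"] / exposure["total_m"]
print(exposure["mean_pm25"])
```

Real GeoAI pipelines would of course use validated exposure products, proper geodesic distances and strict privacy safeguards, which is precisely where the methodological challenges noted above arise.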
2.4 Ethical and Societal Considerations

Previous studies have placed SDG 3 (on good health and well-being) in a unique position, where AI can significantly contribute towards its achievement (Gupta et al. 2021; Palomares et al. 2021). Vinuesa et al. (2020a) found SDG 3 to be the goal where AI could have the least inhibitory effect while showing great potential to bring several of its targets forward. However, the socio-ethical context of how and where AI technology is used in healthcare systems could result in an increase of inequalities between different population groups and nations, hence hindering its capability to act as an enabler of other SDGs, e.g., SDG 10 on reducing inequalities, and/or progressing at a lower rate among population groups with, e.g., lower AI literacy or ability to access the technology itself (Fenech and Buston 2020; Wakunuma et al. 2020). Fenech and Buston (2020) investigated the perceptions of healthcare professionals, technologists, ethicists, and patients about the challenges of introducing AI into healthcare systems, and they found that ethical, social, and political questions were raised across various aspects: from the change within the relationships between patients and healthcare professionals and their acceptance of
AI in a health setting, to the implications of collaboration between public and private sectors and its regulation, while also covering concerns about responsible data handling, transparency, and impacts on existing health inequalities. In line with the shift in the patient-clinician relationship, there is concern regarding the responsibility that healthcare workers will hold for educating patients about the complexities of AI and its possible shortcomings, or even about the cases in which it would be required to notify that AI is being used at all (Gerke et al. 2020). This could also impose an additional burden on health professionals, who may be required to undergo specialized training in the latest advances in the field (currently a relevant matter of global discussion (E. Commission 2018; W.H.O 2019)), which might create a rebound effect that increases their workload – an effect likely to be more evident in developing countries due to the current digital divide (Zayyad and Toycan 2018). In this sense, understanding the healthcare workforce’s perceptions of AI is crucial for the successful implementation and deployment of new systems (Shinners et al. 2021) and for increasing their trust in AI, since hesitancy may otherwise limit their predisposition to use the technology; only then can the right tools foster a partnership between clinicians and AI (Verghese et al. 2018). A major concern regarding the application of AI in healthcare systems is the collection, handling, and use of patient data, because these systems require large amounts of personal health information to produce accurate results. In their review of the ethics literature on AI applications for good health, Murphy et al. (2021) found that three out of four common themes discussed by researchers revolved around the impacts of collecting and using data – namely, privacy and security, trust in AI applications, and adverse consequences of bias. People may be reluctant to share health information if a secure and reliable process is not in place to ensure privacy and ethical use (Vinuesa et al. 2020b). For instance, the risk of data being hacked is one reason why patients may decide not to move their health information to a digital, cloud-based format, as is potential misinformation about the ways in which, and the applications for which, their data will be used (Luxton 2014; Murphy et al. 2021). The possibility that the same data collected for healthcare systems could be valuable for unrelated applications of different corporations or governments is an important threat to users’ trust in these systems, such as psychological data potentially being used to assess prisoners’ recidivism or by insurance companies to rate their investment risk (Luxton 2014). A strong legal framework is therefore necessary to ensure transparency about the boundaries within which personal medical records can be used. An important example is how to handle the proportionality of the data sharing required to advance the development of AI systems (Petersen et al. 2019), e.g., how much data different stakeholders are able to hold and for how long, as in the case of DeepMind being given indefinite access to 1.6 million UK patients’ medical records to improve an application for managing acute kidney injuries (Powles and Hodson 2017). In terms of the problem of bias, different aspects may play a role across the entire life cycle and development of AI systems.
The lack of diversity in gender, ethnicity, and socio-economic background among the people developing AI solutions is an important factor in addressing bias in AI research and
development (West et al. 2019), together with careful assessment of the data used to train the systems. There have been several examples of AI algorithms that produce biased results because certain groups of the population are under-represented in the datasets (Buolamwini and Gebru 2018; Obermeyer et al. 2019; O’neil 2016; Zhao et al. 2017; Zou and Schiebinger 2018), which can increase mistrust in these systems. Moreover, the interpretation given to the dataset in use may also cause inadvertently biased predictions when the proxies that drive an algorithmic decision are unfair towards a certain group. This has been the case for an AI system used in the USA to determine which patients will require complex and intensive future healthcare (Obermeyer et al. 2019). A black patient with the same risk score as a white one would be less likely to be enrolled into the program because there exists a historic difference in the cost of healthcare between ethnicities, and cost is the variable used for the prediction – cost is unbiased from an underlying data point of view, yet an imperfect proxy that fails to take into account the important social perspective, causing biased predictions (Obermeyer et al. 2019). In this sense, there is an ongoing debate about the accountability and liability related to the recommendations and decisions made by AI systems (Luxton 2014; Murphy et al. 2021; Vinuesa and Sirmacek 2021), and how to determine who should be held responsible in the case of bad consequences of their outcomes – a topic discussed in computer ethics research for decades (Dennett 1997).
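A minimal sketch of the kind of check that can surface such proxy bias is shown below: it compares selection rates and miss rates of a model's decisions across a hypothetical group attribute. The data, proxy score and threshold are invented for illustration and do not reproduce the cited study.

```python
# Illustrative sketch: a simple group-wise audit of model decisions on synthetic data.
# The group labels, proxy score and threshold are all hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n)       # hypothetical demographic attribute
need = rng.binomial(1, 0.2, size=n)          # true (unobserved) healthcare need
# Proxy score (e.g. historical cost) that systematically under-scores group B.
score = need * rng.normal(1.0, 0.3, n) - 0.25 * (group == "B") + rng.normal(0.0, 0.3, n)
selected = score > 0.5                       # patients enrolled in the program

audit = pd.DataFrame({"group": group, "need": need, "selected": selected})
for g in ["A", "B"]:
    members = audit[audit["group"] == g]
    needy = members[members["need"] == 1]
    print(g,
          "selection rate:", round(members["selected"].mean(), 3),
          "missed among truly needy:", round(1 - needy["selected"].mean(), 3))
```

Even this crude audit shows group B being selected less often and missed more often despite identical underlying need, which is the pattern the cost-proxy example above describes.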
3 Towards Sustainable Cities with the Help of AI

Rapid urbanization poses significant social and environmental challenges, including sprawling informal settlements, increased pollution, urban heat islands, and loss of biodiversity and ecosystem services, as well as making cities more vulnerable to disasters. Therefore, timely and accurate information on urban areas and their changing patterns is of crucial importance to support the planning of sustainable and climate-resilient cities and communities. With its synoptic view and large-area coverage at regular revisits, satellite remote sensing has been playing a crucial role in mapping the spatial patterns of urban areas and monitoring their temporal change trajectories. Earth observation (EO) satellites are now acquiring massive amounts of imagery with higher spatial resolution and frequent temporal coverage. These EO big data represent a great opportunity to develop innovative methodologies for urban mapping and continuous change detection. The main challenge used to be the lack of robust and automated processing methods to extract valuable information from the huge volume of EO data. Advanced mathematical methods, together with the development of AI architectures and processing platforms, now allow rapid extraction of reliable information from such big data. In the following subsections we discuss the most frequently used AI architectures and applications that can bring solutions for the sustainable development of cities aiming towards climate adaptation.
3.1 AI for Extracting Climate-Related Indicators from Cities

Urban areas create heat islands which place very high ecological stress on the environment. This stress affects not only the urban areas themselves but also the surrounding areas, even when these are not occupied by human activities. This phenomenon is known as the urban heat island (UHI) effect (Manoli et al. 2019). For accurate identification of heat-island sources and of environmental changes within urban areas, many IT-infrastructured (also known as “smart”) cities have been investing effort and resources to collect data that can help to understand the stress factors. Thus, in many smart cities, citizens, government institutions, industry, and scientists share data for the benefit of all (a relation also known as the “Quintuple Helix model” (Carayannis et al. 2012)). This leads to a great amount of data collection and to the need for machine-learning (ML) or artificial-intelligence (AI) models which can be used to understand climate impacts and develop preparedness aligned with the SDGs. AI can enable various applications to support cities. Figure 1 shows the AI-based applications which may have the most potential to provide immediate benefit for climate-related observation and preparedness. There might be even more applications which can be achieved with AI algorithms; however, herein we keep our focus on mapping, predictive modeling, generative modeling, and explainability applications. In the following subsections, we will discuss each of these application areas in detail.

Fig. 1 The most frequently needed application areas of AI for observing climate adaptation of urban areas

3.1.1 Mapping

For observing the climate adaptation of large areas in a sustainable manner, the most frequently used data come from satellite imaging. Satellite remote sensing allows us to collect data and information about the earth surface, oceans, and the atmosphere at several spatio-temporal scales in a timely, regular, and accurate manner (Yang et al. 2013). Satellite data help us understand the climate system in general and might help to identify ways to adapt urban regions to the drastic impacts of climate change. Various organizations like NASA, NOAA, ESA, and JAXA use satellite data to monitor greenhouse gas concentrations in the atmosphere, weather patterns,
vegetation health, melting of glaciers and polar ice, bleaching of coral reefs, ocean acidification, changes in wildlife migratory patterns, and many other environmental indicators. When it comes to urban areas, such maps are useful to identify changes in urban structures, vegetation, agriculture, air quality, surface temperature, and so on. Besides satellite imagery, it is also possible to collect data about urban regions using airborne sensors and other in-situ Internet-of-things (IoT) sensors. Higher-resolution data obtained from such sources might further enrich the information given in the maps. In Fig. 2, we provide some of the valuable information which can be extracted and visualized in urban maps to observe their climate adaptation. AI algorithms can help with the following areas (a minimal illustrative code sketch follows Fig. 2 below):
• Automatic identification and mapping of trees (Pibre et al. 2017)
• Early recognition of forest fires (Zhang et al. 2021b)
• Measuring the earth surface temperature and predicting urban heat island impacts (Khalil et al. 2021)
• Detecting roads and traffic density (Boukerche et al. 2020)
• Creating 3D building models (Wichmann et al. 2018)
• Understanding agricultural health for food security (Lakshmi and Corbett 2020)
• Understanding soil health and properties (Motia and Reddy 2021)
• Observing water quality (Theyazn et al. 2020)
• Observing air pollution (Ayturan et al. 2018)
• Predicting and mapping air flow (Guemes et al. 2021)
• Analyzing and merging IoT data (Allam and Dhunny 2019)
Fig. 2 Additional application areas where AI can help for developing climate-adaptation applications for urban areas (mapping, predictive models, generative models, and explainability)
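As a concrete illustration of the vegetation- and agriculture-related mapping items above, a simple index such as the normalized difference vegetation index (NDVI) can be computed directly from the red and near-infrared bands of a satellite image. The following is a minimal sketch in Python; the random arrays, image size, and threshold are illustrative assumptions, not values from any of the cited studies.

```python
# Minimal sketch: NDVI-based vegetation mapping from satellite bands.
# The random arrays stand in for red and near-infrared reflectance rasters.
import numpy as np

red = np.random.rand(512, 512).astype("float32")   # red-band reflectance in [0, 1]
nir = np.random.rand(512, 512).astype("float32")   # near-infrared reflectance in [0, 1]

ndvi = (nir - red) / (nir + red + 1e-8)            # NDVI in [-1, 1]; higher means denser vegetation
vegetation_mask = ndvi > 0.4                       # simple, assumed threshold for a vegetation map
print(f"Vegetated fraction of the scene: {vegetation_mask.mean():.2%}")
```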
There are, of course, more application areas where AI algorithms help create maps that are useful for understanding sustainable-development needs and for creating further action plans. However, it is not feasible to address all of them here; we have therefore focused on the most common application areas.

3.1.2 Predictive Models

One of the most impactful features of AI is its capability to help with building effective predictive-modeling algorithms. AI models allow fitting predictive models to data which have high numbers of degrees of freedom and exhibit non-linearities (Heaviside et al. 2016). Long short-term memory (LSTM) networks, for instance (a recurrent neural-network architecture used in the field of predictive modeling), are able to store information over a period of time. In other words, LSTM networks have a memory capacity for both long- and short-term periods of data. This characteristic is extremely useful when we deal with time-series data. LSTM models can decide which time-series information to remember and which information to discard while creating the predictive model and making future predictions. Thus, such AI models are more robust than earlier mathematical models (Hochreiter and Schmidhuber 1997). Scientists have found opportunities to use such advanced AI models for observing the climate adaptation of urban areas. Advancements in AI have therefore allowed prediction of future heat-island impacts (Khalil et al. 2021), water security (Vulova et al. 2021), and further climate-adaptation goals for the future.

3.1.3 Generative Models

AI might yield further applications for SDG 11 through its generative capabilities. To this end, a special AI architecture called the generative adversarial network (GAN) learns deep representations without extensive annotated training data. GANs achieve this by deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution, and classification (Creswell et al. 2018; Jabbar et al. 2021). In the context of climate adaptation of urban areas, generative models are frequently used for data augmentation, which helps to create labels for other deep-learning applications (Howe et al. 2019). They have also been found useful for creating accurate pixel-level semantic segmentations from relatively few labeled examples (Collier et al. 2018) and for creating super-resolution images from coarse satellite-based observations (Wang et al. 2020b).
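To make the adversarial training described above more concrete, the following is a minimal GAN sketch on a toy one-dimensional dataset, assuming TensorFlow/Keras; the network sizes, learning rates, and synthetic "real" data are illustrative assumptions and not a reproduction of the cited works. The generator and discriminator are trained alternately, with the backpropagation signal for the generator coming from the discriminator's judgment of the generated samples.

```python
# Minimal GAN sketch on a toy 1-D dataset: a generator and a discriminator are
# trained in competition. All sizes, rates, and the "real" data are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow import keras

latent_dim, data_dim, batch = 8, 16, 64

generator = keras.Sequential([
    keras.layers.Input(shape=(latent_dim,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(data_dim),
])
discriminator = keras.Sequential([
    keras.layers.Input(shape=(data_dim,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
bce = keras.losses.BinaryCrossentropy()
g_opt, d_opt = keras.optimizers.Adam(1e-3), keras.optimizers.Adam(1e-3)

real_data = (np.random.randn(1000, data_dim) + 3.0).astype("float32")  # toy "real" samples

for step in range(200):
    real = real_data[np.random.randint(0, len(real_data), batch)]
    fake = generator(np.random.randn(batch, latent_dim).astype("float32"))
    # Discriminator step: separate real samples from generated ones.
    with tf.GradientTape() as tape:
        d_loss = bce(tf.ones((batch, 1)), discriminator(real)) + \
                 bce(tf.zeros((batch, 1)), discriminator(fake))
    d_opt.apply_gradients(zip(tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    # Generator step: the backpropagation signal comes from trying to fool the discriminator.
    noise = np.random.randn(batch, latent_dim).astype("float32")
    with tf.GradientTape() as tape:
        g_loss = bce(tf.ones((batch, 1)), discriminator(generator(noise)))
    g_opt.apply_gradients(zip(tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
```

In practice, convolutional generators and discriminators operating on image patches (as in the super-resolution and segmentation studies cited above) follow the same alternating scheme.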
3.1.4 Explainability

It is possible to use explainable-AI (XAI) and interpretable-AI methods (also known as "interpretability" methods) to make AI models and their results truly interpretable for a general audience and to provide further insights into the action goals of policy makers (Vinuesa et al. 2020a). As an example of using interpretability methods in the context of the SDGs, in an early study Vinuesa and Sirmacek (2021) illustrated that such methods could be used for tracking poverty in urban areas using satellite images and convolutional neural networks (CNNs), as developed by Jean et al. (2016). That work essentially identifies features such as night-light intensity, roofing material, and distance to urban areas to predict the average economic consumption per capita per day. Vinuesa and Sirmacek (2021) showed that adding interpretability to this model would help to understand the influence of each parameter on the outcome, yielding a more robust and useful tool to track poverty and coordinate actions. In fact, the symbolic representation may help to understand which of these factors should be supported or suppressed to shift the poverty situation of a region to a better level.
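As a simple illustration of how the influence of individual input features can be quantified, the sketch below computes permutation feature importance for a regression model trained on synthetic stand-ins for the features mentioned above (night-light intensity, roofing material, distance to urban areas), assuming scikit-learn. This is not the interpretability method used in the cited studies, only a minimal example of the general idea.

```python
# Minimal sketch: permutation feature importance as a basic interpretability tool.
# Synthetic stand-ins for night-light intensity, roofing material, and distance to
# the nearest urban area; the target mimics a consumption-per-capita value.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.random((500, 3))                                   # columns: the three features above
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.standard_normal(500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(["night_light", "roof_material", "distance_to_urban"],
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")
```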
3.2 Non-intrusive Evaluation of Air Quality in Urban Areas Through AI

When it comes to SDG 11 (on sustainable cities), we focus on some relevant applications, including those aimed at extracting urban-development and environment-biodiversity indicators using fully automated AI methods applied to remote-sensing and other IoT data collected from smart cities. These indicators provide opportunities to (i) effectively monitor SDG 11 indicator 11.3.1 on land-use efficiency; (ii) observe the alignment of smart cities with other SDGs; (iii) enable early anomaly detection when the indicators appear to be outliers; (iv) better understand which urban-development and environmental indicators provide the best models for observing smart cities; (v) create realistic scenarios to determine which urban-development indicators have a positive impact on the alignment of a smart city with the SDGs; and (vi) create disaster scenarios so as to be prepared for cases in which unexpected indicator values are observed.

Another area where AI has great potential for SDG 11 is the development of robust non-intrusive-sensing methods to determine more accurately the pollution levels and the regions of extreme temperature in urban areas. It is important to note that around 90% of the population in the European Union (EU) was subjected to pollution levels exceeding those recommended by the World Health Organization (WHO) between 2014 and 2016, based on data from the European Environment Agency (EEA). It is estimated that these pollution levels produce around 800,000 premature deaths per year in the EU (Lelieveld et al. 2019). When it comes to extreme temperatures, the UHI phenomenon (Manoli et al. 2019) mentioned above
was connected with around 70,000 deaths in Europe during the summer of 2003 (Heaviside et al. 2016). The great potential of AI in this context is further supported by the fact that currently available approaches are not accurate enough (Carpentieri 2013), and by the fact that the EU is introducing the use of predictive models for pollutant-concentration measurements (EC Air Quality Framework Directive 1996).

Through flow prediction it is possible to infer, based on limited information, the temporal and spatial dynamics of the complete flow field (or certain relevant subsets of the field), up to a certain level of accuracy. One approach to performing the prediction is to first decompose the flow into spatial basis functions, such that only their temporal dynamics needs to be predicted. This can be accomplished by means of a well-known procedure, the proper orthogonal decomposition (POD), which was introduced by Lumley in the context of turbulent flows (Lumley 1967). This methodology essentially decomposes the spatio-temporal velocity signal into spatial modes and temporal coefficients. Certain studies have considered variations of this technique, for instance the extended proper orthogonal decomposition (EPOD) (Boree 2003), to perform predictions of the flow based on sparse pressure measurements (Hosseini et al. 2015). In the EPOD framework, so-called extended velocity modes can be defined by combining information from the measured pressure and velocity signals, thereby allowing the velocity field to be predicted from pressure readings. Certain properties of the EPOD were employed by Hosseini et al. (2016) to predict the wake of a wall-mounted obstacle (representing a single simplified building) from pressure readings on its leeward side. Note that if all the possible extended modes are used for the reconstruction, the EPOD method is equivalent to a linear stochastic estimation (LSE) of the predicted quantity (Boree 2003). It is, however, important to note that the EPOD framework essentially considers a linear relationship between the measured and predicted quantities, which is insufficient to obtain accurate predictions given the complexity of the turbulent flow in urban environments. This was evaluated by Mokhasi et al. (2009), who reached the conclusion that significantly better predictions of the temporal dynamics can be obtained in such cases by using non-linear prediction methods.
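The following is a minimal sketch of snapshot POD computed via the singular value decomposition, assuming NumPy; the snapshot matrix here is random and merely stands in for real velocity measurements. Each column of the snapshot matrix is one time instant, and the decomposition yields spatial modes and temporal coefficients as described above.

```python
# Minimal sketch of snapshot POD via the singular value decomposition.
# Rows of the snapshot matrix are spatial points, columns are time instants;
# the random field below stands in for measured velocity snapshots.
import numpy as np

n_points, n_snapshots = 2000, 200
snapshots = np.random.randn(n_points, n_snapshots)

mean_flow = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_flow                 # POD acts on the fluctuating part

phi, s, vt = np.linalg.svd(fluctuations, full_matrices=False)
temporal_coeffs = np.diag(s) @ vt                    # phi: spatial modes; these rows: time coefficients

r = 10                                               # keep the r most energetic modes
reconstruction = mean_flow + phi[:, :r] @ temporal_coeffs[:r, :]
energy_captured = (s[:r] ** 2).sum() / (s ** 2).sum()
print(f"Energy captured by {r} modes: {energy_captured:.1%}")
```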
So far we have discussed one approach to flow reconstruction, mainly relying on first performing a flow decomposition into spatial basis functions and then predicting the temporal evolution of the mode amplitudes via linear or non-linear methods. In some cases it is convenient to directly reconstruct the temporal evolution of the flow field (without a previous decomposition step) in certain regions of the domain: for instance, if we are interested in obtaining an accurate evolution of the flow on a certain horizontal or vertical plane. There have been some attempts to accomplish this type of reconstruction in the literature using linear methods. For instance, Illingworth et al. (2018) employed a linear dynamical-system approach based on the resolvent-analysis framework (McKeon and Sharma 2010) to predict the velocity field on a horizontal plane based on the velocity field from another horizontal plane, both of them being in the logarithmic region of a turbulent channel flow. On the other hand, other recent studies (Encinar and Jiménez 2019; Suzuki and Hasegawa 2017) employed LSE to predict different horizontal planes of the flow in a turbulent channel based on wall measurements such as the pressure and the two components of the wall-shear stress. Sasaki et al. (2019) recently assessed flow-reconstruction methods based on single- and multiple-input linear transfer functions, which can then be used as convolution kernels to predict the fluctuations in a spatially developing turbulent boundary layer. In particular, they performed predictions of the near-wall flow based on horizontal velocity fields in the outer region, and they also reconstructed the flow based on wall measurements. Note that the linear methods are able to provide only modest predictions close to the plane used as an input, and the accuracy of the reconstruction rapidly degrades farther away. This is because turbulent flows exhibit both linear (superposition) and non-linear (modulation) scale-interaction phenomena (Dogan et al. 2019); therefore, linear methods only provide an incomplete prediction. In fact, Sasaki et al. (2019) also documented significant improvements in the predictions when using non-linear transfer functions to relate the input and the output.

Recent work by Guastoni et al. (2021) reports a flow-reconstruction analysis in a turbulent open channel, where they predicted the turbulent fluctuations on different horizontal planes using the spatial distribution of the two wall-shear-stress components and the wall pressure. To this end, they employed a particular type of neural network, namely, the convolutional neural network (CNN) (LeCun et al. 1998), which is widely used in computer vision. To summarize their results, close to the wall they were able to predict the streamwise fluctuation peak with less than 1% error, and farther away from the wall they obtained good results using a combination of a CNN and POD (Guastoni et al. 2021). Despite the fact that this study was conducted in the context of turbulent channels, more complex geometries such as simplified urban environments (Stuck et al. 2021; Vinuesa et al. 2015) can also be considered, including other quantities such as temperature and pollutant concentration. In fact, Güemes et al. (2021) documented the potential of using GANs (discussed above) for predictions where only a few sparse measurements are available, and several additional studies have reported the possibility of using long short-term memory (LSTM) networks for temporal predictions in turbulence (Eivazi et al. 2021; Srinivasan et al. 2019). Consequently, deep neural networks are an excellent choice for predicting horizontal (or vertical) sections of the flow field (as well as temperature and pollutant concentration) using wall data, thereby significantly improving currently available prediction techniques in urban flows. A schematic representation of the process is shown in Fig. 3, and we argue that AI can certainly contribute towards the achievement of higher air quality in urban environments via sparse measurements. Also, to address the gap caused by the sparsely distributed air-quality monitoring networks at the city level, Gupta et al. (2018a, b) proposed a simulated-annealing-based optimization method to capture data with higher precision at the city level, with the opportunity to enable more inclusive air-quality data collection and encourage citizen participation.
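As a hedged sketch of the type of wall-to-flow mapping described above, the example below defines a small fully convolutional network that maps the two wall-shear-stress components and the wall pressure to a velocity plane, assuming TensorFlow/Keras. The grid size, architecture, and random training data are illustrative assumptions and do not reproduce the cited models.

```python
# Minimal sketch of a CNN mapping wall quantities (two wall-shear-stress components
# and wall pressure) to a velocity plane. Grid size, depth, and the random data are
# illustrative assumptions, not the architecture of the cited studies.
import numpy as np
from tensorflow import keras

nx, ny = 64, 64
wall_inputs = np.random.randn(256, nx, ny, 3).astype("float32")     # tau_x, tau_z, p_wall
velocity_plane = np.random.randn(256, nx, ny, 1).astype("float32")  # target fluctuation field

model = keras.Sequential([
    keras.layers.Input(shape=(nx, ny, 3)),
    keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    keras.layers.Conv2D(1, 3, padding="same"),   # predicted velocity fluctuations
])
model.compile(optimizer="adam", loss="mse")
model.fit(wall_inputs, velocity_plane, epochs=2, batch_size=16, verbose=0)
```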
Fig. 3 Schematic representation illustrating (top) AI application to non-intrusive prediction of flow and pollutants in an urban environment and (bottom) application of GANs to super-resolution and prediction from sparse measurements in a turbulent flow. (Bottom panel reprinted from Güemes et al. (2021), with permission of the publisher (AIP Publishing))
3.3 The Role of AI in Efficient and Sustainable Urban Mobility

The urgent need to improve transportation within the urban environment is directly addressed in target 11.2 on affordable and sustainable transport systems. Moreover, increased transportation efficiency, from both a sustainability and a connectivity perspective, would enable the achievement of targets 11.6 (reduced environmental impact of cities) and 11.a (implementation of urban and regional planning and increased inter-urban population integration). Transportation accessibility and efficiency have long been identified in the literature as critical factors for social cohesion and inclusion (Cass et al. 2005; Pooley 2016; Social Exclusion Unit 2003), and the implementation of an integrated and inter-modal transportation system in urban and metropolitan areas is a key factor towards achieving a livable city (Lowe 1990; Vuchic 2017). Furthermore, due to its current major dependency on internal combustion engines, transportation plays a big role in ambient air pollution, making it a critical area for any SDG-related intervention in the urban environment (European Environment Agency 2021). After a comprehensive literature review, we have identified three main areas in which AI can act as an enabling agent: guidance of urban transportation policy, urban-mobility planning and modeling, and (connected) autonomous-vehicle development.
Fig. 4 Applications of AI that can help to achieve SDG 13
4 AI for Ambitious Climate-Action Targets

In this section we expand the evidence base and the analysis of the role of AI in achieving SDG 13 on climate action, as well as other broader objectives related to the climate crisis, including but not limited to the achievement of the Paris Agreement. A summary of the areas where AI can help to achieve SDG 13 is provided in Fig. 4.
4.1 The Potential Role of Artificial Intelligence to Combat Climate Change

Climate change could undermine the achievement of at least 72 targets across the SDGs, including outcomes for healthy and sustainable societies (Nerini et al. 2019; Romm 2018). Storms, droughts, fires, and flooding have become more frequent and
stronger (Field et al. 2012). Global ecosystems are becoming unstable, including the agricultural systems and natural resources on which humanity depends. The intergovernmental report on climate change of 2018 warned that the world would encounter catastrophic consequences unless global greenhouse gas emissions are eliminated within 30 years (Allen et al. 2018). Yet, year after year, these emissions rise. Addressing climate change includes mitigation (reducing emissions) and adaptation (preparing for unavoidable consequences), and both have multifaceted aspects. Mitigation of greenhouse gas (GHG) emissions requires improvements in electricity systems, transportation, buildings, industry, and land use. Adaptation requires planning for resilience and disaster management, based on an understanding of climate and extreme events. Artificial intelligence (AI) has the potential to enhance global efforts both to mitigate GHG emissions and to support the required adaptation planning (Gupta et al. 2021; Vinuesa et al. 2020a). There is evidence that AI advances will support the understanding of climate change and the modeling of its possible impacts. AI is helpful in dealing with several climate-change mitigation measures. For example, AI can help to capture patterns in, and process, temperature-change data and carbon emissions (Barnes et al. 2019; Wu et al. 2018), predict extreme weather events caused by climate change (Feng et al. 2019), recognize the effects of climate on health (Berrang-Ford et al. 2021), understand energy needs and manage energy consumption (Aslam et al. 2020; Kim and Cho 2019), monitor the impacts of climate change on biodiversity (Dujon and Schofield 2019; Kulkarni and Di Minin 2021), transform the transportation system to decrease carbon emissions and make it more efficient in energy management and routing (Alsrehin et al. 2019; Hu et al. 2019; Milojevic-Dupont and Creutzig 2021), monitor the impact on the ocean (Lou et al. 2021), predict impacts to enable precision agriculture (Sharma et al. 2020), support smart recycling (Rutqvist et al. 2019), assist carbon capture and geo-engineering (Menad et al. 2019), and create awareness about climate impacts (George et al. 2021). AI will also support low-carbon energy systems with high integration of renewable energy and energy efficiency, which are all needed to address climate change.
4.2 AI in Support of Understanding Climate Change

An extensive range of social areas are challenged by climate change, a fact which demands remarkable adaptation to tackle future changes in weather patterns. AI has advanced dramatically, driving progress in various research sectors, and it has also been proposed as an aid to climate analysis (Reichstein et al. 2019; Schneider et al. 2017). AI can be integrated with Earth System Models (ESMs) to discover climate connections and thereby support improved warnings of approaching weather features, such as extreme weather events. While ESM development is of principal importance, a parallel emphasis on applying AI to better understand existing models and simulations has been suggested (Huntingford et al. 2019). AI advances will support the understanding of climate change and the modeling of its possible impacts, therefore supporting adaptive capacity to climate change (Tripathi et al. 2006). AI techniques
are used to investigate tremendous amounts of unstructured and heterogeneous data and to uncover and extract complex relations among the data without requiring an explicit analytical model of those relations (Dewitte et al. 2021; Herweijer and Waughray 2018), thereby supporting the understanding of climate anomalies (Yang et al. 2019). AI is advancing the way we understand the impact of climate change on biomass (Wu et al. 2019), hydrology (Goyal et al. 2014), and extreme events such as droughts (Yang et al. 2016), and it supports the adoption of mitigation measures (Buckland et al. 2019; Ghiggi et al. 2019). The application of AI techniques to extract meaningful patterns and datasets from the rapidly increasing data deluge, with the aim of coping with the challenges related to weather forecasting, climate monitoring (Ghiggi et al. 2019), and decadal prediction, is inevitable (Dewitte et al. 2021). Many AI techniques help to identify inter-seasonal connections, linking potential climate-induced risks and aiding adaptation planning (e.g., timely crop sowing) and the mitigation of natural disasters. Recent advancements such as drones and the IoT, supported by AI, improve the efficiency of existing systems by offering possibilities to extend mission coverage with refined spatial and temporal resolutions. Recent contributions to the domain include hybrid frameworks powered by deep-learning techniques that classify images to aid responses to natural disasters such as avalanches, cyclones, and fires (Hernández et al. 2021; Nijhawan et al. 2019). AI also plays a vital role in a wide range of responses to the climate crisis, mainly focused on mitigating existing emissions. Recent studies highlight the relevance of using AI in decreasing environmental emissions produced by industries and urban spaces (Jasim et al. 2020; Kharat and Devi 2021) and in fostering a circular-economy vision (Bag et al. 2021; Wilson et al. 2021).
4.3 AI in Support of Low-Carbon Energy Systems

The electricity system, i.e., the obtainment of fuels and raw materials for the electricity grid, the generation and storage of electricity, and the transmission of electricity to end-use consumers, is responsible for around a quarter of human-caused greenhouse gas emissions each year (Change et al. 2014). Furthermore, since other energy-intensive sectors such as buildings and transportation seek to replace GHG-emitting fuels, demand for low-carbon energy systems will grow. AI will contribute to a rapid transition to low-carbon energy sources (such as solar, wind, hydro, and nuclear) and to decreasing the share of carbon-intensive sources (such as natural gas, coal, and other fossil fuels). Renewable energy resources are emerging as sustainable alternatives to fossil fuels: they are much safer and cleaner than conventional fossil sources. With remarkable advancements in technology, the renewable-energy sector has made outstanding progress in the last decade (Brockway et al. 2019). However, there is still a wide variety of challenges associated with renewable energies that can be addressed with the help of innovative techniques. AI can analyze the past, optimize the present, predict the future, and digitalize the energy sector. The
unpredictability of the available resource is one of the most significant challenges of producing renewable energy (Haupt et al. 2020). The electric grid is evolving rapidly as variable renewable energy sources are integrated (Selleneit et al. 2020). Due to the inherent intermittence of renewable energy sources, the current grid encounters many challenges in combining the diversity of renewable energy (Haupt et al. 2020). The utility industry requires intelligent systems to improve the integration of renewable energies with the existing grid and to let renewable energies play an equal role in the energy supply. The energy grid collects a large amount of data through interconnection with devices and sensors. AI techniques could (i) systematically analyze the vast amount of data generated in plants; (ii) translate the complex data into visualizations and insights that everyone can take advantage of; (iii) discover, interpret, and communicate meaningful patterns in data; (iv) diagnose and understand the reasons behind past patterns in the data; (v) predict what is most likely to happen in the future; (vi) apply data patterns towards effective decision-making; and, finally, (vii) make recommendations for actions to affect the outcomes (Li et al. 2021b). This AI-based, data-driven information will give grid planners and operators new insights to plan and operate the grid more efficiently (Khosrojerdi et al. 2021). It also offers flexibility to the energy providers to cleverly adjust supply to demand (Zhang et al. 2021a); a minimal forecasting sketch is given at the end of this subsection. While the biggest goal of AI in renewable energy is to manage intermittency, it can also offer improved safety, efficiency, and reliability. It can help to understand energy-consumption patterns and to identify energy leakage and the health of devices (Wang et al. 2009). However, AI could also be used to identify technically recoverable oil and gas resources and to optimize the coal sector, reducing global fossil-fuel prices and therefore reducing the competitiveness of renewable energy sources (Li et al. 2021a). The application of AI in the oil and gas industry is advancing very quickly: AI is gradually entering the various stages of the industry, such as intelligent drilling, extraction, pipelines, and refineries, and it will remain a future research direction (Li et al. 2021a). While AI can provide many advantages to the energy system, it can also raise concerns such as vulnerability to cyber-attacks, privacy and data ownership, and economic disruption (I. E. Agency 2017). Reported disruptions related to cyber-attacks in energy systems have been relatively small. Nevertheless, the increasing application of digitalized equipment and the growth of the Internet of Things (IoT) in energy systems could make cyber-attacks easier and cheaper to organize (I. E. Agency 2017).
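The forecasting sketch referred to above is given here: a hedged example of day-ahead electricity-demand prediction from calendar and weather features, assuming scikit-learn. The synthetic data, feature set, and model choice are illustrative assumptions only.

```python
# Minimal sketch: day-ahead electricity-demand forecast from hour-of-day and
# temperature. Synthetic data; in practice the inputs would come from smart-meter
# and weather records, and the model choice is an illustrative assumption.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 24 * 365
hour = np.tile(np.arange(24), n // 24)
temperature = 10 + 10 * np.sin(2 * np.pi * np.arange(n) / n) + rng.normal(0, 2, n)
demand = 500 + 50 * np.sin(2 * np.pi * hour / 24) - 3 * temperature + rng.normal(0, 10, n)

X = np.column_stack([hour, temperature])
X_train, y_train = X[:-24], demand[:-24]      # hold out the final day
X_next_day = X[-24:]

model = GradientBoostingRegressor().fit(X_train, y_train)
forecast = model.predict(X_next_day)
print("Forecast for the next 24 h (first 5 values):", forecast[:5].round(1))
```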
4.4 AI in Service of Energy Efficiency

AI is believed to be a critical element of energy systems, dealing with the different energy-supply options (electricity, hydrogen-based fuels, wind, nuclear, solar, and other renewable sources, carbon capture) along with the end-use perspective (electrical appliances, transportation, heating, manufacturing, industry, and others) (Gómez-Bombarelli et al. 2018; Lee 2019; Raza and Khosravi 2015). AI can help in
underpinning energy diversity and localization for infrastructure planning, energy-consumption forecasting, and intelligent control (Costamagna et al. 2019; Wang et al. 2020a). AI techniques are used to improve the design, manufacturing, and optimization of energy-consumption-related aspects, as well as to identify optimal materials and to improve the safety of energy use (Dostatni 2018; Guo et al. 2018; Solano et al. 2017; Ullah et al. 2020). AI could support the use of smart grids, which in turn could increase the efficiency of local and global energy systems. There are two enabling factors making a smart grid possible. In the first place, the deployment of modern IoT devices allows increasing the quantity and quality of data obtained from the network. Secondly, the big data collected can now be processed by AI to obtain quick results for decision-making that would be impossible for human operators (Ali and Choi 2020). The power grid is a complicated adaptive system under semi-autonomous distributed control with many uncertainties. The integration of renewable energy sources such as solar and wind farms, as well as electric and plug-in hybrid vehicles, adds further complexity and challenges at the different levels of the power grid. Many efforts have been put into smart-grid development to coordinate the interests of electricity consumers, utilities, and environmentalists (Venayagamoorthy 2009). Real-time data from buildings and weather forecasts, combined with smart systems, could predict when heating and cooling are needed, thus increasing system efficiency (I. E. Agency 2017; Yan et al. 2021). AI could also conduct active demand-side management for households in smart grids which contain distributed solar photovoltaic generation and energy storage (Di Santo et al. 2018). Smart demand response, for instance, could provide 185 gigawatts (GW) of electricity-system flexibility, approximately equal to the currently combined installed electricity-supply capacity of Australia and Italy (I. E. Agency 2017). Consensus exists among experts globally that our future energy supply should be economical, cleaner, and safer (Gielen et al. 2019). If used mindfully, this will in turn help sustainable development by making electricity more affordable and accessible, decreasing GHG emissions, and enabling efficient grid operations and reliable maintenance of power infrastructure.
4.5 Engagement of AI on Climate Change

AI can reduce costs, increase productivity, improve resource efficiency, and enhance efficient public services (Vinuesa et al. 2020a). AI has been proposed as an enabler for new ambitious policy proposals for addressing climate change, such as its use in the implementation of personal carbon allowances (Fuso Nerini et al. 2021). However, there are also risks and downsides associated with AI that we must all be aware of in order to address any potential short- or long-term undesired impacts (Gupta et al. 2020, 2021). AI can have a significant impact on global energy demand. AI technology development, research, and product design may require extensive computational resources, which are only accessible through advanced computing centers. Recent studies on the energy demand and emissions associated with
training and development of AI models have indicated the broader consequences of this rapid development (Henderson et al. 2020). Evidence is also emerging about the substantial climate impact of AI development (Lannelongue et al. 2021). These carbon footprints are associated mainly with the rapid development and training of AI algorithms with little consideration of the overall impact on the Earth system. Some estimates further show that the total electricity demand of ICT could grow to up to 20% of the global electricity demand by 2030, from around 1% today (Masanet et al. 2020). With the increasing amount of data from diverse sources, the role of AI will steadily increase. In particular, AI will play a vital role in the increasing debate on green, low-carbon electricity generation through optimal energy-storage scenarios. Several efforts have been made to decrease the carbon footprint of data centers by investing in energy-efficient infrastructure and switching to renewable sources of energy (Karnama et al. 2019; Masanet et al. 2020). Considering the current state of the wide range of dependencies, in one form or another, on AI and the services associated with AI systems (e.g., data collection and storage, hardware requirements and global shipments, training of AI/ML models, etc.), uncertainty persists in assessing the comprehensive carbon footprint of AI. It is crucial to keep pace with the growing demand for AI infrastructure, and whether the efficiency gains enabled by AI can be realized equally across the globe is an essential factor when considering the environmental impact. Evidence on the net energy effects of AI and associated digital technologies is emerging. The indirect effects of using AI are likely to have a more considerable impact than the energy savings, and that impact could be positive or negative depending on how mindfully AI is utilized. Rebound and systemic effects must be integrated to obtain a complete picture of whether, or under which conditions and contexts, AI services lead to a net positive or negative impact. Furthermore, the increased digitalization of strategic infrastructure introduces clear cyber-security challenges, and resource requirements will increase over time.
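To make the scale of such footprints concrete, the energy use and emissions of a training run can be approximated from the average power draw, the training time, the data-center overhead (power usage effectiveness, PUE), and the carbon intensity of the electricity grid. All numbers in the sketch below are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope estimate of the energy use and carbon footprint of a
# training run. Every input value below is an illustrative assumption.
num_gpus = 8
gpu_power_kw = 0.3        # assumed average draw per accelerator, in kW
training_hours = 120      # assumed wall-clock training time
pue = 1.5                 # assumed data-center power usage effectiveness (overhead)
grid_intensity = 0.4      # assumed kg CO2-equivalent per kWh of electricity

energy_kwh = num_gpus * gpu_power_kw * training_hours * pue
emissions_kg = energy_kwh * grid_intensity
print(f"Energy: {energy_kwh:.0f} kWh, emissions: {emissions_kg:.0f} kg CO2e")
```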
5 Conclusions and Outlook

AI deployment has major consequences for society, the economy, and the environment, and consequently for the SDGs. As evidenced by the COVID-19 crisis, AI can be a tool to increase the resilience of urban populations during times of crisis, but it also has negative impacts. An understanding of these effects is essential so that we can tackle other important crises, such as the climate emergency. In this contribution we have summarized the potential of AI to help achieve the SDGs related to healthy and sustainable societies, i.e., SDG 3 (on good health), SDG 11 (on sustainable cities), and SDG 13 (on climate action). When it comes to SDG 3, AI can help combat the shortage of healthcare workforce, which greatly affects low- and middle-income countries (LAMICs). There is potential in the optimization of available resources through triaging and improved diagnosis, as well as in more detailed
screening and prognosis. AI can also help in the context of automatic drug discovery or through GeoAI for patient-location history, which may help against epidemics and pandemics; this is aligned with the One Health approach. These applications of course raise socio-ethical concerns, including privacy and data handling, increased inequalities due to lower AI literacy or access, and the need for specialized training of healthcare professionals. In healthcare-decision assistance, there are problems with biased training datasets, which may lead to under-representation of certain ethnicities or social groups. Overall, there is tremendous potential in the context of this SDG, as long as the possible pitfalls are understood and properly handled.

Regarding SDG 11, AI can help with various aspects of understanding the climate impact and of preparing adaptation strategies for urban areas. To this end, satellite imaging and IoT-sensor-based data-collection methods are often preferred, because of their capacity to provide sustainable data over large areas and long periods of time. We discussed the use of AI for mapping and for predictive/generative modeling, as well as the use of explainable-AI methods, in order to provide solutions to understand vegetation cover, wildfire spreading, heat-island impacts, water security, air quality, and other applications that support climate adaptation. Explainable-AI methods are not only useful for understanding the climate indicators in more depth, but they are also important for increasing trust in AI models by bringing more transparency to their functionality (thus avoiding black-box modeling).

Finally, prediction and pattern-recognition capabilities, which may help to better prepare for extreme weather events, monitor biodiversity, and provide improved climate modeling, are areas where AI can help to achieve SDG 13. Also, increased energy efficiency through the integration of highly variable renewable energy sources into the energy mix, together with consumption forecasting and grid optimization (smart grids), is a relevant area fueled by AI. In this context, attention must be paid to cyber-security in AI-driven electrical grids, due to possible disruptions and data-privacy problems. Finally, it is important to note that there is a large carbon footprint associated with training complex and expensive AI models, and there is a strong need to decrease the carbon footprint of the data centers used for model development.

To conclude, the increased ability to acquire, process, and analyze large amounts of heterogeneous data is the main driver behind AI disruption. Pattern recognition and the reconstruction and predictive capabilities of state-of-the-art AI models present great opportunities for achieving healthy and sustainable societies. There are already a number of applications of AI related to health, smart cities, and smart grids in use, proving its potential. Nevertheless, the increased complexity of these models, which (in the case of deep learning) essentially act as black boxes consuming vast amounts of data, could hinder some of the efforts towards achieving the SDGs, in relation to equality and climate change. Privacy, data management and governance, the carbon footprint associated with the training and deployment of AI models, as well as their interpretability, are identified as key aspects which could be decisive for the role of AI in achieving healthy and sustainable societies.
Acknowledgments RV acknowledges the support of the KTH Sustainability Office and the KTH Digitalization Platform. SG acknowledges the support provided by the German Federal Ministry for Education and Research (BMBF) in the project “digitainable.”
References Alam, N., E.L. Hobbelink, A.-J. van Tienhoven, P.M. van de Ven, E.P. Jansma, and P.W. Nanayakkara. 2014. The Impact of the Use of the Early Warning Score (EWS) on Patient Outcomes: A Systematic Review. Resuscitation 85 (5): 587–594. Ali, S.S., and B.J. Choi. 2020. State-of-the-Art Artificial Intelligence Techniques for Distributed Smart Grids: A Review. Electronics 9 (6): 1030. Allam, Z., and Z.A. Dhunny. 2019. On Big Data, Artificial Intelligence and Smart Cities. Cities 89: 80–91. ISSN 0264-2751. https://doi.org/10.1016/j.cities.2019.01.032. Allen, M., O. Dube, W. Solecki, F. Arag ́on-Durand, W. Cramer, S. Humphreys, M. Kainuma, J. Kala, N. Mahowald, Y. Mulugetta, et al. 2018. Global Warming of 1.5 °C. An IPCC Special Report on the Impacts of Global Warming of 1.5 °C Above Pre-industrial Levels and Related Global Greenhouse Gas Emission Pathways, in the Context of Strengthening the Global Response to the Threat of Climate Change, Sustainable Development, and Efforts to Eradicate Poverty. Alsrehin, N.O., A.F. Klaib, and A. Magableh. 2019. Intelligent Transportation and Control Systems Using Data Mining and Machine Learning Techniques: A Comprehensive Study. IEEE Access 7: 49830–49857. Aslam, S., A. Khalid, and N. Javaid. 2020. Towards Efficient Energy Management in Smart Grids Considering Microgrids with Day-Ahead Energy Forecasting. Electric Power Systems Research 182: 106232. Ayturan, A., Z. Ayturan, and H. Altun. 2018. Air Pollution Modelling with Deep Learning: A Review. International Journal of Environmental Pollution & Environmental Modelling 1: 58–62. Bag, S., J.H.C. Pretorius, S. Gupta, and Y.K. Dwivedi. 2021. Role of Institutional Pressures and Resources in the Adoption of Big Data Analytics Powered Artificial Intelligence, Sustainable Manufacturing Practices and Circular Economy Capabilities. Technological Forecasting and Social Change 163: 120420. Barnes, E.A., J.W. Hurrell, I. Ebert-Uphoff, C. Anderson, and D. Anderson. 2019. Viewing Forced Climate Patterns Through an AI Lens. Geophysical Research Letters 46 (22): 13389–13398. Beaudoin, M., F. Kabanza, V. Nault, and L. Valiquette. 2016. Evaluation of a Machine Learning Capability for a Clinical Decision Support System to Enhance Antimicrobial Stewardship Programs. Artificial Intelligence in Medicine 68: 29–36. Bejnordi, B.E., M. Veta, P.J. Van Diest, B. Van Ginneken, N. Karssemeijer, G. Litjens, J.A. VanDer Laak, M. Hermsen, Q.F. Manson, M. Balkenhol, et al. 2017. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women with Breast Cancer. JAMA 318 (22): 2199–2210. Berrang-Ford, L., A.J. Sietsma, M. Callaghan, J.C. Minx, P.F. Scheelbeek, N.R. Haddaway, A. Haines, and A.D. Dangour. 2021. Systematic Mapping of Global Research on Climate and Health: A Machine Learning Review. The Lancet Planetary Health 5 (8): e514–e525. Bora, A., S. Balasubramanian, B. Babenko, S. Virmani, S. Venugopalan, A. Mitani, G. de Oliveira Marinho, J. Cuadros, P. Ruamviboonsuk, G.S. Corrado, et al. 2021. Predicting the Risk of Developing Diabetic Retinopathy Using Deep Learning. The Lancet Digital Health 3 (1): e10–e19. Boree, J. 2003. Extended Proper Orthogonal Decomposition: A Tool to Analyse Correlated Events in Turbulent Flows. Experiments in Fluids 35: 188–192.
Boukerche, A., Y. Tao, and P. Sun. 2020. Artificial Intelligence-Based Vehicular Traffic Flow Prediction Methods for Supporting Intelligent Transportation Systems. Computer Networks 182: 107484. ISSN 1389-1286. https://doi.org/10.1016/j.comnet.2020.107484. https://www. sciencedirect.com/science/article/pii/S1389128620311567. Boulos, M.N.K., and J. Le Blond. 2016. On the Road to Personalised and Precision Geomedicine: Medical Geology and a Renewed Call for Interdisciplinarity. Internal Journal of Health Geographics 15: 5. https://doi.org/10.1186/s12942-016-0033-0. Brockway, P.E., A. Owen, L.I. Brand-Correa, and L. Hardt. 2019. Estimation of Global Final- Stage Energy-Return-on-Investment for Fossil Fuels with Comparison to Renewable Energy Sources. Nature Energy 4 (7): 612–621. Buckland, C., R. Bailey, and D. Thomas. 2019. Using Artificial Neural Networks to Predict Future Dryland Responses to Human and Climate Disturbances. Scientific Reports 9 (1): 1–13. Buolamwini, J., and T. Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the Conference onFairness, Accountability, and Transparency – FAT* ’19, Volume 81 of Proceedings of Machine Learning Research, ed. S.A. Friedler and C. Wilson, 1–15. PMLR. http://proceedings.mlr.press/v81/ buolamwini18a.html. Carayannis, E., T. Barth, and D. Campbell. 2012. The Quintuple Helix Innovation Model: Global Warming as a Challenge and Driver for Innovation. Journal of Innovation and Entrepreneurship 1: 1. https://doi.org/10.1186/2192-5372-1-2. Carpentieri, M. 2013. Pollutant Dispersion in the Urban Environment. Reviews in Environmental Science and Biotechnology 12: 5–8. Cass, N., E. Shove, and J. Urry. 2005. Social Exclusion, Mobility and Access. The Sociological Review 53 (3): 539–555. Change, I.C., et al. 2014. Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Vol. 1454. Cambridge University Press. Chapman, H., A. Omar, J. Haynes, and S. Estes. 2018. Linking Satellite Data to the “One Health” Approach. AGU Fall Meeting Abstracts 2018: GH34B–09. Chen, H., O. Engkvist, Y. Wang, M. Olivecrona, and T. Blaschke. 2018. The Rise of Deep Learning in Drug Discovery. Drug Discovery Today 23 (6): 1241–1250. Collier, E., K. Duffy, S. Ganguly, G. Madanguit, S. Kalia, G. Shreekant, R. Nemani, A. Michaelis, S. Li, A. Ganguly, and S. Mukhopadhyay. 2018. Progressively Growing Generative Adversarial Networks for High Resolution Semantic Segmentation of Satellite Images. In 2018 IEEE International Conferenceon Data Mining Workshops (ICDMW), 763–769. https://doi. org/10.1109/ICDMW.2018.00115. Cook, R., W. Karesh, and S. Osofsky. 2004. The Manhattan Principles on ‘One World One Health’. In One World, One Health: Building Interdisciplinary Bridges to Health in a Globalized World, 29. New York: Wildlife Conservation Society. Costamagna, P., A. De Giorgi, G. Moser, S.B. Serpico, and A. Trucco. 2019. Data-Driven Techniques for Fault Diagnosis in Power Generation Plants Based on Solid Oxide Fuel Cells. Energy Conversion and Management 180: 281–291. Creswell, A., T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A.A. Bharath. 2018. Generative Adversarial Networks: An Overview. IEEE Signal Processing Magazine 35 (1): 53–65. https://doi.org/10.1109/MSP.2017.2765202. D. Hern ́andez, J.-C. Cano, F. Silla, C.T. Calafate, and J.M. Cecilia. 2021. AI-Enabled Autonomous Drones for Fast Climate Change Crisis Assessment. 
IEEE Internet of Things Journal 9 (10): 7286–7297. Dembrower, K., Y. Liu, H. Azizpour, M. Eklund, K. Smith, P. Lindholm, and F. Strand. 2020. Comparison of a Deep Learning Risk Score and Standard Mammographic Density Score for Breast Cancer Risk Prediction. Radiology 294 (2): 265–272. Dennet, D.C. 1997. When HAL Kills, Who’s to Blame? Computer Ethics. In HAL’s Legacy: 2001’s Computer as Dream and Reality, ed. D.G. Stork, 351–365. MIT Press. ISBN 978-0-262-19378-8.
Dewitte, S., J.P. Cornelis, R. M̈uller, and A. Munteanu. 2021. Artificial Intelligence Revolutionises Weather Forecast, Climate Monitoring and Decadal Prediction. Remote Sensing 13 (16): 3209. Di Santo, K.G., S.G. Di Santo, R.M. Monaro, and M.A. Saidel. 2018. Active Demand Side Management for Households in Smart Grids Using Optimization and Artificial Intelligence. Measurement 115: 152–161. Dogan, E., R. Örlü, D. Gatti, R. Vinuesa, and P. Schlatter. 2019. Quantification of Amplitude Modulation in Wall-Bounded Turbulence. Fluid Dynamics Research 51: 011408. Dostatni, E. 2018. Recycling-Oriented Eco-design Methodology Based on Decentralised Artificial Intelligence. Management and Production Engineering Review 9: 79–89. Downing, N.L., J. Rolnick, S.F. Poole, E. Hall, A.J. Wessels, P. Heidenreich, and L. Shieh. 2019. Electronic Health Record-Based Clinical Decision Support Alert for Severe Sepsis: A Randomised Evaluation. BMJ Quality and Safety 28 (9): 762–768. Dujon, A.M., and G. Schofield. 2019. Importance of Machine Learning for Enhancing Ecological Studies Using Information-Rich Imagery. Endangered Species Research 39: 91–104. E. Commission. 2018. Communication on Enabling the Digital Transformation of Health and Care in the Digital Single Market; Empowering Citizens and Building a Healthier Society. https:// digital-strategy.ec.europa.eu/en/library/communication-enabling-digital-transformation- health-and-care-digital-single-market-empowering. Engstrom, E, F. Strand, and P. Strimling. 2021. Human-AI Interactions in a Trial of AI Breast Cancer Diagnostics in a Real-World Clinical Setting. EC Air Quality Framework Directive. 1996. European Commission, Ambient Air Quality Assessment and Management. Council Directive 96/62/EC. Eivazi, H., L. Guastoni, P. Schlatter, H. Azizpour, and R. Vinuesa. 2021. Recurrent Neural Networks and Koopman-Based Frameworks for Temporal Predictions in a Low-Order Model of Turbulence. International Journal of Heat and Fluid Flow 90: 108816. Encinar, M.P., and J. Jiménez. 2019. Logarithmic-Layer Turbulence: A View from the Wall. Physical Review Fluids 4: 114603. Esteva, A., B. Kuprel, R.A. Novoa, J. Ko, S.M. Swetter, H.M. Blau, and S. Thrun. 2017. Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks. Nature 542 (7639): 115–118. European Environment Agency. 2021. Europe’s Air Quality Status 2021, Briefing No. 08/2021. European Environment Agency. Fenech, M.E., and O. Buston. 2020. AI in Cardiac Imaging: A UK-Based Perspective on Addressing the Ethical, Social, and Political Challenges. Frontiers in Cardiovascular Medicine 7: 54. ISSN 2297-055X. https://doi.org/10.3389/fcvm.2020.00054. https://www.frontiersin. org/article/10.3389/fcvm.2020.00054. Feng, P., B. Wang, D. Li Liu, C. Waters, and Q. Yu. 2019. Incorporating Machine Learning with Biophysical Model Can Improve the Evaluation of Climate Extremes Impacts on Wheat Yield in South-Eastern Australia. Agricultural and Forest Meteorology 275: 100–113. Field, C.B., V. Barros, T.F. Stocker, and Q. Dahe. 2012. Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation: Special Report of the Intergovernmental Panel on Climate Change. Cambridge University Press. Fuso Nerini, F., T. Fawcett, Y. Parag, and P. Ekins. 2021. Personal Carbon Allowances Revisited. Nature Sustainability 4: 1–7. Garcia-Vidal, C., G. Sanjuan, P. Puerta-Alcalde, E. Moreno-Garc ́ıa, and A. Soriano. 2019. Artificial Intelligence to Support Clinical Decision-Making Processes. eBioMedicine 46: 27–29. 
George, G., R.K. Merrill, and S.J. Schillebeeckx. 2021. Digital Sustainability and Entrepreneurship: How Digital Innovations Are Helping Tackle Climate Change and Sustainable Development. Entrepreneurship Theory and Practice 45 (5): 999–1027. Gerke, S., T. Minssen, and G. Cohen. 2020. Chapter 12 – Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare. In Artificial Intelligence in Healthcare, ed. A. Bohr and K. Memarzadeh, 295–336. Academic. ISBN 978-0-12-818438-7. https://doi.
org/10.1016/B978-0-12-818438-7.00012-5. https://www.sciencedirect.com/science/article/pii/ B9780128184387000125. Ghiggi, G., V. Humphrey, S.I. Seneviratne, and L. Gudmundsson. 2019. Grun: An Observation- Based Global Gridded Runoff Dataset From 1902 to 2014. Earth System Science Data 11 (4): 1655–1674. Giacobbe, D.R., S. Mora, M. Giacomini, and M. Bassetti. 2020. Machine Learning and Multidrug- Resistant Gram-Negative Bacteria: An Interesting Combination for Current and Future Research. Antibiotics 9 (2): 54. Gielen, D., F. Boshell, D. Saygin, M.D. Bazilian, N. Wagner, and R. Gorini. 2019. The Role of Renewable Energy in the Global Energy Transformation. Energy Strategy Reviews 24: 38–50. Gómez-Bombarelli, R., J.N. Wei, D. Duvenaud, J. M. Herńandez-Lobato, B. Śanchez-Lengeling, D. Sheberla, J. Aguilera-Iparraguirre, T.D. Hirzel, R.P. Adams, and A. Aspuru-Guzik. 2018. Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules. ACS Central Science 4 (2): 268–276. Goyal, M.K., B. Bharti, J. Quilty, J. Adamowski, and A. Pandey. 2014. Modeling of Daily Pan Evaporation in Sub Tropical Climates Using ANN, LS-SVR, Fuzzy Logic, and Anfis. Expert Systems with Applications 41 (11): 5267–5276. Guastoni, L., A. Güemes, A. Ianiro, S. Discetti, P. Schlatter, H. Azizpour, and R. Vinuesa. 2021. Convolutional-Network Models to Predict Wall-Bounded Turbulence from Wall Quantities. Journal of Fluid Mechanics 928: A27. Güemes, A., S. Discetti, A. Ianiro, B. Sirmacek, H. Azizpour, and R. Vinuesa. 2021. From Coarse Wall Measurements to Turbulent Velocity Fields Through Deep Learning. Physics of Fluids 33: 075121. Guo, H., X. Pu, J. Chen, Y. Meng, M.-H. Yeh, G. Liu, Q. Tang, B. Chen, D. Liu, S. Qi, et al. 2018. A Highly Sensitive, Self-Powered Triboelectric Auditory Sensor for Social Robotics and Hearing Aids. Science robotics 3 (20): eaat2516. Gupta, S., E. Pebesma, A. Degbelo, and A.C. Costa. 2018a. Optimising Citizen-Driven Air Quality Monitoring Networks for Cities. ISPRS International Journal of Geo-Information 7 (12): 468. Gupta, S., E. Pebesma, J. Mateu, and A. Degbelo. 2018b. Air Quality Monitoring Network Design Optimisation for Robust Land Use Regression Models. Sustainability 10 (5): 1442. Gupta, S., M. Motlagh, and J. Rhyner. 2020. The Digitalization Sustainability Matrix: A Participatory Research Tool for Investigating Digitainability. Sustainability 12 (21): 9283. Gupta, S., S.D. Langhans, S. Domisch, F. Fuso-Nerini, A. Fellander, M. Battaglini, M. Tegmark, and R. Vinuesa. 2021. Assessing Whether Artificial Intelligence Is an Enabler or an Inhibitor of Sustainability at Indicator Level. Transportation Engineering 4: 100064. Haupt, S.E., T.C. McCandless, S. Dettling, S. Alessandrini, J.A. Lee, S. Linden, W. Petzke, T. Brummet, N. Nguyen, B. Kosovi ́c, et al. 2020. Combining Artificial Intelligence with Physics-Based Methods for Probabilistic Renewable Energy Forecasting. Energies 13 (8): 1979. Heaviside, C., S. Vardoulakis, and X.-M. Cai. 2016. Attribution of Mortality to the Urban Heat Island During Heatwaves in the West Midlands, UK. Environmental Health 15: S27. Henderson, P., J. Hu, J. Romoff, E. Brunskill, D. Jurafsky, and J. Pineau. 2020. Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning. Journal of Machine Learning Research 21 (248): 1–43. Herweijer, C., and D. Waughray. 2018. Fourth Industrial Revolution for the Earth Harnessing Artificial Intelligence for the Earth. A Report of Pricewaterhouse Coopers (PwC). 
Hochreiter, S., and J. Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9 (8): 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735. Hosseini, Z., R.J. Martinuzzi, and B.R. Noack. 2015. Sensor-Based Estimation of the Velocity in the Wake of a Low-Aspect-Ratio Pyramid. Experiments in Fluids 56: 13. ———. 2016. Modal Energy Flow Analysis of a Highly Modulated Wake Behind a Wall-Mounted Pyramid. Journal of Fluid Mechanics 798: 717–750. Howe, J., K. Pula, and A.A. Reite. 2019. Conditional Generative Adversarial Networks for Data Augmentation and Adaptation in Remotely Sensed Imagery. In Applications of Machine
Learning, ed. M.E. Zelinski, T.M. Taha, J. Howe, A.A.S. Awwal, and K.M. Iftekharuddin, vol. 11139, 119–131. International Society for Optics and Photonics, SPIE. https://doi. org/10.1117/12.2529586. Hu, Y., S. Gao, S.D. Newsam, and D.D. Lunga. 2018. Geoai 2018 Workshop Report the 2nd ACM Sigspatial International Workshop on Geoai: AI for Geographic Knowledge Discovery Seattle, WA, USA-November 6, 2018. ACM SIGSPATIAL Special 10 (3): 16. Hu, Z., Y. Jin, Q. Hu, S. Sen, T. Zhou, and M.T. Osman. 2019. Prediction of Fuel Consumption for Enroute Ship Based on Machine Learning. IEEE Access 7: 119497–119505. Huntingford, C., E.S. Jeffers, M.B. Bonsall, H.M. Christensen, T. Lees, and H. Yang. 2019. Machine Learning and Artificial Intelligence to Aid Climate Change Research and Preparedness. Environmental Research Letters 14 (12): 124007. I. E. Agency. 2017. Digitalization & Energy. IEA. I. M. Ltd. 2020. The Complexities of Physician Supply and Demand: Projections from 2018 to 2033. Washington, DC: AAMC. Illingworth, S.J., J.P. Monty, and I. Marusic. 2018. Estimating Large-Scale Structures in Wall Turbulence Using Linear Models. Journal of Fluid Mechanics 842: 146–162. Istepanian, R.S., and T. Al-Anzi. 2018. m-health 2.0: New Perspectives on Mobile Health, Machine Learning and Big Data Analytics. Methods 151: 34–40. Jabbar, A., X. Li, and B. Omar. 2021. A Survey on Generative Adversarial Networks: Variants, Applications, and Training. ACM Computing Surveys 54 (8). https://doi.org/10.1145/3463475. Jasim, O.Z., N.H. Hamed, and M.A. Abid. 2020. Urban Air Quality Assessment Using Integrated Artificial Intelligence Algorithms and Geographic Information System Modeling in a Highly Congested Area, Iraq. Journal of Southwest Jiaotong University 55 (1). https://doi. org/10.35741/issn.0258-2724.55.1.16. Jean, N., M. Burke, M. Xie, W.M. Davis, D.B. Lobell, and S. Ermon. 2016. Combining Satellite Imagery and Machine Learning to Predict Poverty. Science 353: 790–794. Jin, C., H. Yu, J. Ke, P. Ding, Y. Yi, X. Jiang, J. Tang, D.T. Chang, X. Wu, F. Gao, et al. 2021. Predicting Treatment Response from Longitudinal Images Using Multi-task Deep Learning. Nature Communications 12 (1): 1–11. Johnston, F., A. Wheeler, G. Williamson, S. Campbell, P. Jones, I. Koolhof, C. Lucani, N. Cooling, and D. Bowman. 2018. Using Smartphone Technology to Reduce Health Impacts from Atmospheric Environmental Hazards. Environmental Research Letters 13 (4): 044019. Jumper, J., R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A.ˇZ ́ıdek, A. Potapenko, et al. 2021. Highly Accurate Protein Structure Prediction with Alphafold. Nature 596 (7873): 583–589. Karnama, A., E.B. Haghighi, and R. Vinuesa. 2019. Organic Data Centers: A Sustainable Solution for Computing Facilities. Results in Engineering 4: 100063. Khalil, U., B. Aslam, U. Azam, and H.M.D. Khalid. 2021. Time Series Analysis of Land Surface Temperature and Drivers of Urban Heat Island Effect Based on Remotely Sensed Data to Develop a Prediction Model. Applied Artificial Intelligence 0 (0): 1–26. https://doi.org/10.108 0/08839514.2021.1993633. Kharat, R., and T. Devi. 2021. Artificial Intelligence in Environmental Management. In Artificial Intelligence Theory, Models, and Applications, 37–46. Auerbach Publications. Khosrojerdi, F., O. Akhigbe, S. Gagnon, A. Ramirez, and G. Richards. 2021. Integrating Artificial Intelligence and Analytics in Smart Grids: A Systematic Literature Review. 
International Journal of Energy Sector Management 16 (2): 318–338. Kim, D.-W., and C.-J. Cha. 2021. Antibiotic Resistome from the One-Health Perspective: Understanding and Controlling Antimicrobial Resistance Transmission. Experimental & Molecular Medicine 53 (3): 301–309. Kim, T.-Y., and S.-B. Cho. 2019. Predicting Residential Energy Consumption Using CNN-LSTM Neural Networks. Energy 182: 72–81. Kinross, J.M., S.E. Mason, G. Mylonas, and A. Darzi. 2020. Next-Generation Robotics in Gastrointestinal Surgery. Nature Reviews Gastroenterology & Hepatology 17 (7): 430–440.
Artificial Intelligence: Poverty Alleviation, Healthcare, Education, and Reduced Inequalities in a Post-COVID World

Margaret A. Goralski and Tay Keong Tan
M. A. Goralski (*), Quinnipiac University, Hamden, CT, USA, e-mail: [email protected]
T. K. Tan, Radford University, Radford, VA, USA, e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_6

Abstract On October 25, 2015, the General Assembly of the United Nations (UN) set forth an agenda which included 17 Sustainable Development Goals (SDGs) and 169 targets to transform the world by 2030. The agenda set forth a plan of action that recognized a myriad of challenges which, if surmounted, could empower people, benefit the planet, and create an impetus for worldwide prosperity. Due to the coronavirus pandemic and its economic and social fallout, the world today is not on track to attain the SDGs by the year 2030. However, the disruptive impact of the pandemic on many areas of life was, among other things, a "game changer" with respect to our (human) approaches to artificial intelligence (AI) and to AI itself. The global pandemic caused a major shift with regard to AI: it revealed that, in this day and age, AI is a necessity for the flourishing of humanity worldwide. It is no longer a luxury. Developed and developing countries alike were caught unaware by the COVID disruption. All experienced gaps in healthcare and education delivery and increased poverty in one form or another. In this situation, AI turned out to be not merely useful; it quickly proved itself to be indispensable. In a world that is still struggling to recover from the pandemic, AI has played and will continue to play a major role in transforming the work of poverty alleviation, hence affecting the advancement of the poverty-related SDGs. The chapter will present examples of AI implementation in areas of the world where poverty is significant: China, India, and two countries in Africa. It will look at rural poverty specifically, although urban poverty is growing at exponential rates, and examine how AI has affected the work of alleviating poverty through improving healthcare delivery and strengthening access to education. The analysis will delve into the advancement of specific SDGs with the use of AI, such as SDG #1 no
poverty, SDG #3 good health and well-being, SDG #4 quality education, and SDG #10 reduced inequalities. Finally, this chapter will draw policy implications for the work of fighting extreme poverty in a post-COVID and increasingly AI-enabled world.

Keywords Artificial intelligence · Poverty alleviation · Global pandemic
1 The UN SDGs

The SDGs were meant to stimulate action and to underline both the critical importance of each goal and the interrelatedness of the 17 goals. No poverty (end poverty in all its forms everywhere) is the first of the SDGs. When the UN General Assembly gathered in 2015, there were rising inequalities and disparities of opportunity, wealth, and power in many countries and communities around the world. The preamble to the UN SDGs document states: "We recognize that eradicating poverty in all its forms and dimensions, including extreme poverty, is the greatest global challenge and an indispensable requirement for sustainable development" (Transforming our world: the 2030 Agenda for Sustainable Development 2015, 1). While millions have escaped extreme poverty, many more remain trapped. Members of the General Assembly believed that with the spread of information and technology, and the interconnectedness of the world, there was great potential to "bridge the digital divide" and develop universal knowledge of technological innovations in medicine, education, and other fields (Transforming our world: the 2030 Agenda for Sustainable Development 2015).
Poverty has been a challenge to human societies throughout history. It has been reduced, but never to the point where the world could declare this malaise eradicated. Based on the UN SDGs Report (2021), the global poverty rate is expected to be about 7%, or approximately 600 million people, in 2030. Thus, SDG #1 will miss its target of eradicating poverty, or achieving "no poverty," by that year. Extreme poverty1 rose from 8.4% in 2019 to 9.5% in 2020, mostly due to the worldwide impact of the COVID-19 pandemic (United Nations Sustainable Development Goals Report 2021 2021, 28). This was the first rise in global extreme poverty in a generation. Governmental social protection measures in 2020 covered only about 46.9% of the global population, leaving approximately four billion people with no social safety net (United Nations Sustainable Development Goals Report 2021 2021, 29). The uneven access to essential public services, such as healthcare and vaccines, further exacerbated the problems of the poorest of the poor and widened the inequality gap within countries and communities.
1 The World Bank updated the nominal poverty line from $1.25 to $1.90 per day in 2015. The change in dollar value of the line reflects changes in the estimated purchasing power parity (PPP) of the dollar in poor countries. The line seeks to keep the real value constant even though relative prices change. The PPP exchange rates allow a comparison of the prices of goods and services across countries. This same poverty line is used by the United Nations and others to track progress in the elimination of extreme poverty and to measure the accountability of the international community in reporting progress (Principles and Practice in Measuring Global Poverty 2016).
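To make the purchasing power parity (PPP) logic in the note above concrete, here is a minimal numerical sketch. All prices and the resulting PPP rate below are invented for exposition; they are not World Bank figures.

```python
# Hypothetical illustration of how a PPP exchange rate converts the
# $1.90-per-day international poverty line into local currency.
# All numbers below are invented for exposition only.

basket_cost_local = 950.0   # cost of a reference basket in local currency units (assumed)
basket_cost_usd = 50.0      # cost of the same basket in US dollars (assumed)

ppp_rate = basket_cost_local / basket_cost_usd   # local units per PPP dollar -> 19.0

poverty_line_usd_per_day = 1.90
poverty_line_local_per_day = poverty_line_usd_per_day * ppp_rate  # 36.1 local units

print(f"PPP rate: {ppp_rate:.1f} local units per dollar")
print(f"Local poverty line: {poverty_line_local_per_day:.1f} local units per day")
```

The point of the conversion is simply that the line is anchored to what the money can buy locally, not to the market exchange rate.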
2 A Brief Exploration of Various Theories of Poverty

There have been two major approaches to the sustenance problem (preventing or alleviating poverty, especially famine) throughout history. One approach was to secure enough food for the existing population by what would today be called a "rainy day fund." This approach can be illustrated by the Biblical story of Joseph advising Pharaoh how to prepare the land of Egypt for the imminent 7 years of disastrous crops by saving food for the future (Genesis 41: 25–36). The other approach was that represented by Plato (in The Republic) and Aristotle (in The Politics), who both focused on adjusting the size of the human population to the amount of available food. Both of these approaches survived for centuries in the Western tradition.
Due to new ways of thinking in economics and philosophy in the mid-eighteenth century, the First Poverty Enlightenment occurred near the end of that century. It rejected the view that inequalities were inevitable and brought about a new respect for poor people (Ravallion 2016). The economy became a tool for advancing human welfare and included poor people. Adam Smith, the Scottish economist and philosopher, was instrumental in this incorporation of human welfare into the economy (Ravallion 2016). In the 1960s and 1970s, a comprehensive anti-poverty policy was put into place that viewed poverty as unacceptable. Poverty was no longer viewed as inevitable, but instead as something that society could eliminate (Ravallion 2016). (For a more detailed study of poverty see Ravallion (2016).)
During the time of the Industrial Revolution, the views of the British economist Thomas Robert Malthus (1766–1834) enjoyed great popularity. Malthus, clearly in agreement with Plato and Aristotle, argued that the power of population to increase is infinitely greater than the power of the earth to produce subsistence; therefore, if population increases faster than the food supply, famine and poverty will necessarily follow. If the population is left unchecked, it will increase in a geometrical ratio, whereas subsistence increases only in an arithmetical ratio; the numbers therefore clearly show the immensity of the first power in comparison to the second (Malthus 1803). Although the theory of Malthus is not as specific as that of Plato or Aristotle, he states that, like plants and animals in nature, if human population increases without subsistence to nourish it and room to grow, then population will never be able to increase beyond the lowest nourishment capable of supporting it. Therefore, the intense human need to continuously acquire food for sustenance would necessarily be severely felt, proliferating various forms of misery, famine, and fear.
Malthus argued that the Parish Laws of England (Poor Laws) had contributed to raising the price of provisions and lowering the price of labor, thus contributing to
impoverishment, since labor was the only possession of the poor. He believed that money cannot raise a poor man and enable him to live a better life without proportionately depressing others within the same class. If the poor were given uncultivated land, and made to produce upon that land, then the man and other members of society would benefit. However, if a poor man were given money while the food production of the country remained the same, then that man would only have been given a larger share of the produce, which he cannot receive without diminishing the shares of produce for others in society (Malthus 1803).
Ester Boserup (1910–1999), a Danish economist, proposed a theory that challenged that of Malthus. She argued that agricultural developments are caused by population trends, not the other way around (Boserup 1970). Boserup believed that as population pressure increases, agricultural technology advances in response, thereby factoring innovation into the solution. Her theory was concerned with the effects of population on changes in agriculture, not with the causes of population growth. Boserup's approach was in alignment with Joseph's: it basically states that a nation needs to produce enough food to sustain its population for the future.
There is also a third approach, developed in recent times, i.e., since the Industrial Revolution, by those who claim that the proper application of sophisticated technologies could lead to the alleviation of the sustenance problem. The most powerful and promising technological tool that humankind possesses at present is AI.
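Malthus's contrast between geometrical growth of population and arithmetical growth of subsistence can be made concrete with a small numerical sketch. The starting values and growth parameters below are illustrative assumptions for exposition, not figures taken from Malthus (1803).

```python
# Illustrative comparison of geometric population growth with arithmetic
# growth in subsistence, in the spirit of Malthus (1803).
# Starting values and growth parameters are assumptions for exposition only.

population = 1.0      # population in arbitrary units
subsistence = 1.0     # food supply in the same units; initially sufficient

for generation in range(1, 9):
    population *= 2       # geometric: doubles each generation (2, 4, 8, 16, ...)
    subsistence += 1      # arithmetic: grows by a fixed increment (2, 3, 4, 5, ...)
    ratio = population / subsistence
    print(f"generation {generation}: population={population:.0f}, "
          f"subsistence={subsistence:.0f}, people per unit of food={ratio:.1f}")
```

After only eight generations the assumed population has multiplied 256-fold while subsistence has merely increased ninefold, which is the widening gap Malthus's argument turns on.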
3 The Relevant Connection of AI to Poverty Alleviation

[AI] is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable. (McCarthy 2004)
McCarthy’s was one of the first definitions of AI; however, as technology has evolved over time, various alternate definitions of AI have been created. One of the most recent, and the one used in this chapter, is that of Amazon, “Artificial Intelligence (AI) is the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition” (What is Artificial Intelligence? n.d.). There are currently three general categories of AI – artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI) (Goralski and Tan, Artificial intelligence and sustainable development 2020). AI, which is currently in use, is in the category of ANI. Some examples include Google’s Alexa, Apple’s Siri, and IBM’s Watson (Artificial Intelligence (AI) 2020). To provide concrete examples for our analyses of the impact of AI on world poverty and development issues, we include studies of emerging practices and new technology applications in two sectors that are important to those who are living in extreme poverty – healthcare and education.
4 Healthcare

Healthy lives and the promotion of well-being for people of all ages is SDG #3. Recent AI innovations in healthcare and education could bring profound changes to the alleviation of poverty in most of the world, in particular in rural areas. Unfortunately, advances in healthcare via AI have been slow to migrate into poverty-stricken areas. This section focuses on rural healthcare in China and two countries in Africa, Malawi and Nigeria, where much poverty still exists. It will draw connections between some AI applications in healthcare and solutions in the field.
According to the UN Population Division and the World Bank, the estimated population of China in 2020 is 1,402,112,000 people, of which the rural population is 38.57% (Rural population China 2020). The approximate population of Malawi is 19,129,952 people, with a rural population of 82.57% in 2020 (Rural population Malawi 2020). Nigeria has a population of approximately 206,139,590 people in 2020, with a rural population of approximately 48.04% (Rural Population – Nigeria 2019). The World Health Organization's data on nursing and midwifery showed a global shortage of healthcare workers, specifically midwives and nurses, who make up approximately 50% of the shortage. The largest shortages are in Southeast Asia and Africa (Nursing and midwifery 2020). The number of practicing physicians available in rural areas is approximately 2.0 per 1000 people in China as of 2017 (Physicians (per 1000 people) 2020) and 0.4 physicians per 1000 in Nigeria (Physicians (per 1000 people) – Nigeria 2018); no comparable statistic on physicians per 1000 people is available for Malawi.
Besides inadequate access to qualified healthcare and shortages of nurses, midwives, and doctors, there are several inherent challenges that limit people in rural areas from receiving adequate healthcare services and fully utilizing advanced technology in medical care: lack of transportation to where healthcare is available, or an inability to pay for transportation if it is available; lack of access to reliable electrical power supplies and broadband access to the Internet; and inadequate education and training of medical workers. AI can help assuage this lack of medical treatment and access to other advanced technology by narrowing the disparity in healthcare services between urban and rural populations. AI-based data collection could identify people with symptoms and create realistic solutions that allow healthcare providers to develop treatment pathways that were not available in the past (Kopparapu and Kopparupu 2020). AI-powered diagnostics could use a patient's unique history as a baseline against which small deviations would flag possible health conditions that need further investigation and treatment.
There are three levels of medical AI that could alleviate the problems of healthcare in remote areas if factored into developmental plans – basic, moderate, and high level – but these three levels would usually require the involvement of government, policymakers, technology-equipment manufacturers, and healthcare workers from
baseline rural communities to connect with the top-ranked hospitals in urban areas (Guo and Li 2018).
China offers an illustrative example of the levels of healthcare highlighted above. Although it has emerged as a global economic superpower in recent decades, there is still a huge, relatively poor rural population and an unequal distribution of healthcare. There are 300 million people in China suffering from chronic diseases (Ho 2018) like heart disease, strokes, diabetes, and chronic lung diseases. As the world's second-largest economy, China's cradle-to-grave system of socialized medicine has improved life expectancy and maternal mortality rates. However, this lengthening of life and reduction in mortality rates is taxing the healthcare system to a point where it cannot support its population (Wee 2018).
In 2016, Chinese President Xi Jinping set forth a blueprint to improve healthcare services and called it "Healthy China 2030." It was to strengthen health innovation and make medical treatment available for all. The Chinese government formed a collaborative platform with technology firms and healthcare providers to promote innovative ideas and to highlight new projects in "intelligent medicine."2 Beginning in January 2018, a rural dweller could have an electrocardiograph and blood test conducted in a village AI clinic and reviewed by a doctor at a big city hospital located miles away. These new healthcare services would prevent patients from having to travel and stand in long lines hoping to see a physician after many hours of waiting. The three-tier system for delivery of rural health services consists of county-level hospitals, township healthcare facilities, and village clinics. As of 2018, 349 villages in Henan province received mobile all-in-one diagnostic stations that are highly transportable, can conduct 11 common medical tests, and automatically upload data for online consultation (Dai 2018). This is part of the rural healthcare program cooperative agreement between the Chinese government authorities and one of the top Chinese tech giants, Tencent.3
2 Intelligent medicine is a term that originated with Dr. Ronald L. Hoffman in his book of the same title. Hoffman defined the term as a complete spectrum of healthcare options, but the term has evolved since his book was published in 1997. The State Council of China defines it as the integration of artificial intelligence (AI) technology with medical care to improve healthcare services. For more information see Lung (2018, May 8) China launches national association to speed up integration of AI with healthcare. https://opengovasia.com/china-launches-national-association-to-speed-upintegration-of-ai-with-healthcare/
3 Tencent has a broad portfolio of interests similar to Google's parent company Alphabet. First quarter earnings 2020 showed revenue of 108 billion Chinese yuan (US $15.2 billion). For more information see Kleinman (2020, August 7). What is Tencent? BBC News. https://www.bbc.com/news/technology-53696743
The government has engendered an oligopolistic marketplace that fosters competition between healthcare technology firms like Good Doctor and We Doctor to serve rural populations and innovate new systems and services in the healthcare industry. We Doctor educates village medical workers on the use of AI equipment. Through it, medical records are automatically uploaded and generate a diagnosis, which is then reviewed and referenced by a doctor at an urban hospital. Good Doctor is an extension of the financial conglomerate Ping An Insurance Group, which is
constructing smart clinics with remote consulting services. Diagnostics are powered by AI, healthcare workers are trained in rural villages, and the health statistics gathered in Chinese rural areas create a foundation for the development and adoption of these new innovations and similar AI-driven applications in the future.
With Chinese companies having already established relationships in parts of Africa through President Xi Jinping's Belt and Road Initiative (BRI),4 AI technology is expected to be marketed to other rural populations in Africa, South Asia, and other parts of the world as part of China's long-term development plan. However, there are worries that China's infrastructure investments may lay a debt trap for governments in the future (Chatzky and McBride 2020).
4 China's Belt and Road Initiative (BRI) (considered by some to be the New Silk Road) is a vast collection of development and investment projects that would eventually stretch from East Asia to Europe, expanding the political and economic influence of China. Development of the Asia-Africa Growth Corridor (AAGC) is a part of this BRI expansion plan. For more information refer to the Council on Foreign Relations "China's Massive Belt and Road Initiative" 2020 January 28. https://www.cfr.org/backgrounder/chinas-massive-belt-and-road-initiative
According to the World Health Organization, Africa carries 25% of the world's disease burden but its share of global health expenditures is less than 1%. African governments need to provide access to basic healthcare and train more community health workers. AI again can help mitigate this problem. We highlight two countries in Africa, Malawi and Nigeria.
Malawi is a landlocked country in southeastern Africa. Its challenges are an inequitable distribution of resources, fragmented services, and shortages of staff. Approximately 28% of Malawi's economy is based on agriculture, fishing, and forestry (Makwero 2018). It is among five sub-Saharan African countries with a very high maternal mortality rate (Yaya et al. 2016). A brief explanation of the four levels of the Malawi healthcare system follows (Makwero 2018):
• The District Health Management Team (DHMT) operates from a district hospital. It monitors and evaluates the district healthcare activities.
• Health centers are staffed by nurses and medical assistants or clinical officers (mid-level practitioners). Nurses largely deal with primary maternal and child health services.
• The community links with the primary care facility via a team of health surveillance assistants (HSAs), community health workers (CHWs), and traditional healers. (HSAs receive 6 weeks of initial health preservice training and usually reside in the community.)
• In the community, HSAs are seen as "doctors" (Makwero 2018).
Malawi's healthcare delivery system is based on primary healthcare (PHC) (Makwero 2018). The study of Yaya et al. (2016) examined the impact of wealth inequality on maternal healthcare services. They conclude that the high maternal mortality rate hinders Malawi from achieving the maternal health-related mandates of the UN SDGs and recommend an equity-based policy to include
education in rural areas and solutions to issues related to a quality gap in the maternal healthcare services (MHS) in urban vs. rural areas. An article on faith-based provision of sexual and reproductive healthcare in Malawi (faith-based organizations being the second-largest healthcare providers there) stated that faith-based providers were less likely to share the national family planning guidelines than public providers (Tafesse and Chalkley 2021). This was specifically the case with family planning methods, condom promotion, HIV prevention, and dissemination of information on sexually transmitted infections (STIs) (Tafesse and Chalkley 2021). Faith-based providers deliver approximately 70% of services in Africa.
AI in the form of mobile phones has been incorporated into Malawi health centers to provide crucial healthcare services to people in rural areas through text messaging. A person's mobile phone becomes a microcosmic health clinic – a small representative system within a larger system (Oyaro 2016–2017). The basic mobile phone becomes a clinic; hence, a patient can get the information needed from a doctor without having to travel to a clinic. Text message services give reminders about taking medication and tips about how to live a healthier life. It is convenient and easy for a patient to connect with a healthcare provider at any time of the day, and this is especially helpful for pregnant women. They can receive prenatal and postnatal information as well as general health information, such as using mosquito nets to prevent malaria, the risk of mother-to-child HIV transmission, and general healthcare advice. AirTel, a mobile phone company, supports the system and serves more than 500,000 mothers and children (Oyaro 2016–2017). There were 2.81 million Internet users in Malawi in January 2020 and 8.58 million mobile connections, the latter equal to approximately 45% of the population (Kemp 2020). Another study found that most respondents owned or had use of a basic mobile phone, even though there was some inequality in access by region (Marron et al. 2020). The Malawi government, which is trying to improve maternal mortality rates, has fully endorsed this innovative way of providing healthcare remotely. This AI implementation fully supports SDG #3 to ensure healthy lives and promote well-being at all ages. AI has been able to provide specific information and care to women and children without bias and without inflicting the inconveniences of travel to a rural clinic.
Nigeria is a West African country located on the Gulf of Guinea. It is potentially one of the wealthiest countries in Africa, due primarily to its large oil resources. Unfortunately, the economy is devastated by domestic unrest. More than 60% of the Nigerian populace live in rural areas with extreme shortages of healthcare facilities and practitioners due to location isolation and lack of opportunity (Olaronke and Oluwaseun 2016). Access to healthcare is a struggle due to underfunded national health systems, a lack of basic infrastructure – clean water and electricity – and a shortage of healthcare workers (Tafirenyika 2016–2017). High maternal and child mortality rates in most of Africa (just as in the two countries discussed here) remain a major concern. Infections related to the delivery process and communicable diseases are the leading causes of death. "Every day in Nigeria, 257 babies die within their first month of life, and 40,000 women die from
pregnancy related causes each year" (Helping half a million pregnant women in Nigeria get better antenatal care with a portable ultrasound device 2019). Nigeria is ranked number one for maternal mortality in sub-Saharan Africa (Helping half a million pregnant women in Nigeria get better antenatal care with a portable ultrasound device 2019).
General Electric (GE), a US corporation, has begun to tap into new technologies that can diagnose health conditions and diseases more efficiently and accurately (Rao and Joseph 2016–2017). One AI innovation is Vscan, a non-invasive ultrasound device the size of a mobile phone, which provides real-time high-resolution images used in medical fields such as cardiology, obstetrics, and gynecology (Rao and Joseph 2016–2017). GE, creator of Vscan, along with the US and Nigerian governments, invested $20 million in its Healthymagination Mother & Child Initiative (HCMI) to screen mothers in Nigeria and identify at-risk pregnancies (Lawrence 2016). It is an asset in prenatal and antenatal care for mothers who do not have access to a healthcare facility in their rural location (Rao and Joseph 2016–2017). It is easy for midwives and healthcare workers to navigate with a touch screen, and it can detect birth defects in fetuses and monitor high-risk pregnancies to determine the position of the baby prior to birth. Since the scan is immediate and non-invasive, it is openly accepted by pregnant women and caregivers. The program began in 2017, with the expectation of helping 560,000 expectant Nigerian women in rural areas through 1.1 million antenatal scans and hours of training and mentoring of midwives and antenatal primary caregivers (Helping half a million pregnant women in Nigeria get better antenatal care with a portable ultrasound device 2019). HCMI provides the scans free of charge, or at a very low cost, to pregnant women in rural areas. The transport expense to secondary hospitals is also averted.
AI, in this example, easily fits within SDG #3, ensuring healthy lives and promoting the well-being of all. For women in Nigeria, the experience of having an ultrasound scan can now be an exciting part of their pregnancy. It allows them to see their baby prior to its birth with the knowledge that their pregnancy is being monitored for any possible problems. A prospective mother also knows that the midwife or healthcare worker has been trained to use the equipment properly. Additionally, this AI initiative meets the challenge of the SDGs as a plan of action for people.
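The baseline-deviation idea sketched earlier in this section – flagging a patient when a new reading drifts far from that patient's own history – can be illustrated with a minimal example. The vital-sign values, the z-score rule, and the 3-sigma threshold below are assumptions chosen for exposition, not a description of any deployed diagnostic system.

```python
# Minimal sketch of per-patient anomaly flagging: compare a new reading
# against the mean and spread of that patient's own historical readings.
# Data and the 3-sigma threshold are illustrative assumptions only.
from statistics import mean, stdev

def flag_deviation(history, new_reading, z_threshold=3.0):
    """Return True if new_reading deviates strongly from the patient's baseline."""
    baseline_mean = mean(history)
    baseline_sd = stdev(history)
    if baseline_sd == 0:
        return new_reading != baseline_mean
    z = abs(new_reading - baseline_mean) / baseline_sd
    return z > z_threshold

# Hypothetical resting heart-rate history (beats per minute) for one patient.
history = [62, 64, 61, 63, 65, 62, 64, 63]
print(flag_deviation(history, 64))   # False: within the patient's usual range
print(flag_deviation(history, 96))   # True: flag for further investigation
```

A real system would of course combine many signals and involve clinical review; the sketch only shows why a personal baseline, rather than a population average, makes small but unusual changes visible.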
5 Education

SDG #4 is quality education. Currently, there is very little, if any, doubt that universal global education has immense importance for the flourishing of humanity. Access to education and alleviation of poverty are closely related. In poverty-ridden rural areas, there are two
main obstacles (and many lesser ones, some of which are discussed briefly in this section) that significantly slow down the spread of education. The first is that education is expensive; in most countries, poor people do not have the financial means necessary to secure even basic education for their children. This is the problem that the UN will hopefully help to solve on a global scale. The other serious obstacle is the organization of the educational process, which currently still mostly mimics the factory model dating back to the Industrial Revolution. This model requires students (just as in the case of factory workers) to be physically present at the imposed time in the imposed place, with penalties built into the system for not meeting this requirement. In many rural areas, it is physically very difficult or prohibitively expensive (or both) to fulfill such requirements. However, this problem may be remedied sooner than expected due to the challenge posed to education systems worldwide by the COVID-19 pandemic. There was a need to look for solutions outside of the factory model. Technology, in particular AI, proved to be a very effective and useful educational tool in both urban and rural settings.
AI could bring education to children and adults, especially through the proliferation of mobile phones, reaching those who were previously unable to gain access to schools. Many people in these communities already use a basic mobile phone on a regular basis; hence, AI could be adapted to this readily available platform to alleviate poverty among the world's rural poor, with the necessary cooperation from that segment of the population. Educated children will probably revise their worldviews and value systems, which may result in generational gaps between children and their parents. This is a possibility not to be taken lightly because of the potentially very serious emotional distress it can cause members of both generations and the negative impact on family dynamics. The fear of this occurrence in rural areas is often (next to the need to keep home "the working hands" of children, instead of sending them to school) the source of parents' reluctance to support their children's education in excess of their own. Formal education can empower girls and young women with knowledge and newfound independence that breaks them free from dependence on the men in their lives. In addition, through education girls and young women can better their own lives and benefit their community and family through better health and delayed marriage and childbirth (Brownell 2020). However, one must not treat this issue lightly, since it may also have a negative impact on the lives of girls, especially in rural areas, for instance where it may become difficult to find a husband if males in the area are less educated.
In a traditional setting (the "factory model"), a teacher's knowledge is transferred to students by presentation or interactive activities. Students read the same textbook, share the same teacher, and learn from the same curriculum. Educational knowledge is transferred directly from one human to another (Goralski and Górniak-Kocikowska, Education in the Era of Artificial Intelligence: The will to listen as a new pedagogical challenge 2019). The quality of that education depends on the physical presence of the transferrer of knowledge and the listening ability and will
of the recipient (Weifeng 2019; Goralski and Górniak-Kocikowska, Education in the Era of Artificial Intelligence: The will to listen as a new pedagogical challenge 2019). AI has the power to change this age-old system of knowledge transfer. AI education could be customized to a child's learning capability and needs (Rouhiainen 2019). It can be made interactive with the student, dynamic, and visual to enhance the learning experience (Goralski and Górniak-Kocikowska, Education in the Era of Artificial Intelligence: The will to listen as a new pedagogical challenge 2019). It can be programmed to cater to the special needs of a child who is falling behind other students. AI-enhanced education for poverty alleviation could enable children to acquire skills and knowledge that would benefit their family and perhaps change their future and financial situation. However, for that result to occur, broadband and Internet access would have to be available to ideally all people in rural as well as urban areas.
India is an interesting case in point for our study of AI's impact on the education of the poor, as it is home to 430 million children between the ages of 0 and 18: the country has the largest population of children in the world (Kavishwar 2018). Almost 60% of students in rural areas lack basic reading skills; one survey found that nearly 50% of high school students cannot solve basic mathematics problems, and almost 50% drop out of rural schools by the age of 14 (Kavishwar 2018). AI could help by introducing interactive learning facilitated by digitized tools, such as smart-boards, LCD screens, and multimedia videos, to make the classroom interesting and engaging to students. Teachers could present material remotely in several locations by utilizing interactive digital platforms. These new tools and remote teaching opportunities could overcome the obstacles created by travel and transportation. They could help assuage the shortage of teachers in rural schools, as well as improve regular attendance in classes, thus reducing the rural school dropout rate. One Tamil Nadu-based woman, social entrepreneur K. Suriya Probha, decided to take on the mission of closing the AI education gap in rural India by teaching digital skills like coding and robotics to children (Bhatia 2019). She was inspired by Indian Prime Minister Narendra Modi's Digital India Campaign and by the Chinese-American AI researcher Fei-Fei Li, who believes that AI education can be a great enabler for school-age children. Probha saw it as her moral and social responsibility to take AI education to the economically disadvantaged population. She is working on a program that will enable a teacher who uses AI as a teaching tool to respond to student questions with real-time answers and even identify the emotions and gestures of a child during the interaction (Bhatia 2019). As in the case of healthcare, the government, through innovative partnerships with academia and industry, has spearheaded the use of AI systems in education and could improve the quality of education in rural villages in India.
COVID-19 has had a huge impact on the education system of India. Children across states, regions, caste, and gender have been affected (Impact of COVID-19 on School Education in India: What are the Budgetary Implications? 2020). Shutting down schools and shifting children to digital platforms has increased inequalities and pushed children out of the officially organized educational process due to the
existing digital divide (Impact of COVID-19 on School Education in India: What are the Budgetary Implications? 2020). This will have a long-lasting effect on the children of India, not just in education but also in children's healthcare and nutrition. The Indian government is exploring both short-term and long-term policy solutions to address some of the issues delineated by the pandemic (Impact of COVID-19 on School Education in India: What are the Budgetary Implications? 2020).
In China, the Ministry of Education announced an experimental pilot program establishing a remote synchronous link between poor rural areas and Beijing Foreign Studies University (Weifeng 2019). Students in remote impoverished areas could gain learning experience remotely through shared online resources and access to more qualified and experienced teachers in urban centers. Over the past few years, the Chinese government has invested in AI-enabled teaching and learning through platforms that have been significantly expanded. Tech companies, startups, and educational leaders have embraced the opportunity to overhaul the Chinese educational system. Currently, tens of millions of students in China use AI to learn through extracurricular tutoring programs, through digital learning platforms, or in their main classrooms (Hao 2019). It is the biggest experiment in AI-facilitated education in the world. Squirrel AI, which is at the forefront of the AI education revolution in China, uses master teachers – some of the best in China – to develop the school curriculum (Hao 2019). Education is disseminated via a laptop computer. The teacher monitors students via a real-time dashboard. One Hangzhou regional director states, "There are no sounds of teachers lecturing" (Hao 2019); hence, there is silence in the physical classroom in which the students receive their AI instruction. The outcome of the educational experiment is yet to be assessed, but it has piqued the interest of Silicon Valley, the Chan Zuckerberg Initiative, the Bill & Melinda Gates Foundation, and John Couch, Apple's vice president of education.
These innovations in educational development could circumvent some of the most endemic obstacles to education, especially in countries with rural poverty – lack of enough teachers, lack of enough money to fund education, and lack of motivation for students to attend school. In China, it is believed that three things have pushed AI education forward: tax breaks and other incentives for AI ventures to improve student learning, teacher training and school management, and the high level of academic competition (Hao 2019). Additionally, Chinese entrepreneurs are utilizing the massive amount of data that they gather from China's immense population to train and refine their algorithms, both to heighten the educational experience of students and as a means of creating cutting-edge products for enhancing education. There may be a time lapse between those who acquire enhanced AI educational opportunities first and those at the end of the receiving line, but these new advances in AI education are currently the most innovative improvements in education for global rural communities in a long history characterized by lack of access to primary education and developmental exclusion from education for some students.
Preliminary observations of Yang (2020) showed that no aspect of higher education in China was untouched after the COVID-caused disruption. China's attention
focused on the effectiveness of e-learning and global student mobility (Yang 2020). Under the dictates of the Ministry of Education (MOE), brick-and-mortar schools were closed, and existing virtual learning platforms were enhanced in conjunction with seven of the largest EdTech companies (Ning and Corcoran 2020). These platforms allowed students to tap into streaming courses from their mobile phone or computer (Ning and Corcoran 2020). In remote areas of China, where access to bandwidth and computers was uneven, educators were prohibited from introducing new topics so that children without access to adequate technology would not become even more disadvantaged (Ning and Corcoran 2020). A further study found that the main impetus for all of these initiatives was the focus on epidemic prevention and control and on the safety and health of teachers and students (Xue et al. 2021). The central government worked in concert with local governments to suspend classes according to the specific situations while moving education online in an orderly manner (Xue et al. 2021). The Department of Education strengthened the telecommunication networks and provided hardware, software, and technical support to ensure a smooth transition (Xue et al. 2021).
Africa is behind the rest of the world when it comes to embracing AI, innovation, and machine learning in higher education (Fomunyam 2020). Some of the issues that AI could combat are overpopulated classrooms, heavy teaching loads, a lack of research-experienced faculty, and the challenge of embracing information technology infrastructure (Fomunyam 2020). Better knowledge of digital technologies among African intellectuals could speed up the process of improving education in impoverished areas. It could make the process of learning dynamic and offer options like customized and personalized learning to students. Unfortunately, most African governments are not interested in the research activities of academia. Hence, since investment in research is not forthcoming, the path to new innovations that could have been gleaned from expanded research streams is not captured within the knowledge base of the university or the continent. The value placed on research may need to improve for African scholars to take advantage of available opportunities to solve the problems that plague the African continent and academia itself (Mafenya 2014). Fomunyam (2020) states that most African scholars have no interest in generating new knowledge, and since most of the materials used for teaching in higher education are written in Western languages, the absorption of new material is limited.
The impact of COVID-19 on education in Africa is more similar to the disruptions in India than to the systematic control of the situation in China. Research by Human Rights Watch found that school closures exacerbated inequalities that had already existed in Africa (Impact of Covid-19 on Children's Education in Africa 2020). Children who had already been excluded from a quality education were most affected, while many children across the continent received no education after schools closed in March of 2020. Another study found that higher education populations were most affected due to the closure of higher education institutions across the continent (Koninckx et al. 2021). Universities in Africa were not able to quickly move classes online; therefore, campuses were closed and teaching was suspended (Koninckx et al. 2021).
AI is fast becoming one of the most important tools for both urban and rural education in the twenty-first century. It has already made its way into the curriculum of many educational programs worldwide, and it has begun to change the education of the world’s rural poor. In most instances the use of AI is welcomed and accepted in schools, because there are far too few teachers for the number of school-age students in developing countries and regions such as China, India, and much of Africa. China has made AI and machine learning technologies central to its strategy for the future of the country. It seeks to become the number one artificial intelligence hub worldwide by the year 2030 by combining the forces of government, industry, academia, and technology giants. India, while having the backing of the government, has not formulated strong ties with industry, academia, and its technology giants as is the case with China. Africa has not made artificial intelligence and machine learning essential in its educational systems. It will ultimately pay the price for that decision by forfeiting the advances in knowledge that could be gleaned through academic research and innovative teaching technologies.
6 Conclusion

The problem of poverty is complex. Poverty has been studied through the ages, but it has never been alleviated. Unfortunately, the COVID-19 pandemic of 2020–2021 increased poverty through economic downturns worldwide, which will make it difficult for the world to meet SDG #1, no poverty, by 2030. Philosophers have set forth theories, historians have followed the path of poverty, and economists have tracked the various stages and thought patterns of societies on the topic, but poverty, like many of the challenges and goals set forth in the Global Compact for sustainable development, continues to threaten to thwart humanity. AI can approach the challenges of poverty through a new lens, perhaps a wider lens than humanity has used in the past, and with fewer cultural and societal biases. AI is making small inroads in the fields of healthcare and education, delivering healthcare to people in rural areas of developing and developed countries through mobile telephones and AI-enabled equipment in packages small enough to be carried into areas not reachable in the past. AI can communicate with people one on one through an inexpensive mobile phone, take needed medical diagnostic equipment to rural areas, allow academics to interact with students remotely, and bring dynamic ideas and new innovations to additional students. One of the most substantial outcomes of this research is the realization that government, industry, academia, and society must work in tandem to reach the sustainable development goals. No one entity can overcome the challenges on its own. All should work together on policy regarding the global alleviation of poverty. Governments and industry need to be selfless and provide for the people within society. They need to create safety nets for all people. Industry is usually ahead of government in innovation and technology. It can bring forth breakthroughs that will assist people in ways that government cannot. Academia can share the
knowledge of government and industry to create the leaders of the future with a sustainability mindset. Society needs to accept the new innovations that AI can offer: the better health options, the data that can be collected to combat the diseases that have raged throughout history, and the peace of mind that comes from knowing that a healthy child will be delivered into the world. Technology, especially AI, should be used for the good of disadvantaged people. The last of the SDGs mentioned at the beginning of this chapter is #10, reduced inequalities. The insight and future of AI, its unbiased approach to the dissemination of knowledge, and its ability to go beyond humanity to reach the objectives set forth in the SDGs will reduce inequalities, but humanity must also play its part in bringing these innovative new technologies to all areas of the world and all people. Our observations from the examples in this chapter serve to highlight AI-enabled programs that are currently being implemented in some of the countries with the highest percentage of people living in poverty. The study will create a foundation for future research to define how the design and application of AI could affect world poverty, help alleviate it, and shape the future of the sustainable development goals. Aristotle originated the concept of flourishing in ethics; people today can aim at the flourishing of humanity. COVID-19 could be viewed not just as a major disruption in the progress of humanity, but also as a guide to filling the existing gaps in infrastructure worldwide.
References

Artificial Intelligence (AI). June 3, 2020. https://www.ibm.com/cloud/learn/what-is-artificial-intelligence.
Bhatia, Richa. 2019. This Social Entrepreneur Is Closing the AI Education Gap By Reaching Out to Rural India. analyticsindiamag.com. January 24. https://analyticsindiamag.com/this-social-entrepreneur-is-closing-the-ai-education-gap-by-reaching-out-to-rural-india/.
Boserup, Ester. 1970. The Conditions of Agricultural Growth – The Economics of Agrarian Change Under Population Pressure. London: Routledge.
Brownell, Ginanne. 2020. Girls Have Greater Access to Education Than Ever. October 9. https://foreignpolicy.com/2020/10/09/girls-women-education-equality-unesco-global-education-monitoring-report/.
Chatzky, Andrew, and James McBride. 2020. China’s Massive Belt and Road Initiative. January 28. https://www.cfr.org/backgrounder/chinas-massive-belt-and-road-initiative.
Dai, Sarah. 2018. A Look at How China Is Using Technology to Improve Rural Access to Quality Health Care. South China Morning Post. March 6. www.scmp.com/tech/article/2135880/look-how-china-using-technology-improve-rural-access-quality-health-care.
Fomunyam, Kehdinga George. 2020. Theorising Machine Learning as an Alternative Pathway for Higher Education in Africa. International Journal of Education and Practice 8: 268–277.
Goralski, Margaret A., and Krystyna Górniak-Kocikowska. 2019. Education in the Era of Artificial Intelligence: The Will to Listen as a New Pedagogical Challenge. Ethos 3 (125): 152–198.
Goralski, Margaret A., and Tay Keong Tan. 2020. Artificial Intelligence and Sustainable Development. The International Journal of Management Education 18: 1–13.
Guo, Jonathan, and Bin Li. 2018. The Application of Medical Artificial Intelligence Technology in Rural Areas of Developing Countries. Health Equity 2: 174.
Hao, Karen. 2019. China Has Started a Grand Experiment in AI Education. It Could Reshape How the World Learns. MIT Technology Review. August 2. https://www.technologyreview.com/2019/08/02/131198/china-squirrel-has-started-a-grand-experiment-in-ai-education-it-could-reshape-how-the/.
Helping Half a Million Pregnant Women in Nigeria Get Better Antenatal Care with a Portable Ultrasound Device. May 22, 2019. https://www.gehealthcare.com/article/helping-half-a-million-pregnant-women-in-nigeria-get-better-antenatal-care-with-a-portable-ultrasound-device/?utm_source=twitter.com&utm_medium=GESocial&utm_content=vscan+access+nigeria&utm_campaign=WHA72.
Ho, Andy. 2018. AI Can Solve China’s Doctor Shortage. Here’s How. The World Economic Forum. September 17. https://www.weforum.org/agenda/2018/09/ai-can-solve-china-s-doctor-shortage-here-s-how/#:~:text=China%20has%20only%201.8%20doctors%20per%201%2C000%20citizens.
Impact of Covid-19 on Children’s Education in Africa. August 26, 2020. https://www.hrw.org/news/2020/08/26/impact-covid-19-childrens-education-africa.
Impact of COVID-19 on School Education in India: What are the Budgetary Implications? 2020. https://www.cbgaindia.org/policy-brief/impact-covid-19-school-education-india-budgetary-implications/.
Kavishwar, Ajay. 2018. Digital Education Among Students in Rural Areas. Forbes India. April 2. https://www.forbesindia.com/blog/education/digital-education-among-students-in-rural-areas/.
Kemp, Simon. 2020. Digital 2020 Malawi. February 18. https://datareportal.com/reports/digital-2020-malawi.
Kleinman, Zoe. 2020. What Is Tencent? BBC News. https://www.bbc.com/news/technology-53696743.
Koninckx, Peter, Cunegonde Fatondji, and Joel Burgos. 2021. COVID-19 Impact on Higher Education in Africa. May 19. https://oecd-development-matters.org/2021/05/19/covid-19-impact-on-higher-education-in-africa/.
Kopparapu, Kavya, and Neeyanth Kopparapu. 2020. What About AI and Health Excites You the Most? Med.MD.com.
Lawrence, Stacy. 2016. GE Healthcare Hand-Held Ultrasound in Pilot NHS Test, $20M Nigerian Health Initiative. Fierce Biotech, May 17.
Lung, Nicky. 2018. China Launches National Association to Speed Up Integration of AI with Healthcare. https://opengovasia.com/china-launches-national-association-tospeed-up-integration-of-ai-with-healthcare/.
Mafenya, P.N. 2014. Challenges Faced by Higher Education Institutions in Research Skills Development: A South African Open and Distance Learning Case Study. Mediterranean Journal of Social Sciences 5 (4): 436–442.
Makwero, Martha T. 2018. Delivery of Primary Health Care in Malawi. June 21. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6018651/.
Malthus, Thomas Robert. 1803. An Essay on the Principle of Population. New Haven: Yale University Press.
Marron, Orla, Gareth Thomas, Jordana L. Furdon, Dagmar Mayer Bailey, Paul O. Grossman, Frederic Lohr, Andy D. Gibson, et al. 2020. Factors Associated with Mobile Phone Ownership and Potential Use for Rabies Vaccination Campaigns in Southern Malawi. Infectious Diseases of Poverty June.
McCarthy, John. 2004. What Is Artificial Intelligence? November 24. https://homes.di.unimi.it/borghese/Teaching/AdvancedIntelligentSystems/Old/IntelligentSystems_2008_2009/Old/IntelligentSystems_2005_2006/Documents/Symbolic/04_McCarthy_whatisai.pdf.
Ning, Annie, and Betsy Corcoran. 2020. How China’s Schools Are Getting Through COVID-19. April 20. https://www.edsurge.com/news/2020-04-20-how-china-s-schools-are-getting-through-covid-19.
Nursing and Midwifery. January 9, 2020. https://www.who.int/news-room/fact-sheets/detail/nursing-and-midwifery.
Olaronke, Iroju, and Ojerinde Oluwaseun. 2016. An Ontology Based Remote Patient Monitoring Framework for Nigerian Healthcare System. International Journal of Modern Education and Computer Science 8: 17–24.
Oyaro, Kwamboka. 2016–2017. Taking Health Services to Remote Areas. Africa Renewal, December–March: 22–23.
Physicians (per 1000 people) – Nigeria. 2018. https://data.worldbank.org/indicator/SH.MED.PHYS.ZS?locations=NG.
Physicians (per 1000 People). 2020. https://data.worldbank.org/indicator/SH.MED.PHYS.ZS?end=2018&start=1960&view=chart.
Principles and Practice in Measuring Global Poverty. January 13, 2016. https://www.worldbank.org/en/news/feature/2016/01/13/principles-and-practice-in-measuring-global-poverty.
Rao, Pavithra, and Dona Joseph. 2016–2017. Portable Ultrasound Device to Tackle Child Mortality. Africa Renewal, December–March: 38.
Ravallion, Martin. 2016. The Economics of Poverty: History, Measurement, and Policy. Oxford: Oxford University Press.
Rouhiainen, Lasse. 2019. How AI and Data Could Personalize Higher Education. October 14. https://hbr.org/2019/10/how-ai-and-data-could-personalize-higher-education.
Rural Population – Nigeria. 2019. https://data.worldbank.org/indicator/SP.RUR.TOTL?locations=NG.
Rural Population China. 2020. https://data.worldbank.org/indicator/SP.RUR.TOTL.ZS?locations=CN.
Rural Population Malawi. 2020. https://data.worldbank.org/indicator/SP.RUR.TOTL.ZS.
Tafesse, Wiktoria, and Martin Chalkley. 2021. Faith-Based Provision of Sexual and Reproductive Healthcare in Malawi. Social Science & Medicine August.
Tafirenyika, Masimba. 2016–2017. It’s Time to Rethink Medical Insurance. Africa Renewal, December–March: 6–7.
Transforming Our World: The 2030 Agenda for Sustainable Development. October 21, 2015. https://www.un.org/ga/search/view_doc.asp?symbol=A/RES/70/1&Lang=E.
United Nations Sustainable Development Goals Report 2021. 2021. New York: United Nations.
Wee, Sui-Lee. 2018. China’s Health Care Crisis: Lines Before Dawn, Violence and ‘No Trust’. September 30. https://www.nytimes.com/2018/09/03/business/china-health-care-doctors.html.
Weifeng, Li. 2019. The Future Has Come: AI in Education and Poverty Alleviation. China.org.cn. January 5. http://www.china.org.cn/opinion/2019-01/08/content_74352493.htm.
What Is Artificial Intelligence? n.d. https://aws.amazon.com/machine-learning/what-is-ai/.
Xue, Eryong, Jian Li, Tingzhou Li, and Weiwei Shang. 2021. China’s Education Response to COVID-19: A Perspective of Policy Analysis. Educational Philosophy and Theory 53: 881–893.
Yang, Rui. 2020. China’s Higher Education During the COVID-19 Pandemic: Some Preliminary Observations. Higher Education Research & Development 39: 1317–1321.
Yaya, Sanni, Ghose Bishwajit, and Vaibhav Shah. 2016. Wealth, Education and Urban-Rural Inequality and Maternal Healthcare Service Usage in Malawi. BMJ Global Health 1 (2): 1–12. https://doi.org/10.1136/bmjgh-2016-000085.
Missing Circles: A Dignitarian Approach to Doughnut Economics Through AI Applications Kostina Prifti
Abstract This contribution aims at providing a more concrete and accurate understanding of Doughnut economics, its model, and its ideas. In doing so, it provides a comprehensive description of the Doughnut and its connection with the Sustainable Development Goals. Then, it inquires into the philosophical background of Doughnut economics, elucidating its existential rationale, which relies on human dignity. Further, examples of four AI applications are used to showcase how the Doughnut model would address their use and the challenges that arise from it. From this testing exercise transpires the understanding that another limitation is required in the Doughnut model, pursuant to its philosophical background. Therefore, besides economic activities that may breach the ecological ceiling or the social foundation, activities that infringe human dignity, without breaching any of the boundaries, are also incompatible with the Doughnut model. This complementary proposal is conceptually represented within the model of Doughnut economics.

Keywords Sustainable development · Doughnut economics · Dignity · Ethics · Artificial intelligence
1 Introduction

The nineteenth century saw the emergence of sustainable development policy from a union of economics with environmental sustainability. This led to slow but steady initiatives that aimed at incorporating sustainability criteria into economic development (Spindler 2013). Rooted in all cultures (Schreiber 2004), sustainable development made its way into policy first through the German forestry industry (Schulze and Schretzmann 2006, 68) and then through the United Nations’ (UN) Environmental
Policy, which has produced today’s Sustainable Development Goals (SDGs). A lake in North America named Manchau gagog changau gagog chaugo gagog amaug, which means “We fish on our side, you fish on your side and nobody fishes in the middle”, perhaps succinctly evidences the old origins of sustainable development as a concept.

Doughnut economics is a recent idea that aims at providing a model for sustainable development. The Doughnut model can be perceived as a conceptual representation of a seemingly straightforward idea: the outcome of our activities must be subject to two constraints, ensuring a social foundation of human wellbeing and protecting the ecological ceiling of planetary boundaries (Raworth 2017b). So long as our activities do not fall short of the social foundation or over the ecological ceiling, the model suggests that we are operating within the safe and just space for humanity. These two constraints are drawn based on prior research and widely accepted social objectives. The threshold of the social foundation is comprised of minimum needs that any society must meet for all humans. The needs included in the social foundation are visible in Fig. 1 and are drawn from the SDGs as developed in 2015 by the UN (UNDP 2022). The ecological ceiling is drawn based on the research that identifies the planetary boundaries – originally set out by Rockström et al. (2009) and later updated by Steffen et al. (2015) – the crossing of which is expected to lead to irreparable damage on the planetary scale. As shown in Fig. 1, these planetary boundaries jointly form the ecological ceiling.

Fig. 1 Doughnut economics illustration. (Raworth 2017b)

The Doughnut Economics Action Lab (DEAL) is where the ideas and model of the Doughnut are further explored and operationalised. Cities like Amsterdam have taken proactive steps towards the application of Doughnut economics (Amsterdam 2022). However, despite the steps taken towards operationalisation and specification of how the model would work in practice, the Doughnut and its ideas bear a metaphysical nature, insofar as they are too broad and hermeneutic to qualify or (to use a Popperian term) be demarcated as a scientific theory. The ideas behind Doughnut economics are framed in opposition to the prevailing neoclassical account of economics based on the homo economicus and mechanical equilibrium, offering a claim to paradigm-shifting concepts like distributive-by-design and regenerative-by-design. However, these ideas are not empirically analytical and often raise more questions than they answer (Schokkaert 2019). It is, therefore, necessary to further elucidate the meaning of Doughnut economics and its model, particularly its philosophical background. This elucidation is not only useful in and of itself, but especially in order to enable further empirical analysis and falsification.

If one inquires into its philosophical background, Doughnut economics refers to the SDGs and human dignity as its existential and justificatory rationale. The SDGs, in turn, also refer to human dignity as a basis for their development (May and Daly 2020). However, the concept of human dignity takes different meanings throughout the history of philosophy (Lebech 2009), so the reference to human dignity by Doughnut economics and the SDGs begs the question: what does human dignity mean in this context? Hence, in order to elucidate the philosophical background of Doughnut economics and the SDGs, it is necessary to elucidate and
operationalise the meaning of human dignity for purposes useful to Doughnut economics. This is one of the aims of this contribution. Moreover, in line with the pragmatist maxim that concepts are properly understood when tested (Peirce and Eisele 1985, 266), the model of Doughnut economics is tested through four examples of AI applications: AI applications that may violate the social foundation, AI applications that may violate the ecological ceiling, AI applications that support one threshold but violate the other, and AI applications that support both thresholds but may violate human dignity. Accordingly, the analysis shows that a third constraint is required within the Doughnut model, pertaining to human dignity. Section 2 describes Doughnut economics, its ideas, and its model in more detail, explicating its connection with the SDGs. In Sect. 3 the chapter explores various conceptualisations of human dignity throughout different philosophical eras, clarifying which “version” of human dignity fits the requirements of Doughnut
economics and SDGs. Section 4 offers an analysis of the Doughnut model through four examples of AI applications, whereas Sect. 5 concludes.
2 The Doughnut Model

If a society manages not to fall under the social foundation or over the ecological boundaries, it is operating within a safe and just space for humanity – so the Doughnut professes. Figure 1, in the introductory section, gives a picture of the Doughnut model. While a picture is generally worth a thousand words, in this case it speaks precisely of seven ways to think like a twenty-first-century economist. In what follows, the ideas and the model of the Doughnut are presented descriptively. Then, the relevance of the Doughnut model for the SDGs and their operationalisation is discussed. This section traces these seven ways as a structured method to describe the Doughnut model and its ideas. The reader will notice that the essence of each of the seven ways is a critique of neoclassical economics, which this section is bound to follow descriptively.

(i) Instead of the GDP: The first shift in thinking like a twenty-first-century economist is to question the use of GDP as a measure of economic health. Instead, progress ought to be measured by whether we are operating inside the Doughnut, i.e., whether the social foundation and the ecological boundaries are respected. In this sense, the safe and just space for humanity, the space between the two concentric circles, is the measure of success for the economy.

(ii) Instead of (only) the market: Economics is typically concerned with the role of the market and its close allies: business, finance, and trade. However, the Doughnut suggests that there are other relevant, often neglected, actors, such as the state, the household, society, the commons, and the environment. The example of a mother caring for a child, a type of caring work that is unaccounted for by the market, shows that not all economic relations are handled within the market. That is why the Doughnut calls for the inclusion of other actors and for an “embedded economy”.

(iii) Instead of the homo economicus: The economic man, having complete rationality, perfect information, and fixed preferences, and being guided by narrow self-interest, is the abstracted image of humanity that guides today’s prevailing economic models. However, many limitations and critiques exist for this abstracted image, especially in the field of behavioural economics (Simon 1986). The Doughnut suggests that homo economicus must reflect the nature of humans, which is social, interdependent (Veblen 1898), approximating, fluid in values, and dependent upon the living world (Gigerenzer 2010).

(iv) Instead of economy-as-machine: Most of the economic models used today are based on a mechanical equilibrium, the most prominent example of which is the supply and demand diagram. The mechanical equilibrium is a simplification of the many variables that exist in reality. Simplification is necessary, lest
one be unable to make any predictions. On the other hand, if one simplifies “too much”, thus removing uncertainties, one risks making erroneous predictions. This worry is not novel in the Doughnut; in fact, it has been explicated by many economists. The Doughnut suggests that the insufficient, or inadequate, models based on mechanical equilibrium ought to be replaced, through a shift in thinking, by a focus on systems and their complex dynamics. Thinking in reinforcing and balancing feedback loops, the Doughnut calls for an “economy-as-organism”, instead of an “economy-as-machine”.

(v) Instead of poverty-as-feature: Pareto’s claim that redistribution is counterproductive and that the worse off can be helped only by expanding the economy, along with Kuznets’ curve, which claims that rising inequality is inevitable for economic success, have been the guiding principles of economics, especially for development economists. The Doughnut firstly highlights that these claims are refuted by economic analyses, which have shown that inequality undercuts, rather than boosts, GDP growth (International Monetary Fund 2014). Further, the Doughnut suggests that instead of expecting economic growth to reduce inequality, we ought to create an economy that is distributive by design, structuring the economy as a distributed network.

(vi) Instead of growth-as-cleaner: An inverted U-shaped curve between pollution and GDP represents the discovered pattern that pollution first rises and then falls as GDP increases. This pattern formulates the hypothesis that growth will clean up after itself (Grossman and Krueger 1995). This hypothesis, supported by data on water and air pollution but not on biodiversity and wider ecological impact, has opened the way for macroeconomic models that are typically degenerative (the produced material becomes waste after consumption). The Doughnut counters this approach by promoting a paradigmatic shift towards an economy of regenerative design. Described in a few words, an economy based on regenerative design is cyclical, minimising lost matter and heat and focusing on renewable materials.

(vii) Instead of the addiction to growth: In order to fulfil human needs and end deprivation, poverty, and hunger, the economy must grow. This is important to note: the Doughnut does not object to growth and its benefits. On the other hand, it highlights that growth alone cannot solve our problems, especially the ecological ones. Growth is neither intrinsically good nor intrinsically bad – that is why we ought to be agnostic about it. By agnostic, the Doughnut means an economy that measures its success based on human prosperity, regardless of whether GDP is increasing.

Operating within the Doughnut requires a conceptual shift, in accordance with these seven ways. The reasoning and justification behind these ideas are sound; however, they often raise more questions than they answer. For instance, how does one measure human prosperity (Schokkaert 2019)? Many of the concepts comprising Doughnut economics, like embedded economy, regenerative design, distributed networks, and economy-as-organism, bear a metaphysical nature because they are too
broad, sometimes undefined, and (only) hermeneutically refutable. This is a fundamental shortcoming that Doughnut economics must overcome. It is relevant, in this regard, to understand whether the connection between Doughnut economics and the SDGs bears any fruit for Doughnut economics. This relationship is comprised of at least two dimensions. Firstly, the social foundation of the Doughnut, which includes the human needs (food, health, education, income and work, water and sanitation, energy, networks, housing, gender equality, social equity, political voice, and peace and justice), is drawn from the work of the United Nations on the SDGs, as Raworth (2017a) also stresses. In fact, SDGs 1–10 and 16 correspond to the elements of the social foundation that the Doughnut promotes. Besides the SDGs that fit within the social foundation, SDGs 11–15 fit with the ecological ceiling of planetary boundaries, whereas SDG 17, partnerships for the goals, can be placed as an intrinsic part of the Doughnut itself. Secondly, the Doughnut can be perceived as a conceptual representation of the aims behind the SDGs, balancing social, economic, and environmental sustainability. In sum, the connection between Doughnut economics and the SDGs is strong: the SDGs fill the semantics of some concepts within the Doughnut, and the Doughnut offers a conceptual frame and a claim to operationalisation for the SDGs. However, we know that the SDGs, too, face conceptual and structural challenges (May and Daly 2020), similar to those of the Doughnut. Such an understanding leads to two conclusions. Firstly, the SDGs do not serve any elucidating role for the shortcomings of Doughnut economics. Secondly, the conceptual elucidation which this chapter aims to perform for Doughnut economics also serves to clarify the philosophical background of the SDGs, since they, too, rely on human dignity for their existential rationale. So far, this section has described the ideas behind the Doughnut, tracing the required conceptual shift that a twenty-first-century economist should adopt. These ideas are presented as a critique of some elements of neoclassical economics, although they are shown to bear a metaphysical nature. Lastly, the connection between the Doughnut and the SDGs has been accounted for. The next section questions the philosophical background of the Doughnut, in an attempt to clarify its existential rationale.
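The grouping just described – SDGs 1–10 and 16 alongside the social foundation, SDGs 11–15 alongside the ecological ceiling, and SDG 17 as intrinsic to the Doughnut itself – can be restated compactly as a simple data structure. The sketch below (in Python) merely re-expresses the mapping given in the text; it is offered as an illustrative summary, not as an official taxonomy from Raworth or the UN.

```python
# Illustrative restatement of the SDG-to-Doughnut mapping described above.
SDG_TO_DOUGHNUT = {
    **{goal: "social foundation" for goal in list(range(1, 11)) + [16]},
    **{goal: "ecological ceiling" for goal in range(11, 16)},
    17: "intrinsic to the Doughnut (partnerships for the goals)",
}

for goal in sorted(SDG_TO_DOUGHNUT):
    print(f"SDG {goal}: {SDG_TO_DOUGHNUT[goal]}")
```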
3 Dignity, But Which One

The aim of this section is to understand the philosophical background of Doughnut economics. The theory behind the Doughnut often refers to human dignity as its existential rationale, but this reference begs the question: what does human dignity mean? This section shows that there are a few answers to this question, but only one satisfies the conditions and ideas of the Doughnut model. That is the patient-oriented, ontocentric conceptualisation of human dignity that information ethics offers.
3.1 Traditional Conceptualisations of Human Dignity

The semantics of human dignity have been subject to change across various historical and philosophical eras. We have had different ideas about our value as human beings (Lebech 2009). In antiquity, the concept of human dignity was used to explain the superiority of humans in comparison to the animal world, based on human abilities. Humans have dignity because, unlike animals, they have the ability to be virtuous (Crisp 2014, 23–37), or because, unlike animals and like gods, they have the ability to reason and manage their impulses (Cicero and Laser 2014). Being justified by a superiority of some sort, virtue or reason, dignity was not intrinsic to all humans equally, but only to those deserving it. Aristotle did not consider all humans to have dignity (e.g. slaves and women), and Cicero believed that some ranking of dignity and respect should exist, where the more superior ones also have “more dignity” (Cicero and Laser 2014).

Another influential approach that justifies and fills the semantics of human dignity is the religious one. Typically for monotheistic religions, humans have dignity because they are created by God. The religious account seemingly opposes the version of antiquity in terms of differences between humans because, since we are all created by God, humans are equal and deserve the same amount of dignity, provided they are theists. However, it still relies on a superiority claim, effectively because humans are the vicegerents of God on earth. This is evident in Islamic teaching (Quran; Mozaffari 2011) and in Christian theology (Aquinas 1486).

In the Enlightenment Age, the basis for human dignity was reason. Some scholars refer to this as the logo-centric approach (Lebech 2004), precisely because of the importance of human rationality as a justification for the intrinsic value of humans. The focus on rationality is characteristic of the Enlightenment Age, and it resonates with Kantian philosophy and deontological ethics, despite Kant’s valuable critique of the limits of human reasoning. Kant is often cited claiming that humanity itself is dignity (Kant and Klenner 1988, 38; Lebech 2004), which he bases on the justification that the ability of humans to reason and to self-legislate moral laws through their autonomy is what dignifies the nature of being human. In this sense, human dignity is based on autonomy, which in turn is based on rationality. As such, the superiority claim persists, since dignity is perceived as logically subsequent to rationality, an ability that distinguishes humans from other beings.

The modern conceptualisation of human dignity that was developed in the Enlightenment Age was challenged in post-modern philosophy, according to which human dignity serves an enabling purpose for a democratic society (Lebech 2009). As such, human dignity adopts a relational, or functional, nature. Based on dialectical reasoning and opposing the notion of an objectively true point of view, post-modernism values human dignity as a function of social relations, which in turn enable the functioning of a democratic society (Lebech 2009). It must be noted that the difference between modernism and post-modernism in conceptualising human dignity is highly disputed (Habermas and Ben-Habib 1981), since the post-modernist account is also based on rationality, albeit focused on dialectical reason.
These accounts of the semantics of human dignity can be understood as the traditional approaches. They have their differences, but they agree with each other in that humans have dignity because they are superior in a certain way – compared to animals, birds, rivers, and robots. There is a shift in philosophical and ethical thinking that challenges the traditional conceptualisation and which has an impact on how we understand human dignity. This is explained in the next sub-section.
3.2 A Shift in Ethics: From Agents to Patients

The traditional conceptualisations of human dignity rely on a kind of special human ability, grounded in virtue, likeness to God, or reason. Traces of this understanding can be found in so-called traditional macro ethics, such as virtue ethics, deontology, and consequentialism. Essentially, if one questions the morality of an action according to these ethical frameworks, one must ask whether the agent took the morally right action. That means traditional macro ethics have an agent-oriented approach, which fits with the virtuous, rational, God-like conceptualisations of the human agent. This approach is challenged by (relatively) recent developments in ethics. Bioethics (Beauchamp and Childress 2019) and feminist and care ethics (Tronto 1993), among others, have shifted the focus of ethical judgement from the agent to the patient – the receiver of the action. In essence, instead of asking what the morally right thing for the agent to do is, these patient-oriented approaches to ethics ask what the morally right action is for the patient to receive. While the actions of the agent are still relevant, the focus is on the wellbeing of the patient. Therefore, these approaches challenge the superiority claim based on special abilities found in traditional macro ethics, since, here, humans are perceived as in need of care, rather than armoured with some divine or natural ability that justifies their dignity – fragile as a plant, not precious as a jewel, as Nussbaum (2001) would say.

Notwithstanding the change of focus from agent to patient, the shift towards an anthropo-eccentric conceptualisation of human dignity is not yet complete, since even in bioethics, feminist ethics, and care ethics, humans as living things are still at the centre of the ethical universe. Information ethics (Floridi 2013) joins these patient-oriented approaches, offering some novelties. Aligned with bioethics, feminist ethics, and care ethics, the orientation of information ethics is not focused on the agent, but on the patient. However, information ethics further challenges the biocentrism of morality with an ontocentric version. The infosphere, comprised of resources, targets, and products of information, is ontologically informational, making information the centre of moral claims. As a result, any informational entity (e.g., a tree) has a moral claim to fulfil the purpose of its existence, albeit an overridable one. In this ontocentric account, humans are but one of the informational entities and agents that impact the infosphere. Dignity is thus perceived similarly to that of other informational entities: a prerequisite that enables humans, as informational entities, to flourish, to improve and enrich their existence. Along with humans, also rivers, trees, animals, birds, and
robots have a claim to fulfil the purpose of their existence and have, as such, dignity (again, overridable). This new account presented by information ethics offers a truly patient-oriented and anthropo-eccentric approach to ethics and dignity (Floridi 2013). The next part explains why this is the conceptualisation of human dignity that fits with the ideas behind the Doughnut.
3.3 The Doughnut’s Allegiance

The Doughnut’s ideas cannot be based on traditional macro ethics, because the humans of the Doughnut model are not perceived as having special abilities that make them worthy of having dignity. They are not presented as virtuous, God-like, rational, or in any way supreme. They are instead presented as agents whose needs must be fulfilled and for whom the economy must care. This perception of humanity draws the Doughnut away from traditional conceptualisations of the virtuous, God-like, or rational human agent, who has dignity because she is special, due to her abilities. As a result, the Doughnut is aligned with a patient-oriented approach in ethics. In Raworth (2017b, 61) there are four ethical principles that a twenty-first-century economist must consider: (i) act in service to human prosperity, (ii) respect autonomy, (iii) be prudential in policymaking in order to minimise harm, and (iv) work with humility. These principles resemble the four well-known principles of bioethics – beneficence, autonomy, non-maleficence, and justice, respectively (Beauchamp and Childress 2019). In fact, the first three principles are almost identical; hence, they are substantially patient-oriented approaches. The fourth principle, working with humility, comes closer to an agent-oriented approach, aligned with virtue ethics or deontological ethics. Nonetheless, the agent is portrayed as fragile, not as virtuous or rational, since working with humility relies on accepting and explicating our limitations as humans.

This analysis brings the ethical background of the Doughnut closer to the patient-oriented approaches of bioethics and feminist care ethics. There is, however, a substantial misalignment in the fact that neither of these ethical frameworks is able to place the environment as a patient, because they are morally anthropocentric or biocentric, despite being patient-oriented. In simpler words, the receiver of the moral action, the patient, is always “a living thing” according to bioethics, feminist ethics, and care ethics. Information ethics, as a kind of environmental ethics, offers an ethical shelter for Doughnut economics, considering the above. Since information ethics perceives a universe (infosphere) that is ontologically informational, the receiver of the moral action is information itself. As a result, all informational entities – humans, trees, rivers, and robots – are included as potential patients. Such a conceptualisation enables the ethical claims, which the Doughnut advances through economic concepts, that aim at protecting both humans and the environment. Clarifying the philosophical and ethical background of the Doughnut addresses the metaphysical nature of the ideas behind Doughnut economics, making the
concept more accurate and facilitating its analysis and operationalisation. The following section offers an analytical perspective on the model of Doughnut economics, through the use of examples of AI applications.
4 The AI and the Doughnut

The purpose of this section is to understand, by way of examples, how the Doughnut, its ideas, concepts, and model would approach and deal with particular activities. Examples of AI are used because of their relevance and threat to both boundaries of the Doughnut model. A methodological challenge arises here, particularly due to the fact that the Doughnut does not offer concrete models that can be empirically tested, but rather suggests a few ways that facilitate a paradigmatic conceptual shift in thinking about economics. The metaphysical nature of the Doughnut imposes intrinsic constraints on the type of analysis one can use to test it. Hence, this section is based on hermeneutical analysis. However, the purpose of the previous section was to construct a more accurate and testable conceptualisation of the ideas behind the Doughnut, which in turn offers this analysis a claim to accuracy. A second methodological challenge relates to the fact that the Doughnut, as it is constructed and presented, is not meant to be used for determining the validity of individual economic activities, but rather of the economy itself. Any attempt to determine how individual economic activities would interact with the Doughnut is bound to an interpretative approach.
4.1 Threatening the Boundaries

The essence of the Doughnut is the two concentric rings, which represent two boundaries: the social foundation and the ecological ceiling. Therefore, an activity that threatens even one of the elements that comprise these boundaries is deemed unethical, according to the Doughnut. Let us take two examples to showcase this understanding. One fundamental normative problem of AI derives from the bias inherent in the dataset with which the algorithm is trained to learn (Morley et al. 2020). This problem may be represented through the example of AI applications that predict the length of stay for each patient in the hospital. Aiming for efficiency as a goal, hospitals would benefit from knowing which patients are likely to have a shorter stay, thereby prioritising their care in order to free hospital spaces for new patients (Abd-Elrazek et al. 2021). In order to learn and make such predictions, the AI application is given the medical data of a large number of patients. Through supervised learning techniques, the AI would trace the length of stay of patients against other correlated data in their files and thereby “learn” that, for instance, people aged 18–24 have shorter lengths of stay for acute diseases (Abd-Elrazek et al. 2021). In this case,
length of stay is correlated with age. However, data may show various correlations, some of which manifest an inherent discriminatory bias. When such AI applications were tested in the University of Chicago academic hospital system, the AI application “learned” from the dataset that people from certain postal codes were likely to have shorter stays (Nordling 2019). Those postal codes turned out to belong to areas populated primarily by white upper-class people. The implication of this bias for healthcare is that people would receive prioritised care depending on where they live or on the racial or ethnic group to which they belong (Garattini et al. 2017). Such a result, from the use of AI applications aimed at efficiency, would threaten the social foundation boundary, since it conflicts with at least one of the elements that comprise it, namely, ensuring healthy lives and wellbeing for all. It is important to point out that this conclusion does not imply that the AI application is incompatible in and of itself; efficiency is a worthy pursuit, just as bias in data can be useful (Gigerenzer and Brighton 2009). However, this AI application, operating on the basis of this bias, would be incompatible with the Doughnut.

With regard to the ecological ceiling, machine-learning AI applications may pose a serious threat. The computing power required by machine learning increased 300,000-fold from 2012 to 2018. Seemingly simple AI applications may consume approximately 3 gigawatt-hours of electricity for their learning process, the same amount of energy needed to fuel three nuclear power plants for one hour (Knight 2021). For this example, the case of Bitcoin, a digital currency, proves useful. Bitcoin is the world’s largest cryptocurrency, utilising a proof of work (PoW) algorithm and relying on blockchain as a database technology. Digital and decentralised, Bitcoin is used primarily for its novelty of providing transparency and trust among its users, due to its verifiable system. However, it is this capability that makes Bitcoin consume 0.55% of the planet’s electricity, matching the electrical consumption of Poland, the carbon footprint of Oman, and the electronic waste of the Netherlands (Digiconomist 2022). Moreover, the energy consumed comes primarily from non-renewable and polluting resources, such as fossil fuels. Therefore, the operations of Bitcoin pose a threat to the ecological ceiling that the Doughnut aims to protect. Such an understanding does not imply that technologies like Bitcoin would be banned under the model of the Doughnut, but that, considering the threat to the ecological ceiling, it would be necessary to address the unsustainability of the system.

Moreover, some types of economic activities may support one boundary but threaten the other. Such is the case of smart grids – an AI technology that offers a promise towards protecting the ecological boundary but presents a threat to the social foundation. Smart grids are an AI solution that aims at efficiency, particularly of the energy and water grids. Their main capability is to integrate the behaviour and actions of all the users connected to them, through data-driven and other grid-related technical solutions. The smart grid’s promise to make the grid more efficient is based on lower consumption of energy; its capability to integrate users with new requirements offers the possibility to include distributed energy sources, like renewable energy sources, as well as to provide stronger control over these sources.
Moreover, by involving consumers in the energy market and improving the market
functioning in general, they offer incentives for consumers to produce and trade energy from renewable sources (European Commission 2011). As such, smart grids offer a substantial promise for the protection of the ecological ceiling. Less consumption, higher use of renewable resources, and less wasted energy contribute to the preservation of the planetary boundaries, especially in combatting climate change. However, reports and studies have raised concerns over the impact that the implementation of smart grids would have on vulnerable consumers (Sovacool et al. 2019). Vulnerable consumers may have more difficulty becoming price-sensitive or engaging with the market, either because they may not possess the knowledge or the time, or because of the stress and anxiety created by the quantity of information that smart grid technologies generate. Another concern for vulnerable consumers is the necessity to update their electrical appliances so that they can be integrated within the smart grid. While the EU and member states are expected to bear the costs of implementing smart grids, consumers must bear their own costs to update their electrical appliances in order to support them (Milchram et al. 2018). A heavier burden is therefore placed on vulnerable consumers, triggering a threat to the social foundation and to the fulfilment of human needs. As a result, smart grids pose a question to the Doughnut, insofar as they offer a promise to protect the ecological ceiling and a threat to breach the social foundation. The Doughnut would have to provide an answer. The safe and just space for humanity is comprised of economic activities that threaten neither the social foundation nor the ecological ceiling. In other words, economic activities that threaten one of the boundaries already step outside this safe and just space. It follows that, according to the Doughnut, smart grid technologies may be implemented in support of the ecological ceiling only if they do not infringe the social foundation. The Doughnut would, for instance, require that measures be put in place to ensure that vulnerable consumers do not bear a heavier burden as a result of the implementation of the technology.
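The compatibility test applied throughout this section – an activity sits in the safe and just space only if it threatens neither the social foundation nor the ecological ceiling – can be sketched as a simple check. The sketch below (in Python) is only an illustrative rendering of that two-constraint logic; the element names follow the dimensions listed in this chapter, and the smart grid example encodes the discussion above rather than any formal assessment.

```python
# Illustrative sketch of the Doughnut's two-boundary test discussed in Sect. 4.1.
# The element names and the example inputs are assumptions for illustration only.

SOCIAL_FOUNDATION = {
    "food", "health", "education", "income and work", "water and sanitation",
    "energy", "networks", "housing", "gender equality", "social equity",
    "political voice", "peace and justice",
}

ECOLOGICAL_CEILING = {
    "climate change", "ocean acidification", "chemical pollution",
    "nitrogen and phosphorus loading", "freshwater withdrawals",
    "land conversion", "biodiversity loss", "air pollution",
    "ozone layer depletion",
}

def within_safe_and_just_space(threatened_social, threatened_ecological):
    """An activity is compatible with the Doughnut only if it threatens no
    element of the social foundation and no planetary boundary."""
    breaches_foundation = bool(SOCIAL_FOUNDATION & set(threatened_social))
    breaches_ceiling = bool(ECOLOGICAL_CEILING & set(threatened_ecological))
    return not (breaches_foundation or breaches_ceiling)

# Smart grids as discussed above: a promise for the ecological ceiling, but,
# without safeguards, a heavier burden on vulnerable consumers threatens the
# social foundation, so the activity falls outside the safe and just space.
print(within_safe_and_just_space(
    threatened_social={"energy", "social equity"},
    threatened_ecological=set(),
))  # -> False
```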
4.2 The Missing Circles

Having explored how three AI applications would interact with the Doughnut, this part focuses on a fourth and final example: social credit systems (SCSs). SCSs are AI applications that rely on big data and are used to rate citizen trustworthiness, among other objectives. The predecessor of SCSs is credit scoring, used widely across the world but limited to financial matters and regulated by law. An SCS goes beyond financial matters and offers the possibility to rate social aspects of business entities and individuals. A concrete case of SCSs can be traced to the People’s Republic of China (State Council 2014), where a planning outline aims at assessing the trustworthiness of individuals with respect to legal, social, and ethical standards (Chen and Cheung 2017). Summarised in a few words, the SCS would collect data about how individuals act and rate their behaviour according to the desired standard. Rewards for complying with the standard might involve fast-track promotions,
whereas individuals that fall under the designated standard may be denied certain perks or even rights. Fuelled by big data, SCSs may become an efficient tool of extended government control over citizens. Big data sources may include administrative, transactional, sensor, tracking, behavioural, and opinion data (Chen and Cheung 2017). In the draft regulation published in April 2021, the European Commission proposes an outright ban on SCSs in the European Union, which indicates the potential for harm that this technology bears. How does the SCS fare within the Doughnut? The first test is to understand whether the SCS would breach either of the boundaries that comprise the Doughnut. If we first consider the ecological ceiling, comprised of the planetary boundaries, the SCS presents an opportunity to safeguard the ceiling if such objectives are included in the rating criteria of the system. For instance, citizens may be rated depending on how well they care for the environment, how much waste they recycle, or how much plastic they use. Businesses may be rated depending on how much carbon dioxide they emit, or on whether they use regenerative practices. As such, the SCS would be operating without breaching, and perhaps even supporting, the ecological ceiling. If we consider the social foundation, the SCS presents another opportunity to advance its social goals. The rating of the SCS may depend on how well individuals respect gender equality in their life (SDG 5) or whether they share resources, like food or energy, with the poor (SDGs 1 and 2). The SCS rating might depend on how well the individual behaves as a landlord (SDG 11), how they address education in their family and community (SDG 4), and so on. The goals for peace and justice promote strong institutions and the combatting of corruption (SDG 16), goals that may be supported, perhaps even promoted, by SCSs. By complying with the two limitations of the Doughnut, SCSs would thus be operating within the safe and just space for humanity. At the same time, the SCS may function so that none of the other elements of the social foundation is breached. Clearly, certain uses of SCSs may breach these standards, for example, if a low rating means losing access to healthcare or being denied a job. However, an SCS can also operate without denying basic rights to citizens, specifically those laid down in the social foundation. As a result, it seems the SCS would not breach, and might even support, the social foundation along with the ecological ceiling. This understanding implies that the operation of an SCS would fall within the safe and just space of the Doughnut. However, this conclusion is not supported by the philosophical background of the Doughnut. The SCS operates on the ability to collect and aggregate the personal information of individuals, which is then used to compute their social credit score. These sources do not include only publicly known personal information, but also private personal information, like shopping activity and daily habits. As a result, individuals would have the impression that they are always under the surveillance of Lacan’s Big Other, the Orwellian Big Brother, or Bentham’s Panopticon applied on a large scale. Such a feeling or impression has considerable effects on the individual’s right to form their own personality (van der Sloot 2015) and to pursue their right to flourish and fulfil the purpose of their existence (Floridi 2013), since the individual is conditioned by externally mandated interferences.
As a result, such a use of SCSs would be unethical and would breach the concept of human dignity that ethics of
information advances and upon which the ideas of the Doughnut rely. Therefore, the conceptual model of the Doughnut reveals a considerable shortcoming. As this example shows, there can be economic activities that abide by both boundaries that form the Doughnut, yet still violate human dignity. This shortcoming relates to a broader discussion on the positive and negative dimensions of protecting human dignity (Whitman 2004). The continental European tradition, influenced by German and French legal traditions and unlike that of the common law, adopts a constitutional perspective in which human dignity is comprised of both positive and negative liberties. A positive liberty is the right to have a need fulfilled, e.g. the right to education, the right to food, the right to energy, and more. The social foundation of the Doughnut is comprised of such positive liberties, conceptualised as needs that all humans must be afforded. However, there is another dimension of human dignity, that of being free from external obstacles (Berlin 1969). A prominent case of negative liberty is the right to privacy, conceptualised as the right to form one’s own personality (van der Sloot 2015), free from external obstacles. The importance of this dimension is clear, yet it is missing from the conceptual model of the Doughnut. It would be necessary, pursuant to the Doughnut’s own philosophical background, to remedy this shortcoming. One option would be to modify the elements of the social foundation by including negative liberties. However, to preserve the positive nature of the elements comprising the social foundation, an alternative would be to introduce this addition within the safe and just space for humanity. Accordingly, the safe and just space for humanity would shrink slightly from the original conceptualisation, so that, besides not shooting above the ecological ceiling or falling below the social foundation, safe and just economic activities must also steer away from some new small circles within the safe and just space. The result would be a complete conceptualisation of how our economic activities protect human dignity, and a completed dignitarian approach to Doughnut economics.
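The “missing circles” proposed here can be read as a third check layered on top of the two boundary tests from Sect. 4.1. The sketch below (in Python) is a conceptual illustration only; the dignity-related interests it names, and the SCS example, are assumptions drawn from the discussion above, not a formal specification of the extended model.

```python
# Illustrative sketch of the dignitarian extension proposed in this section:
# besides the two boundaries, an activity must not infringe the negative
# liberties that form the "missing circles". Names are illustrative only.

MISSING_CIRCLES = {
    "privacy",
    "free development of one's personality",
    "freedom from pervasive surveillance",
}

def compatible_with_extended_doughnut(breaches_social_foundation,
                                      breaches_ecological_ceiling,
                                      infringed_dignity_interests):
    """Compatible only if both boundaries are respected AND no
    missing-circle interest is infringed."""
    hits_missing_circles = bool(MISSING_CIRCLES & set(infringed_dignity_interests))
    return not (breaches_social_foundation
                or breaches_ecological_ceiling
                or hits_missing_circles)

# A social credit system as discussed above: it may respect both boundaries,
# yet its pervasive data collection conditions how individuals form their own
# personality, so the dignity check fails and the activity is incompatible.
print(compatible_with_extended_doughnut(
    breaches_social_foundation=False,
    breaches_ecological_ceiling=False,
    infringed_dignity_interests={"privacy", "free development of one's personality"},
))  # -> False
```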
5 Conclusions

This contribution aims at providing a more concrete and accurate understanding of Doughnut economics and its model and ideas. In doing so, it provides a comprehensive description of the Doughnut and its connection with the SDGs. Then, it inquires into the philosophical background of Doughnut economics, questioning its existential rationale that relies on human dignity. Further, abiding by the principle that a concept is understood properly only when tested, examples of four AI applications are used to showcase how the Doughnut model would address their use and challenges. Doughnut economics is conceptually represented by two concentric circles, each standing as the boundary for the social foundation and the ecological ceiling; the space between the circles is the safe and just space for humanity, according to the Doughnut. Living in this space implies a paradigmatic shift in thinking like a
twenty-first-century economist, including shifting from mechanical equilibrium to systems thinking and being agnostic about growth by not using GDP as a measure of economic success. The chapter showed that the Doughnut relates to SDGs in two ways. Firstly, the SDGs fill the semantics of the concepts comprising Doughnut economics; secondly, Doughnut economics offers a conceptual frame and claim to operationalisation of SDGs. The Doughnut claims that the fundamental reason for its existence is dignity, which the chapter questions in relation to the various conceptualisations of dignity. Tracing the semantic evolution of this concept since antiquity, the chapter shows that the Doughnut fits with the concept of dignity advanced by information ethics, which is the anthropo-eccentric, patient-oriented, and ontocentric concept that perceives dignity as a prerequisite for flourishing and enriching the existence of any informational entity. Equipped with this new conceptual frame, the Doughnut model and its ideas are tested through four examples of AI applications: healthcare AI operating on unfair bias as a threat to the social foundation, Bitcoin energy expenditure threatening the ecological ceiling, smart grids offering to aid the protection of the ecological ceiling but threatening the social foundation, and SCSs which may abide by both boundaries yet threaten to infringe the concept of dignity as described above. From this testing exercise transpires the understanding that another limitation is required in the Doughnut model, pursuant to its philosophical background. Hence, besides economic activities that may breach the ecological ceiling or the social foundation, activities that infringe human dignity should be incompatible with the Doughnut model. Pursuant to the playfully serious nature of the Doughnut, these limitations may be perceived as chocolate chip additions to a more nuanced Doughnut model.
References Abd-Elrazek, Merhan A., Ahmed A. Eltahawi, Mohamed H. Abd Elaziz, and Mohamed N. Abd- Elwhab. 2021. Predicting Length of Stay in Hospitals Intensive Care Unit Using General Admission Features. Ain Shams Engineering Journal 12 (4): 3691–3702. https://doi. org/10.1016/j.asej.2021.02.018. Amsterdam, Gemeente. 2022. Policy: Circular Economy. English Site. https://www.amsterdam.nl/ en/policy/sustainability/circular-economy/. Accessed 31 Jan 2022. Aquinas, Thomas. 1486. Summa Theologica. Venice: Bernardinus Stagninus, de Tridino. Beauchamp, Tom L., and James F. Childress. 2019. Principles of Biomedical Ethics. New York: Oxford University Press. Berlin, Isaiah. 1969. Four Essays on Liberty. London: Oxford University Press. Chen, Yongxi, and Anne S.Y. Cheung. 2017. The Transparent Self Under Big Data Profiling: Privacy and Chinese Legislation on the Social Credit System. Journal of Comparative Law 12: 356. Cicero, Marcus Tullius, and Günter Laser. 2014. De Re Publica. Stuttgart: Reclam. Crisp, Roger. 2014. Aristotle: Nicomachean Ethics. Cambridge: Cambridge University Press. Digiconomist. 2022. Bitcoin Energy Consumption Index. Digiconomist. https://digiconomist.net/ bitcoin-energy-consumption/. Accessed 31 Jan 2022.
European Commission. 2011. Smart Grids: From Innovation to Deployment COM 202. EURlex. https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2011:0202:FIN:EN:PDF. Floridi, Luciano. 2013. The Ethics of Information. Oxford: Oxford University Press. Garattini, Chiara, Jade Raffle, Dewi N. Aisyah, Felicity Sartain, and Zisis Kozlakidis. 2017. Big Data Analytics, Infectious Diseases and Associated Ethical Impacts. Philosophy & Technology 32 (1): 69–85. https://doi.org/10.1007/s13347-017-0278-y. Gigerenzer, Gerd. 2010. Moral Satisficing: Rethinking Moral Behavior as Bounded Rationality. Topics in Cognitive Science 2 (3): 528–554. https://doi.org/10.1111/j.1756-8765.2010.01094.x. Gigerenzer, Gerd, and Henry Brighton. 2009. Homo Heuristicus: Why Biased Minds Make Better Inferences. Topics in Cognitive Science 1 (1): 107–143. https://doi.org/10.1111/j.1756-8765 .2008.01006.x. Grossman, Gene M., and Alan B. Krueger. 1995. Economic Growth and the Environment. The Quarterly Journal of Economics 110 (2): 353–377. https://doi.org/10.2307/2118443. Habermas, Jurgen, and Seyla Ben-Habib. 1981. Modernity Versus Postmodernity. New German Critique 22: 3. https://doi.org/10.2307/487859. International Monetary Fund. 2014. Redistribution, Inequality, and Growth. International Monetary Fund. https://www.imf.org/external/pubs/ft/sdn/2014/sdn1402.pdf. Kant, Immanuel, and Hermann Klenner. 1988. Immanuel Kant Rechtslehre Schriften Zur Rechtsphilosophie. Berlin: Akademie Verlag. Knight, Will. 2021. AI Can Do Great Things—If It Doesn’t Burn the Planet. Wired, January 21. Lebech, Mette. 2004. What Is Human Dignity? Maynooth Philosophical Papers 2: 59–69. https:// doi.org/10.5840/mpp200428. ———. 2009. On the Problem of Human Dignity. Würzburg: Königshausen & Neumann Studien. May, James R., and Erin Daly. 2020. The Role of Human Dignity in Achieving the UN Sustainable Development Goals. Global Environmental Law Annual 2021: 59–76. Milchram, Christine, Rafaela Hillerbrand, Geerten van de Kaa, Neelke Doorn, and Rolf Künneke. 2018. Energy Justice and Smart Grid Systems: Evidence from the Netherlands and the United Kingdom. Applied Energy 229: 1244–1259. https://doi.org/10.1016/j.apenergy.2018.08.053. Morley, Jessica, Caio C.V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo, and Luciano Floridi. 2020. The Ethics of AI in Health Care: A Mapping Review. Social Science & Medicine 260: 113172. https://doi.org/10.1016/j.socscimed.2020.113172. Mozaffari, Mohammad Hossein. 2011. Human Dignity: An Islamic Perspective. An International Journal of Academic Research 54 (4): 2–15. Nordling, Linda. 2019. A Fairer Way Forward for AI in Health Care. Nature 573–7775: S103–S103. Nussbaum, Martha C. 2001. The Fragility of Goodness: Luck and Ethics in Greek Tragedy and Philosophy. Cambridge: Cambridge University Press. Peirce, Charles S., and Carolyn Eisele. 1985. Historical Perspectives on Peirce’s Logic of Science. Berlin: Mouton Publishers. Quran. Al Isra: 70. Raworth, Kate. 2017a. A Doughnut for the Anthropocene: Humanity’s Compass in the 21st Century. The Lancet Planetary Health 1 (2): e48–e49. https://doi.org/10.1016/s2542-5196(17)30028-1. ———. 2017b. Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist. White River Junction: Chelsea Green Publishing. Rockström, Johan, Will Steffen, Kevin Noone, Persson Åsa, F. Stuart Chapin III, Eric Lambin, Timothy M. Lenton, et al. 2009. Planetary Boundaries: Exploring the Safe Operating Space for Humanity. Ecology and Society 14 (2). 
https://doi.org/10.5751/es-03180-140232. Schokkaert, Erik. 2019. Review of Kate Raworth’s Doughnut Economics. London: Random House, 2017, 373 p. Erasmus Journal for Philosophy and Economics 12(1): 125–132. https:// doi.org/10.23941/ejpe.v12i1.412. Schreiber, Rudolf. 2004. Neue Wege Im Naturschutz. Blog. ASK. https://www.ask-eu.de/ News/6787/Neue-Wege-im-Naturschutz.htm.
Schulze, Karsten, and Rainer Schretzmann. 2006. Wald mit Zukunft: nachhaltige Forstwirtschaft in Deutschland. Die Deutsche Nationalbibliothek. https://www.deutsche-digitale-bibliothek.de/ item/AQPG5LIPG2LBMNZNS5FNZ5PG2ZL5MTEA. Accessed 31 Jan 2022. Simon, Herbert A. 1986. Rationality in Psychology and Economics. The Journal of Business 59 (S4): S209. https://doi.org/10.1086/296363. Sovacool, Benjamin K., Mari Martiskainen, Andrew Hook, and Lucy Baker. 2019. Decarbonization and Its Discontents: A Critical Energy Justice Perspective on Four Low-Carbon Transitions. Climatic Change 155 (4): 581–619. https://doi.org/10.1007/s10584-019-02521-7. Spindler, Edmund A. 2013. The History of Sustainability the Origins and Effects of a Popular Concept. In Sustainability in Tourism, ed. Jenkins I. Schröder, 9–31. Wiesbaden: Springer Gabler. State Council. 2014. Shehui Xinyong Tixi Jianshe Guihua Gangyao. People’s Republic of China. Steffen, Will, Katherine Richardson, Johan Rockström, Sarah E. Cornell, Ingo Fetzer, Elena M. Bennett, Reinette Biggs, et al. 2015. Planetary Boundaries: Guiding Human Development on a Changing Planet. Science 347 (6223). https://doi.org/10.1126/science.1259855. Tronto, Joan. 1993. Moral Boundaries. New York: Routledge. UNDP. 2022. Sustainable Development Goals | United Nations Development Programme. https:// www.undp.org/sustainable-development-goals. van der Sloot, Bart. 2015. Privacy as Personality Right: Why the ECtHR’s Focus on Ulterior Interests Might Prove Indispensable in the Age of “Big Data”. Utrecht Journal of International and European Law 31 (80): 25–50. https://doi.org/10.5334/ujiel.cp. Veblen, Thorstein. 1898. Why Is Economics Not an Evolutionary Science? The Quarterly Journal of Economics 12 (4): 373. https://doi.org/10.2307/1882952. Whitman, James Q. 2004. The Two Western Cultures of Privacy: Dignity Versus Liberty. The Yale Law Journal 113 (6): 1151. https://doi.org/10.2307/4135723.
The Role of AI in SDG: An African Perspective Steve A. Adeshina and Oluwatomisin Aina
Abstract Artificial intelligence (AI) has revolutionized different sectors, and enabling the Sustainable Development Goals (SDGs) will be no exception. The focus of the SDGs is to provide a "blueprint to achieve a better and more sustainable future for all by 2030". The 17 SDGs are interconnected, and achieving one of the goals creates a ripple effect across the others. These ripple effects can be created when stakeholders use past information (data) on societal, environmental, and economic factors, observing patterns and proffering actionable solutions. AI is a key driver that can leverage this data to create revolutionary approaches to solving global problems. For example, researchers have developed AI-based systems that can detect breast and cervical cancer in women, helping to ensure healthy lives and promote well-being. From an African standpoint, a significant area where AI can make an immense contribution is in building institutions that uphold peace and justice and promote the rule of law at all national levels, using data to make those institutions solid. AI can also leverage blockchain and distributed ledgers to support voting and strengthen electoral integrity. These, among other projects where AI systems have been deployed or can potentially be integrated, are discussed further in this chapter. Keywords Africa · Artificial intelligence · Good health · Peace and justice · SDG
S. A. Adeshina (*) · O. Aina Nile University of Nigeria, Abuja, Nigeria e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_8

1 Introduction The Sustainable Development Goals (SDGs), developed by the United Nations (UN) in 2015, were introduced to enable countries to achieve a better and more sustainable future for all by 2030. The SDGs, a sequel to the Millennium Development Goals
(MDGs), were introduced to complete the goals that the MDGs could not accomplish. The progress made under the MDGs was uneven, as vulnerable countries (e.g., African, landlocked, and least developed countries) could not realize those goals. The UN then committed to creating more robust goals, ensuring that they encompass the three significant dimensions required for sustainable development: economic, social, and environmental (Lee et al. 2016). It is essential to note that the SDGs do not override the African Union (AU) Agenda 2063 or the New Partnership for Africa's Development (NEPAD); rather, with their 17 goals and 169 targets, they work alongside these frameworks toward Africa's goals. These targets consider the circumstances of various countries and incorporate Africa's own objectives. Waal (2002) and DeGhetto et al. (2016) give good summaries of the AU Agenda 2063 and NEPAD. As stated earlier, the 17 SDGs are interconnected, and a ripple effect is created when one of the goals is achieved. It can therefore be argued that, of all the goals, some will have a more significant effect than others. The goal whose effect will transcend all others is goal 16, which aims to "promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels" (Lee et al. 2016). This goal aims to ensure that government institutions are effective and accountable, that bribery and corruption are reduced, and that the rule of law is promoted, among other targets; in summary, that good governance is fostered in a nation. Governance is the means of steering sustainable development, as it involves collaboration between stakeholders in policy-making and implementation (Meadowcroft 2007; van Zeijl-Rozema et al. 2008). Of the central aspects of governance for sustainable development (participation, policy coherence, reflexivity and adaptation, and democratic institutions), democratic institutions and participation have been proposed as the most significant (Glass and Newig 2019). Democratic institutions are characterized by a seamless electoral process, adequate access to information, the rule of law and civil rights, and political liberties. However, the absence of true democracy and good governance has been identified as a significant reason why African countries struggle to achieve their developmental goals (Chimhowu and Hulme 2013). In Africa, the electoral process is prone to election violence before, during, and even after elections. For example, during the elections in Nigeria in 2019, some of the triggers of election violence included hate speech, insurgency and insecurity, and the hijacking of electoral materials, among others. Other triggers that hinder election integrity include money laundering and the misappropriation of funds. It is therefore pertinent to find ways to reduce these triggers to guarantee that the SDG 16 goal and targets are realized. Another important SDG that significantly impacts other goals is SDG 3, which focuses on "ensuring healthy lives and promoting well-being for all at all ages." Good health and well-being are both a catalyst for the SDGs and an inheritor of the consequences when other SDGs are not achieved. As a catalyst, good health is necessary to actualize the SDGs, as the population (individuals) of a nation will be required to ensure the
implementation of these goals. For instance, Goal 7 focuses on providing affordable, reliable, sustainable, and modern energy, which requires engineers and researchers to drive it. If these individuals are unhealthy, the goal cannot be achieved optimally. The need for human capital is not limited to the SDGs; it applies across the board, as human beings are the most important driving force for achieving these goals. As an inheritor, failure to achieve some of the SDGs will have repercussions on the well-being of individuals. For example, SDG 13 focuses on combating climate change and its impacts. If it is not achieved, by reducing pollutants among other targets, the result could be diseases such as cancer, causing over seven million deaths yearly (Laar et al. 2019). This shows that failure to achieve some of these goals cycles back to SDG 3 (health and well-being), inhibiting its development. One of the main factors inhibiting good health in Africa is that the ratio of trained health practitioners to the population is low. According to Naicker et al. (2010), for a population of 10,000 patients, the ratio of available doctors to nurses/midwives is 2:11, which is low compared to Europe and America, with ratios of 32:78 and 19:49, respectively. This shortage results in burnout and, in some cases, prevents patients from accessing health care. Another critical factor hindering well-being in Africa is the low allocation of resources. In 2001, African Union member states agreed in Abuja that 15% of government budgets should be allocated to the health sector to ensure that citizens can afford health care and to reduce out-of-pocket payments. Unfortunately, most countries in Africa spend less than 7% of their GDP on health (Organization 2013). As a result of low financing, the maintenance or development of infrastructure that can aid diagnosis and treatment is limited. Andrew Ng, one of the most globally recognized leaders in AI, gave a remarkable quote about the field: "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years" (Ng 2018). AI has the potential to foster or inhibit the development of the SDGs. Vinuesa et al. (2020) showed that AI can be a double-edged sword: it can act as an enabler of 75% of the SDG targets but can also negatively impact some of them. A notable way AI has helped advance the SDGs is the mobile app PlantVillage Nuru in Tanzania. The app diagnoses diseases and pests on cassava plants using a mobile phone and does not require Internet access (Mrisho et al. 2020). As a result, it potentially reduces hunger (SDG 2) and fosters innovation (SDG 9). Another outstanding example is the clean water AI test system (Agrawal et al. 2018), which performs real-time analysis without the Internet to detect contamination; this solution is not limited to developing regions such as Africa. In this way, SDG 6 on water and sanitation, SDG 3 on good health and well-being, and SDG 9 on innovation are advanced to some extent. Subsequent sections of this chapter focus on the role of AI in achieving the SDGs from an African perspective. Factors hindering the adoption of AI in Africa, challenges, and future recommendations are also discussed in detail.
1.1 AI and SDG One of the most significant ways AI is gradually being introduced to African economies is through the development of strategies or blueprints to democratize AI in various sectors. For example, Mauritius was the first Sub-Saharan African country to publish a national AI strategy. This is an essential step in the right direction, because such an action plan helps countries deliberate, make, and document decisions that suit each country's peculiarities. Other countries with well-defined strategies are Zambia, Tunisia, Botswana, and Egypt. Countries without a well-defined strategy are nevertheless gradually initiating task forces, agencies, and commissions to find specific ways of integrating AI into their economies. Another critical factor that must be considered in implementing these strategies is the "how": the implementation methods, which rely mainly on individuals' skills and expertise. It is essential to have skilled and capable individuals who can implement AI technologies to achieve the SDGs. One significant way is educating indigenous citizens so that they gain the necessary skills. This makes human capital readily available and, most importantly, cost-effective. The initial investment in setting up AI-focused education in universities, training, and research institutions is indeed expensive. However, it is cheaper in the long run, as expatriates are no longer needed to provide these solutions and the need to import devices or outsource AI integration is reduced. In addition to helping achieve the SDGs, it also increases a country's GDP. For example, Egypt, South Africa, Morocco, and Rwanda have designated faculties and centers in some of their universities to train and provide knowledge on AI and its applications. Other countries, such as Ethiopia and Senegal, collaborate with entrepreneurs and foreign universities to ensure knowledge gaps are filled. For example, in 2019, a malaria challenge was proposed by IBM Research Africa to participants in Indaba, an organization with the aim of strengthening AI and machine learning in Africa.1 Collaboration across AI-focused groups like Data Science Africa, Women in Machine Learning, and others has also increased research output and helped bridge knowledge gaps. In addition, countries like Nigeria have strong AI communities that provide knowledge and education, mentorship, internships, and collaboration between different stakeholders.
1 https://zindi.africa/competitions/ibm-malaria-challenge

1.2 AI and SDG 3: Current State in Africa Today Many companies (private and public) are gradually leveraging AI in health practices to ensure citizens' good health and well-being. One of the most AI-savvy countries in Africa, Rwanda, collaborated with Zipline by using drones to deliver blood packs to remote areas within 75 km of the
distribution centers. In addition, it utilized instant messaging (WhatsApp) to make requests, GPS navigation, and air traffic control for delivery. This development has saved lives, particularly those of women who lose blood during a Cesarean section, and it helps ensure a more equal distribution of blood, since some hospitals hold packs of unused blood while others have none (Ackerman and Strickland 2018). Rwanda also appears to be investing heavily in digital health, as the country has a 10-year partnership with Babylon (Babyl), a UK-based company that makes health care accessible and affordable. As a result of the collaboration, Babyl in Rwanda provides virtual consultations with doctors and experts, appointment booking, and prescription delivery, and offers referrals to citizens when needed. The platform aims to use AI to analyze symptoms and provide follow-up treatments to citizens in the future. Notably, citizens can access prescriptions and lab tests with their insurance cards. During the Covid-19 period, the Rwandan government launched robots that can screen up to 150 people per minute for fever and report abnormal cases to officers. These robots can also detect individuals not wearing masks correctly and prompt them to adjust. The robots, which cost $30,000 each, were made by Zoro Bots, a Belgian company specializing in robots.2 Ghana also collaborated with Zipline3 to deliver masks and personal protective equipment (PPE) during the election in December 2020. The drones delivered PPE to 29 polling units within fifteen (15) hours, saving 40%. To tackle the shortage of radiologists in Egypt, a platform, Rology,4 was created that remotely matches images from hospitals to radiologists. This was particularly useful during the Covid-19 pandemic, when many cases needed diagnosis by radiologists. In Kenya, a company, Tambua Health,5 developed ultrasound scanners that provide point-of-care medical diagnosis. The scanners leverage deep learning and acoustics to diagnose respiratory diseases by generating images from the sounds of the heart and lungs. Also in Kenya, AfyaRekod,6 a digital platform based on AI and blockchain technology, was developed to capture and store patients' data and make it available to health facilities in real time. It provides data-driven insights to help doctors make decisions and provide better patient care. In South Africa, the startup hearX Group developed a smart hearing aid to tackle hearing loss and provide ear-care solutions, and it also provides a self-test platform for hearing tests (Kriel 2018). These examples are notable ways AI has been integrated into health care and is currently in operation. With the investments being made in research, the number of health-related platforms, apps, and devices will improve significantly over the next decade.
2 https://www.afro.who.int/news/robots-use-rwanda-fight-against-covid-19
3 https://assets.ctfassets.net/pbn2i2zbvp41/3yrQaMNdJ1u1J2aSEucjzt/4412ea5d12896d15b7eb41a2212d0295/Zipline_Ghana_PPE_Global_Healthcare_Feb-2021.pdf
4 https://rology.health/products-and-services/
5 https://www.tambuahealth.com/scanners
6 https://afyarekod.com/
1.3 AI and SDG 16: Current State in Africa Today One of the ways AI is being used to ensure strong institutions are built is through biometrics. Biometric identification systems are currently used in various African countries (the Democratic Republic of the Congo (DRC), Uganda, Angola, Nigeria) for voter registration. Over the years, biometric voter registration has been used to minimize registration fraud by integrating fingerprint identification and pictures of voters. It eliminates "ghost voters" and duplicate registrations, speeds up the voting process, and encourages citizens to participate, as it offers a degree of election integrity. In Zimbabwe, 77% of names were removed from the voters' roll in 2018 (Marumahoko 2020). In Nigeria, electronic collation of results was set up to collate results automatically, in contrast to the manual collation previously carried out. However, this failed as a result of poor network coverage for receiving results from various parts of the country, security challenges (some results were hijacked before they could be sent), and poor infrastructure. Other standard AI tools used by African countries are drones and satellites that take aerial videos or pictures of areas in real time. This is beneficial during elections to prevent electoral violence and rigging: once an irregular pattern is noticed in an area during the survey, security agencies can be alerted to curb irregularities. The UN Global Pulse Lab leveraged one of the AI techniques, natural language processing (NLP), to analyze radio and social media data. These data were used to identify trending topics that could hinder peace among citizens via fake news, and to detect social tensions and misconceptions that could cause conflicts. This is useful as it helps government agencies take action in the nick of time before things escalate. The team also developed a tool called QataLog, used to extract, analyze, and visualize data from radio and social media (Pulse 2018). These are some practical areas where AI has been used to help ensure peace, justice, and strong institutions. It was observed that most technologies are currently at the conceptual stage (proof of concept) and have not yet been implemented. It is hoped that the focus in the next decade will be on implementation rather than just writing research papers.
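The following minimal Python sketch illustrates the general idea of keyword-level monitoring over text streams described above. It is a toy example written for this chapter, not the UN Global Pulse or QataLog implementation, and the transcript snippets are invented.

```python
# Illustrative keyword surfacing over hypothetical transcript snippets
# (not the UN Global Pulse / QataLog implementation).
from sklearn.feature_extraction.text import CountVectorizer

transcripts = [
    "queues at the polling unit are growing and tempers are rising",
    "rumours about missing ballot papers are spreading on the radio",
    "observers report calm voting in most polling units this morning",
]

# Count unigrams and bigrams, ignoring common English stop words.
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
counts = vectorizer.fit_transform(transcripts)

# Rank terms by total frequency across all snippets.
totals = counts.sum(axis=0).A1
terms = vectorizer.get_feature_names_out()
top_terms = sorted(zip(terms, totals), key=lambda t: -t[1])[:10]
for term, freq in top_terms:
    print(term, freq)
```

In practice, such raw counts would only be a first step; they would feed into topic models, tension classifiers, and human review rather than being used on their own.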
2 Challenges of Integrating AI in the SDGs According to Insights (2020), sub-Saharan Africa scored the lowest in the Government AI Readiness Index, while the United States topped the list. This index was based on governments' willingness and ability to adopt AI and on the availability of technological skills and tools (high-quality data and infrastructure). Mauritius, South Africa, and Seychelles were the top three countries in Africa; even so, Mauritius ranked only 45th in the world with a score of 53.86%. This report shows that Africa has a long way to go in integrating AI into most sectors of the economy, as several factors still hinder its adoption.
"If you fail to plan, you are planning to fail." This famous quote, attributed to Benjamin Franklin, summarizes the effect of not having a strategic plan for implementing AI. As mentioned earlier, many countries do not yet have a published blueprint that guides decisions, collaborations, and the implementation of AI to suit each country's needs. Consequently, there is no concrete governance framework to address ethical issues and ensure accountability. This is a sensitive issue, especially for the health sector, where activities must be backed by accountability. In addition, rules must exist and be adhered to in the event of adverse effects arising from the inappropriate use of AI. AI can be misused, which could spark violence and even a coup if not adequately managed. Recently in Gabon, there was controversy about a video of the president released by the government at a time when there were rumors regarding the president's health. The military staged a coup attempt, although it failed. Some members of the opposition claimed that the video was forged using deepfake technology; a deepfake is a synthetic piece of media created using deep learning (one of the most prominent AI techniques). The matter was later resolved by experts, who debunked the claim that the video was a deepfake. If stringent rules are not implemented to identify and punish offenders, there will be a continuous cycle: technologies like deepfakes will be used, will spark controversies leading to violence or coups, and will hinder the rule of law. During the electoral process, biometric identification and fingerprints have contributed to ensuring that elections are free and fair. However, according to reports in Adeniyi and Adeshina (2019), there have been instances of underage voters during registration in Nigeria. Field agents are saddled with deciding whether a voter should be registered or not, and this sparked controversy, leading citizens to question the biometric technology integrated into the electoral process. In Kenya, biometric identification systems led to arguments between political players, as it was believed that the accreditation system used before voting had been sabotaged (Jacobsen 2020). Such incidents could lead to skepticism in other areas like health care, where citizens will not trust the government enough to be involved in providing AI solutions. It is a known fact that AI technologies thrive on data, as the effectiveness of a proffered solution depends on the quality of the data used. Unfortunately, data is not readily accessible and available in most African countries. In some instances, organizations such as diagnostic and test centers may not understand the usefulness of the data they receive and, as a result, do not have the organized infrastructure and data management resources needed to utilize it. Most African researchers are therefore forced to use foreign-based datasets, which may not be a true reflection of the peculiarities common to African societies. Another challenge common to creating natural language processing applications is the diversity of African languages, which limits the available training data. Africa contributes about 30.15% of languages globally, about 2000 languages from the continent alone (Orife et al. 2020). African languages are highly complex and, unlike English, very difficult to generalize: they have diverse lexical and grammatical tone patterns, phonologies, and morphologies. Creating the necessary databases therefore requires a great deal of cost-intensive investment.
The cost of integrating AI can be considered one of the significant challenges the continent is facing. Cost spans the entire research cycle, from data acquisition down to the maintenance and upgrading of these solutions. Data acquisition involves collection as well as cleaning and annotation, which require the expertise of individuals in the domain fields of the data used. Engaging this expertise, either by using locally based experts (who are already limited in supply) or by outsourcing to organizations, is expensive. The computational resources used to build these AI solutions are also expensive, as high-performance servers, computers, and cloud computing services (Azure, Google Cloud) are required. These platforms need stable infrastructure, i.e., electricity and Internet access, for optimal use. For example, training a machine learning model on a cloud platform can take days and requires good Internet connectivity and stable electricity, which unfortunately are not readily available in much of Africa.
3 Discussions and Way Forward The previous sections of this chapter highlighted in detail the current ways AI has been integrated into achieving SDGs 3 and 16, together with the challenges. This section focuses on what can be done to foster AI. According to Botero et al. (2021), sub-Saharan Africa and South Asia scored the lowest in the world in the 2021 Rule of Law Index (RoLI); within Africa, however, Rwanda topped the list as the country with the best rule of law. The report defined the rule of law based on the following universal principles: accountability, just law, open government, and accessible and impartial justice. These principles are further developed and analyzed in terms of constraints on government powers, absence of corruption, open government, fundamental rights, order and security, regulatory enforcement, and civil and criminal justice. Hence, one way of improving the rule of law is digitizing court systems. Digitizing court systems will improve access to civil justice, reduce delays in resolving cases, improve transparency, ensure accountability, and reduce corruption. Improving these factors will go a long way toward ensuring peace and justice, as issues hindering the rule of law will be tackled. Finucan et al. (2018) proposed digital tools such as creating a resource planning system, using cloud-based tools for archiving, using online tools to provide information, and offering virtual help desks. Most collaborations and partnerships that are still in operation are usually with companies based abroad. To the best of the authors' knowledge, the number of AI companies in Africa that provide devices that can promote peace, justice, strong institutions, and health is very small, and collaborations with foreign-based AI research companies are typically expensive. AI researchers therefore need to go beyond writing research papers for publication; it is also paramount for them to look at ways to implement these solutions using locally sourced materials.
One way of reducing dependence on foreign-made AI-based devices is to leverage mobile phones, given their gradual penetration in Africa. It is predicted that smartphone connections will almost double to about 678 million by the end of 2025.7 The numbers seem promising; however, researchers must explore building machine learning algorithms that can run effectively on mobile devices without needing high computational resources (a minimal sketch of this on-device direction follows at the end of this section). Governments and private and public agencies need to invest in AI education and collaborations. This education should not be limited to individuals who work as AI engineers or researchers; it should be included in school curricula right from the primary level so that the next generation has a basic understanding of AI and its applications. Zipline has contributed to fostering AI in the health and electoral processes in both Rwanda and Ghana, and other African countries can take a cue from these countries and benefit as well. For example, information from Zipline's website8 shows that Cote d'Ivoire is also collaborating with Zipline to deliver blood, vaccines, and medical products by the end of 2022. Also, Kaduna, one of the states in Nigeria, signed a deal with Zipline to deliver Covid-19 vaccines and other medical products (blood and medicine) in the state. This deal will help the government eliminate the need to purchase ultra-low-temperature freezers and foster quick delivery of the vaccines. An interesting aspect of the uptake of AI is that these recommendations, focused on SDGs 3 and 16, also have a significant impact on the other SDGs. For example, making more investments in AI-focused education positively impacts SDG 4, which aims at providing quality education; SDG 9, which fosters innovation and infrastructure; and SDG 17, which seeks to strengthen the global partnership for sustainable development. Also, quality education drives innovation and job creation, resulting in full and productive employment and economic growth (SDG 8) and helping to eliminate poverty and hunger (SDGs 1 and 2). This is just a brief illustration of the importance of these SDGs and their impact on the other SDGs.
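As a minimal sketch of the on-device direction mentioned above, the snippet below converts a small Keras model into a quantised TensorFlow Lite model that can be bundled into a mobile app. The tiny network is a placeholder rather than a recommended architecture, and any trained model could take its place.

```python
# Convert a small Keras model to a quantised TensorFlow Lite model for phones.
# The model below is a placeholder; any trained tf.keras model can be used.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantisation
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # the .tflite file can be bundled into a mobile app
```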
4 Conclusion The potential of AI to both foster and inhibit SDGs 3 and 16 has been discussed in the previous sections of this chapter, along with ways of mitigating the factors inhibiting AI deployment in achieving these goals. More effort is needed from various stakeholders (i.e., governments, investors, citizens, and international bodies) to ensure that the potential of AI is fully realized. It is believed that, with gradual reforms and implementation, Africa will be on its way to becoming a central global AI hub.
7 https://www.gsma.com/mobileeconomy/wp-content/uploads/2020/09/GSMA_MobileEconomy2020_SSA_Eng.pdf
8 https://flyzipline.com/press/zipline-to-expand-to-ivory-coast/
References Ackerman, Evan, and Eliza Strickland. 2018. Medical Delivery Drones Take Flight in East Africa. IEEE Spectrum 55 (1): 34–35. Adeniyi, Ahmed A., and Steve A. Adeshina. 2019. Automatic Age Classification of Prospective Voters Using Deep Convolutional Neural Network. Paper read at 2019 15th International Conference on Electronics, Computer and Computation (ICECCO). Agrawal, Ajay, Joshua Gans, and Avi Goldfarb. 2018. Prediction Machines: The Simple Economics of Artificial Intelligence, AI-Driven Test System Detects Bacteria in Water. Intel Harvard Business Press. Botero, Juan Carlos, Mark David Agrast, and Alejandro Ponce. 2021. The World Justice Project (WJP) Rule of Law Index. Washington, DC: World Justice Project. Chimhowu, Admos, and David Hulme. 2013. Africa and the MDGs: Challenges and Priorities. In The Millennium Development Goals and Beyond, 170–181. Routledge. DeGhetto, Kaitlyn, Jacob R. Gray, and Moses N. Kiggundu. 2016. The African Union’s Agenda 2063: Aspirations, Challenges, and Opportunities for Management Research. Africa Journal of Management 2 (1): 93–116. Finucan, Logan, EB Sierra, and N Rajesh. 2018. Smart Courts: Roadmap for Digital Transformation of Justice in Africa. Glass, Lisa-Maria, and Jens Newig. 2019. Governance for Achieving the Sustainable Development Goals: How Important Are Participation, Policy Coherence, Reflexivity, Adaptation and Democratic Institutions? Earth System Governance 2: 100031. Insights, Oxford. 2020. Government AI Readiness Index 2020. Ottawa: IDRC. Retrieved October 1, 2020. Jacobsen, Katja Lindskov. 2020. Biometric Voter Registration: A New Modality of Democracy Assistance? Cooperation and Conflict 55 (1): 127–148. Kriel, Glenneis. 2018. Hearing Tests Go Digital. Finweek 2018(10): 12–12. Laar, Amos K., Alma J. Adler, Agnes M. Kotoh, Helena Legido-Quigley, Isabelle L. Lange, Pablo Perel, and Peter Lamptey. 2019. Health System Challenges to Hypertension and Related Non- Communicable Diseases Prevention and Treatment: Perspectives from Ghanaian Stakeholders. BMC Health Services Research 19 (1): 1–13. Lee, Bandy X., Finn Kjaerulf, Shannon Turner, Larry Cohen, Peter D. Donnelly, Robert Muggah, Rachel Davis, Anna Realini, Berit Kieselbach, and Lori Snyder MacGregor. 2016. Transforming Our World: Implementing the 2030 Agenda Through Sustainable Development Goal Indicators. Journal of Public Health Policy 37 (1): 13–31. Marumahoko, Sylvester. 2020. Biometric Voter Registration, Zimbabwe. Meadowcroft, James. 2007. Who Is in Charge Here? Governance for Sustainable Development in a Complex World. Journal of Environmental Policy & Planning 9 (3–4): 299–314. Mrisho, Latifa M., Neema A. Mbilinyi, Mathias Ndalahwa, Amanda M. Ramcharan, Annalyse K. Kehs, Peter C. McCloskey, Harun Murithi, David P. Hughes, and James P. Legg. 2020. Accuracy of a Smartphone-Based Object Detection Model, PlantVillage Nuru, in Identifying the Foliar Symptoms of the Viral Diseases of Cassava–CMD and CBSD. Frontiers in Plant Science 11: 1964. Naicker, Saraladevi, John B. Eastwood, Jacob Plange-Rhule, and Roger C. Tutt. 2010. Shortage of Healthcare Workers in Sub-Saharan Africa: A Nephrological Perspective. Clinical Nephrology 74: S129–S133. Ng, Andrew. 2018. AI Is the New Electricity. O’Reilly Media. Organization, World Health. 2013. State of Health Financing in the African Region. Orife, Iroro, Julia Kreutzer, Blessing Sibanda, Daniel Whitenack, Kathleen Siminyu, Laura Martinus, Jamiil Toure Ali, Jade Abbott, Vukosi Marivate, and Salomon Kabongo. 2020. 
Masakhane—Machine Translation For Africa. arXiv preprint arXiv:2003.11529. Pulse, UN Global. 2018. Experimenting With Big Data and Artificial Intelligence to Support Peace and Security.
van Zeijl-Rozema, Annemarie, Ron Cörvers, René Kemp, and Pim Martens. 2008. Governance for Sustainable Development: A Framework. Sustainable Development 16 (6): 410–421. Vinuesa, Ricardo, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature Communications 11 (1): 1–10. Waal, Alex de. 2002. What’s New in the ‘New Partnership for Africa’s Development’? International Affairs 78 (3): 463–476.
Artificial Intelligence for Advancing Sustainable Development Goals (SDGs): An Inclusive Democratized Low-Code Approach Meng-Leong How, Sin-Mei Cheah, Yong Jiet Chan, Aik Cheow Khor, and Eunice Mei Ping Say Abstract Despite the world becoming more interconnected than ever before, inequality and poverty continue to pose a threat to sustainable development. In response to these challenges, the United Nations Educational, Scientific and Cultural Organization (UNESCO) promotes Global Citizenship Education (GCED), which aims to instill values, attitudes, and behaviors in people so that they may consider the importance of responsible global citizenship – a concept that entails creativity, innovation, and dedication to peace, human rights, and sustainable development, among others. The GCED program raises the awareness of students of all ages to recognize that these issues are global in nature rather than localized and encourage them to participate actively in contributing to a peaceful, tolerant, inclusive, safe, and sustainable society. This research demonstrates how a user-friendly, low-code, human-centric probabilistic strategy can be utilized to democratize artificial intelligence (AI) usage, thus allowing analysts who are not computer scientists to use AI for social good. This reasoning approach can be useful in the predictive modeling of social issues that GCED is concerned with, which are demonstrated by the examples: (1) promoting global sustainable development, (2) alleviating malnutrition, (3) increasing financial inclusion for people who are underserved by traditional banking institutions, and (4) strengthening food security resilience. Keywords Global citizenship education · AI for social good · Bayesian networks · Artificial intelligence
M.-L. How (*) The University of Newcastle, Australia, Callaghan, Australia e-mail: [email protected] S.-M. Cheah Singapore Management University, Singapore, Singapore Y. J. Chan · A. C. Khor · E. M. P. Say Monash University, Melbourne, Australia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_9
1 Introduction Even as the world becomes increasingly interconnected, the effects of inequality and poverty on development and sustainability persist. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has been advocating Global Citizenship Education (GCED) in an effort to foster responsible global citizenship through creativity, innovation, and a commitment to peace (Pigozzi 2006). In this way, students of all ages would be able to appreciate that these problems are global in scope. Those who participate in GCED are encouraged to actively support more peaceful and inclusive societies. The premise is that students of all ages must be educated appropriately to become responsible global citizens. Increasingly, artificial intelligence (AI) is being used for the greater good (Floridi et al. 2020). Although it is difficult for educators or social scientists who are not trained in computer science to code and implement AI algorithms, it is nevertheless possible for them to do so. An easy-to-use probabilistic strategy for augmenting human thought is demonstrated in this research. This inclusive approach empowers analysts who are not computer scientists to apply AI for the predictive modeling of social concerns. Using this human-centric probabilistic approach as cognitive scaffolding, teachers may encourage students to ask more questions that will help them become responsible global citizens. This chapter examines how a user-friendly, low-code, human-centric probabilistic approach can be used to democratize AI usage, allowing non-computer scientists to use AI to contribute to the Sustainable Development Goals (SDGs). Examples will be presented in Sect. 4 of this chapter to show how this inclusive, low-code, human-centric probabilistic reasoning approach can be utilized in conjunction with AI techniques to harness actionable predictive insights for (1) the improvement of global sustainable development, (2) the amelioration of malnutrition, (3) the advancement of financial inclusion for people who are unserved by traditional banking institutions, and (4) the strengthening of food security resilience. The following is a quick summary of the four examples that will be presented in greater detail in Sect. 4 of this chapter. Example 1 will show, using a predictive model, how global sustainable development is crucial for humanity's survival. It is based on a study titled Artificial Intelligence-Enhanced Decision Support for Informing Global Sustainable Development: A Human-Centric AI-Thinking Approach (How et al. 2020b). Environmental health and ecosystem vitality performance indicators from 180 countries were analyzed using an inclusive and democratized no-code AI-enabled approach, revealing hidden tensions between two fundamental dimensions of sustainability: (1) environmental health, which promotes economic growth and increases affluence, and (2) ecosystem vitality, which deteriorates as a result of industrialization and urbanization. Example 2 will examine issues in malnutrition, which is one of the world's most serious but under-addressed sustainability issues, according to the World Health Organization (WHO) and the World Bank. The second example is based on a study titled Artificial Intelligence-Enabled Predictive Insights for Ameliorating Global
Malnutrition: A Human-Centric AI-Thinking Approach (How and Chan 2020). Malnutrition is a double burden, because both under- and over-nutrition lead to nutritional dysfunction or imbalance. Malnutrition causes significant and detrimental economic impact on individuals and populations. The growing economic cost of malnutrition that many countries are now facing is compounded by the coexistence of malnutrition and overweight, obesity, or diet-related noncommunicable diseases. Using a World Bank dataset from 180 countries, example 2 shows that AI can be democratized to enable analysts without a background in computer science to use human-centered explainable AI to simulate the dynamics between malnutrition, health, and population. Example 3 will investigate issues that hinder financial inclusion. It is based on a study titled Artificial Intelligence-Enhanced Predictive Insights for Advancing Financial Inclusion: A Human-Centric AI-Thinking Approach (How et al. 2020c). According to the World Bank, financial inclusion is critical to poverty reduction and fostering prosperity. Many of the most vulnerable groups, such as low-income families and microbusinesses, have little to no access to financial services. With the growing demand for inclusive financial solutions, AI has the potential to play a substantial role in assisting financial service providers (FSPs) in better understanding potential consumer needs. The issue that AI must solve should be centered on the customer, not the product, nor on increasing revenue for the FSP. AI is uniquely suited to assisting FSPs in understanding the preferences of prospective customers because it is capable of rapidly deciphering data patterns that humans may struggle to analyze, especially when planning for unforeseen emergencies. Example 4 will provide insights that can improve food security, which has become a greater global concern as pressures on the food system increase. It is based on a study titled Predictive Insights for Improving the Resilience of Global Food Security Using Artificial Intelligence (How et al. 2020a). A probabilistic approach based on human-centric artificial intelligence can be used to develop a predictive model from quantitative and qualitative data from the Global Food Security Index (GFSI). Using predictive modeling on the GFSI dataset, an inclusive AI-based approach is used to deduce relationships between food affordability, food availability, food quality and safety, and natural resource resilience.
2 Research Problem and Research Questions When it comes to analyzing the factors that influence global citizenship education, AI-based approaches are beneficial, and there are numerous projects that encourage the use of AI for social good (Taddeo and Floridi 2018). The problem, however, is that people without computer programming skills find it difficult to apply AI to examine data. To address this issue, the research questions that guide this work are as follows:
RQ1: How can analysts who are not trained in computer science use AI to examine data for the benefit of society?
RQ2: Is there a low-code, inclusive, and democratized strategy for applying artificial intelligence that does not necessitate considerable computer programming?
3 Methods To provide actionable predictive insights that can be used to contribute to social good, an inclusive, democratized, low-code AI-enhanced technique can be deployed. To demonstrate how an inclusive low-code human-centric probabilistic reasoning technique can be applied in the predictive modeling of social issues that are concerned with GCED, several case studies are explored. They are (1) promoting global sustainable development, (2) alleviating hunger, (3) advancing financial inclusion for people who are underserved by traditional banking institutions, and (4) increasing food security resilience. In addition to providing extensive customization of variables and artificial intelligence algorithms, low-code tools offer speedy drag-and-drop functions in predictive simulations (Chang and Ko 2017). Humans can concentrate on the all-important big picture, logical reasoning, and productive collaborative discussions with stakeholders across multiple domain verticals via “what-if” scenarios for optimizations and risk assessments, while AI-enabled software augments human ingenuity by taking care of the heavy lifting of making everything run smoothly in simulations. Historically, AI was more closely affiliated with computer science departments in universities than with faculties devoted to sustainability research. In recent years, AI has gained a larger foothold in business. This demonstrates the critical importance of teaching individuals not just in problem-solving that utilizes key notions from any one profession but also in AI. AI supports analysts who are not computer scientists by helping them ask better questions. Apart from computer science departments, educators from a variety of academic disciplines have been attempting to introduce students to popular AI concepts such as machine vision, natural language processing (NLP), machine learning (ML), deep learning (DL), or reinforcement learning (RL). Some students were also trained to create artificial neural networks (ANN), recurrent neural networks (RNN), convolutional neural networks (CNN), or generative adversarial networks (GAN). However, many AI applications still operate like black boxes, as Correa et al. (2009) have pointed out. They noted that the interactions between the nodes (or variables) in an artificial neural network may be compared to a black box. These interactions are either concealed from the user or are simply too complicated for the layperson to comprehend. To supplement the work of researchers and analysts who are focused on sustainability-related issues, but are not computer scientists, data that is AI-enhanced and evidence-based would augment their human-centric reasoning abilities. The current study provides an alternative AI-based strategy that may aid in human-centric thinking.
3.1 Rationale for Using the AI-Based Bayesian Network Approach When it comes to AI-related research, there are numerous tools available. The Bayesian network technique for statistical data analysis is one such useful tool to visualize the relationships between data variables (van de Schoot et al. 2014). A Bayesian network (BN) is a type of probabilistic graphical model that shows the relationships between variables, which can be conditionally dependent or independent. The BN technique is particularly well suited for evaluating non-parametric data because it does not impose the requirement of having a normal parametric distribution in the underlying parameters of a model (How and Hung 2019). Through the use of BN, practitioners may undertake hypothesis testing by incorporating information from previous studies into the current one. When analyzing data using the BN method, it is therefore not necessary to undertake many rounds of null hypothesis testing. The Bayesian approach has also been used by researchers such as Kaplan (2016), Levy (2016), Sperotto et al. (2019), and How (2019) because it enables them to measure information gain, as described in Claude Shannon’s information theory (1953). Shannon’s theory calculates the probabilistic amount of commonality between two data distributions that are not necessarily parametric.
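As a small illustration of the information-theoretic measures mentioned above, the Kullback-Leibler divergence between two discrete distributions can be computed with SciPy's entropy function. This is a generic sketch, not the specific computation performed in the cited studies, and the distributions are invented.

```python
# Generic sketch: information gain between two discrete distributions,
# measured here as the Kullback-Leibler divergence D(p || q).
# The distributions below are invented for illustration.
import numpy as np
from scipy.stats import entropy

p = np.array([0.5, 0.3, 0.2])   # e.g. observed distribution of a discretised indicator
q = np.array([0.4, 0.4, 0.2])   # e.g. distribution predicted by a model

kl_divergence = entropy(p, q)   # with two arguments, entropy() returns D(p || q)
shannon_entropy = entropy(p)    # with one argument, it returns H(p) in nats

print(f"D(p || q) = {kl_divergence:.4f} nats")
print(f"H(p)      = {shannon_entropy:.4f} nats")
```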
3.2 The Bayesian Theorem An overview of the Bayesian theorem is provided here, but it will not do justice to the rich corpus of BN. Readers who are interested in learning more about BN are recommended to read the works of Cowell et al. (1999), Jensen (1999), and Korb and Nicholson (2010). According to the mathematician and theologian Reverend Thomas Bayes (1763), the mathematical formula, which BN was built upon, is:

P(H|E) = [P(E|H) · P(H)] / P(E)
A hypothesis is represented by H and a piece of evidence is represented by E. P(H|E) is referred to as the conditional probability of the hypothesis H: the likelihood of the hypothesis H being true given that the evidence E is observed. It is also referred to as the posterior probability, i.e., the probability that the hypothesis H is true after taking into account how much the evidence E affects the likelihood that the hypothesis H is correct. Considered independently of one another, the probability of the hypothesis H being true and the probability of the evidence E being true are represented by P(H) and P(E),
respectively. P(H) and P(E) are referred to as the prior and the marginal probability, respectively. P(E|H) is the likelihood of the evidence E: it shows the probability of observing the evidence E given that the hypothesis H is true. The quotient P(E|H)/P(E) reflects the amount of support that the evidence E offers for the hypothesis H.
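To make the formula concrete, the following minimal Python sketch computes a posterior from assumed, purely illustrative probabilities; the hypothesis, evidence, and numbers are invented for demonstration.

```python
# Minimal worked example of Bayes' theorem with illustrative numbers.
# Hypothesis H: "a country meets a given sustainability target".
# Evidence E: "its air-quality indicator falls in the top quartile".
p_h = 0.30              # prior P(H): assumed probability that H is true
p_e_given_h = 0.80      # likelihood P(E|H): probability of the evidence if H is true
p_e_given_not_h = 0.20  # probability of the evidence if H is false

# Marginal probability of the evidence, P(E), via the law of total probability.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior P(H|E) = P(E|H) * P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"P(E)   = {p_e:.3f}")          # 0.380
print(f"P(H|E) = {p_h_given_e:.3f}")  # approximately 0.632
```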
3.3 The Research Model The major purpose of the current work is to provide one of many viable approaches to educing (drawing out) AI-augmented thinking, in order to guide ongoing research and policymaking. However, the goal of providing the exemplars is not to promote the Bayesian network as the best tool for educing AI-augmented thinking, but rather to encourage analysts to consider the trustworthiness of AI-based analysis techniques in general and, hopefully, to exercise AI-augmented thinking when discussing AI and sustainability-related issues with their stakeholders. In other words, when it comes to problem-solving, it is much more crucial to ask questions and consider alternative solutions than to attempt to arrive at the so-called absolutely correct answer. The probabilistic reasoning methods utilized in this chapter were based on the BN. It has been shown to be effective in investigating optimization and the predictive modeling of relationships between variables of theoretical constructs, even when they are not physically related, because BN can incorporate multi-variable analytical concepts such as Markov blankets (Tsamardinos et al. 2003) and response surface methodology (Myers et al. 2009). Two distinct types of analytics will be shown with the aid of semi-supervised machine learning BN models in examples 1 to 4 in the subsequent sections. The first type is: Descriptive analytics of "what has already occurred?" The purpose of this technique is to employ descriptive analytics to uncover themes within the acquired data. For descriptive analytics, BN modeling will automatically determine the data distribution of each column in the dataset using the parameter estimation procedure.
The second type is: Predictive analytics based on hypothetical “what-if?” scenarios The purpose of this technique is to use predictive analytics for in silico experiments with completely adjustable settings in order to forecast counterfactual consequences. To assist policymakers in their decision-making, a probabilistic Bayesian technique will be utilized to model best- and worst-case scenarios for various sustainability-related levels. In predictive analytics, counterfactual simulations can be used to investigate patterns in the datasets.
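As a rough sketch of how these two types of analytics could be reproduced in code, the example below uses the open-source pgmpy library on a tiny, invented, discretised dataset. The cited studies used low-code BN software rather than this library, and the variable names and network structure here are assumptions made purely for illustration.

```python
# A minimal Bayesian network sketch with pgmpy (hypothetical data and structure).
# Note: in older pgmpy versions the class is named BayesianModel instead.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Hypothetical discretised country-level data (values are illustrative only).
data = pd.DataFrame({
    "gdp_per_capita":       ["high", "high", "low", "low", "high", "low"],
    "environmental_health": ["high", "high", "low", "low", "low", "low"],
    "ecosystem_vitality":   ["low", "high", "high", "high", "low", "high"],
})

# Assumed structure: affluence influences both sustainability dimensions.
model = BayesianNetwork([
    ("gdp_per_capita", "environmental_health"),
    ("gdp_per_capita", "ecosystem_vitality"),
])

# Descriptive analytics: learn the conditional probability tables from the data.
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Predictive ("what-if") analytics: query a counterfactual scenario.
infer = VariableElimination(model)
result = infer.query(variables=["environmental_health"],
                     evidence={"gdp_per_capita": "low"})
print(result)
```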
4 Discussion 4.1 Example 1: An Inclusive and Democratized Low-Code Approach of Using AI for Global Sustainable Development 4.1.1 Environmental Performance Index The world is currently undergoing an era of data-driven environmental policymaking. Stakeholders and policymakers are increasingly interested in adopting evidence- based data to inform decision-making as environmental policymaking has gradually transitioned away from its old practices at the end of the twentieth century. In response to these demands, the Environmental Performance Index (EPI) was established in partnership with the World Economic Forum by academics and policy specialists at Yale University’s Yale Center for Environmental Law and Policy and Columbia University’s Center for International Earth Science Information Network (CIESIN). The EPI (Yale University 2018) was regarded as the global metric for measuring environmental sustainability. EPI provides a snapshot of the environmental performance of 180 countries using 24 performance indicators across 10 problem areas. Countries are graded on a scale of 0–100. Nations with a long history of natural resource conservation, public health protection, and decoupling of greenhouse gas (GHG) emissions from economic activity will receive a high score. On the other hand, nations with low EPI ratings suggest that they need to consider the importance of national sustainability initiatives, particularly in the areas of biodiversity conservation, air quality improvement, and GHG emission reduction. It is worth noting that effective governance emerges as a necessary condition for balancing these disparate facets of sustainability. The EPI brings attention to the areas that need the most attention from policymakers. These indicators also shed light on the best practices of high-performing nations and serve as a guide for countries aspiring to be sustainability leaders. 4.1.2 How Unified Analytics of Sustainability Indicators (Related to EPI and SDGI) Can Inform Education and Policymaking EPI data offered by NASA’s Socioeconomic Data and Applications Center (2018) may be used to facilitate discussions about AI and sustainability-related issues through descriptive analytics and predictive simulations. For a country to accomplish the Sustainable Development Goals (SDGs), a strong EPI score is a significant factor to the achievement of those objectives. Governments are under increasing pressure to defend their performance on sustainability management and pollution control as measured by the EPI and Sustainable Development Goals Index (SDGI). The SDGI exemplifies this dedication by
placing measurements at the center of the policymaking process when defining international objectives and assessing progress toward the SDGs. The EPI metrics, in combination with the SDGI, serve as a data-driven and empirical approach to environmental preservation, based on rigorous data analytics and statistical analysis. These indicators enable policymakers to analyze trends, identify best practices, highlight policy successes and failures, and maximize the returns on investments in environmental protection. The SDGI is the first global study to analyze how well nations are doing in meeting the SDGs. Produced annually by the Bertelsmann Stiftung and the Sustainable Development Solutions Network (SDSN), the SDGI Report examines the positions of 156 countries on the 17 SDGs. It also provides guidance on which issues should be prioritized among the SDG targets expected to be achieved by 2030 from an environmental perspective. The data came from international organizations (e.g., the World Health Organization, the World Bank, the Food and Agriculture Organization of the United Nations, the International Labour Organization, the United Nations International Children's Emergency Fund, the Organization for Economic Co-operation and Development), non-governmental organizations (e.g., Oxfam and the Tax Justice Network), household surveys (e.g., the Gallup World Poll), and peer-reviewed journals. The results of the data analysis revealed that nations with strong EPI results also ranked well on the SDGI, which is a measure of developmental progress. The data also revealed a positive relationship between GDP per capita and the EPI: nations with higher GDP per capita achieved higher EPI ranks. Combining data on environmental performance into composite scores and establishing a worldwide ranking of nations has proven effective in influencing policy agendas in many ways. Supporting stronger global data systems, it seems, will be critical to better management of sustainable development concerns in the coming years; with such systems, environmental policymaking can be conducted in a more informed, focused, and successful manner. The EPI is a data-driven and empirical approach to environmental policymaking. It is based on 24 performance indicators across 10 issue categories: air quality, water and sanitation, heavy metals, biodiversity and habitat, forests, fisheries, climate and energy, air pollution, water resources, and agriculture. To estimate how far they are from achieving the required environmental objectives, policymakers may compare their EPI scores against these measures. To achieve a balance between environmental health and ecosystem vitality, policymakers must be able to recognize trends, potential issues, and best practices and to optimize returns from environmental investments. Humanity's survival depends on the achievement of sustainable development (Griggs et al. 2013). The utilization of primary socio-environmental data for analysis is critical for informing policymakers on sustainable development decision-making, especially in developing countries. Non-computer scientists can also utilize artificial intelligence to assess EPI data relevant to sustainability by employing a low-code and inclusive approach.
Fig. 1 Bayesian predictive model for the analysis of global sustainable development
It is possible to employ this human-centered approach to probabilistic thinking as a cognitive scaffold to inspire analysts to ask more questions and to provide decision-making support for sustainable development policymaking. Environmental health and ecosystem vitality performance indicators are among the metrics included in the 2018 EPI, which covers 180 countries (Gupta and Vegelin 2016). The data from the 2018 EPI were subjected to an inclusive and democratized low-code/no-code AI-enabled analysis by the first author (see Fig. 1). The results revealed hidden tensions between the two fundamental dimensions of sustainability: (1) environmental health, which improves with economic growth and increasing affluence, and (2) ecosystem vitality, which deteriorates as a result of industrialization and urbanization.
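An open-source analogue of such an analysis could look roughly like the sketch below; the file name, the use of pgmpy, and the learned-structure approach are assumptions for illustration (exact class names may vary across library versions), not the tooling actually used to produce Fig. 1.

```python
# A hedged sketch of an EPI-style analysis with a learned network structure.
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore, MaximumLikelihoodEstimator
from pgmpy.models import BayesianNetwork

epi = pd.read_csv("epi_2018_discretized.csv")   # hypothetical, pre-discretized table

# Learn a plausible dependency structure among the indicators, then fit it.
dag = HillClimbSearch(epi).estimate(scoring_method=BicScore(epi))
model = BayesianNetwork(dag.edges())
model.add_nodes_from(epi.columns)               # keep any unconnected indicators
model.fit(epi, estimator=MaximumLikelihoodEstimator)

# Inspect which indicators the search linked, e.g. environmental health,
# ecosystem vitality and the overall EPI score.
print(sorted(model.edges()))
```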
4.2 Example 2: An Inclusive and Democratized Low-Code Approach of Using AI for Ameliorating Malnutrition
The World Health Organization and the World Bank have highlighted malnutrition as one of the world's most critical but least-addressed sustainability challenges (Briend et al. 2006). Both over- and undernutrition can contribute to malnutrition, which is defined as a malfunction or imbalance in the body's ability to absorb and utilize nutrients (Shrimpton and Rokx 2012). A country's economic well-being can be negatively affected by the double burden of malnutrition, and the economic costs of malnutrition continue to climb as its burden grows (Delisle 2008). Malnutrition is a public health problem, but it also presents a rare opportunity for coordinated and integrated action on malnutrition in all its forms. The recognition of the "double burden" of malnutrition has been a significant motivator for achieving major global policy and program objectives. In recent decades, diet-related epidemiology has seen major changes because of a changing global nutrition environment shaped by globalization, population upheavals, and economic development. The double burden of malnutrition that many nations now confront is defined by the coexistence of undernutrition and obesity or diet-related noncommunicable illnesses (Prentice 2018). In many nations, these various forms of malnutrition coexist at the national and family levels, as well as throughout the life span. According to the 2018 Global Nutrition Report (2018), approximately two billion individuals globally are overweight or obese, while another two billion are deficient in micronutrients. Around 38.3 million children under the age of 5 are overweight, 150.8 million are stunted, and another 50.5 million suffer from wasting because of malnutrition. For individuals, their families, and nations, the developmental, economic, social, and medical consequences of this worldwide burden of malnutrition are severe and long-lasting. Today, roughly one in every three people worldwide suffers from some form of malnutrition, including wasting, stunting, vitamin and mineral deficiencies, overweight or obesity, and diet-related noncommunicable illnesses. Nutrition-related factors account for roughly 45% of mortality in children under the age of 5 (mostly due to malnutrition), and low- and middle-income countries are now seeing a concurrent increase in juvenile obesity (Zhang et al. 2016). Both health and economic prosperity depend on nutrition. At the same time, both malnutrition and obesity-related disorders contribute significantly to the disease burden in these countries. Individuals and communities often face unsustainable economic expenses, which act as a severe impediment to economic and social growth. Malnutrition harms individual health, resulting in higher healthcare expenses and decreased work productivity, which in turn can prolong a cycle of poverty due to poor health. Malnutrition's double burden therefore has a significant and detrimental economic effect on people and society, and as the burden continues to grow, so does its economic cost. While the double burden of malnutrition is a huge public health concern, it also presents an unprecedented opportunity for alignment and collaboration in the fight
against malnutrition in all of its manifestations. The recognition of the double burden of malnutrition should be seen as a motivator for tackling critical global objectives via policy and program initiatives. Identifying effective healthcare strategies is critical for evaluating the performance and planning of healthcare delivery. Understanding the relationships between population health data, economic indicators, and access to health services is critical for policymakers assessing the consequences of evolving healthcare delivery systems. Unmet healthcare requirements may be particularly acute for vulnerable population groups, including children, the elderly, and pregnant women. Studies addressing the association between socioeconomic position, gender disparities in illness incidence, and access to healthcare have historically influenced policymaking (Adler et al. 1993) and will continue to do so. There is a clear need for better nutrition, health, and population data to advise policymakers on a wide range of issues pertaining to public health planning, healthcare reform, and healthcare delivery assessments. To support this effort, the World Bank has made information on malnutrition, healthcare, and demographic statistics freely accessible (World Bank 2019). Even though the World Bank's dataset contains data from all countries, analysts (e.g., healthcare professionals, policymakers, and researchers) using frequentist approaches that employ null hypothesis significance testing (NHST) may encounter statistical insignificance because the data were aggregated by year at the global system level (e.g., there are only 19 rows of data from 2001 to 2018). Analysts may not be able to predict with precision the repercussions, effectiveness, appropriateness, and costs of treatment for specific sectors of the population or for various healthcare delivery and remuneration structures. When this is the case, they are unable to make confident statements regarding the benefit of healthcare investments for population subgroups, regions, or countries. In theory, features in nutrition, health, and population data may be gleaned by ad hoc analysis of a variety of sources, including surveys, illness registries, computerized patient records, and electronic financial transactions for health insurance claims. In reality, however, no single source provides information pertinent to every research question. To overcome this challenge, a simple AI-based approach is used to demonstrate how AI can aid in the intuitive application of human-centric probabilistic reasoning to interpret the counterfactual results generated by predictive models. AI-based analytics can provide a fairly complete source of information for determining regional health requirements, assessing disease trends, and forecasting healthcare expenditure patterns. This may be accomplished by using AI-based analytics to forecast information about healthcare trends, prices, and the efficacy and quality of healthcare services. Additionally, AI-based analytics may help enhance treatment quality by making data accessible to institutions and user groups for use in quality improvement programs and regional health planning. AI-powered analytics may also be beneficial in resolving policy issues and political debates around healthcare reform. Non-computer scientists can employ human-centered, explainable AI to simulate the dynamics between hunger, health, and population indices. Bayesian predictive
modeling can be used to show how human-centered probabilistic reasoning can examine the dynamics of global malnutrition and then optimize conditions to attain the best-case scenario (see Fig. 2). The worst-case scenario can also be simulated and used to alert stakeholders so that they can act to prevent it from occurring. As a result, vulnerable populations could potentially benefit from enhanced policies targeted at improving health and nutrition. Computer scientists are not the only ones who can benefit from AI in the development of predictive models for hunger, health, and population data.
Fig. 2 Bayesian predictive model for ameliorating global malnutrition
Using this method, some factors can be held constant while other variables are adjusted to envision what-if scenarios and to forecast at-risk conditions. Policymakers could then take pre-emptive measures to prevent the worst-case scenario from materializing.
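In the same assumed pgmpy setting as the earlier sketches, "optimizing conditions to attain the best-case scenario" can be approximated by sweeping the adjustable variables and keeping the configuration that maximizes the probability of the desired outcome; the model object and all variable names below are hypothetical.

```python
# Sweeping hypothetical what-if settings on a fitted network to find the
# configuration that maximizes a desired nutrition outcome.
from itertools import product
from pgmpy.inference import VariableElimination

infer = VariableElimination(nutrition_model)   # assumed fitted beforehand

settings = {
    "Health_expenditure": ["low", "high"],
    "Food_access":        ["poor", "good"],
}

best = None
for combo in product(*settings.values()):
    evidence = dict(zip(settings.keys(), combo))
    q = infer.query(variables=["Child_stunting"], evidence=evidence)
    p_low = q.values[q.state_names["Child_stunting"].index("low")]
    if best is None or p_low > best[1]:
        best = (evidence, p_low)

print("Best-case setting:", best[0], "with P(Child_stunting = low) =", round(best[1], 3))
```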
4.3 Example 3: An Inclusive and Democratized Low-Code Approach of Using AI for Financial Inclusion
According to the World Bank (2020), financial inclusion is critical because it reduces poverty and boosts economic development. Financial inclusion is especially important in developing countries in Asia, Africa, and South America. In these regions, the people who are most in need of financial inclusion, such as low-income families and small businesses, have little or no access to banking and financial services; even where financial services are widely available, certain segments of consumers may not be aware of them or educated in using them. Therefore, one of the goals of financial inclusion is to make basic financial products and services, such as money transfers, remittances, deposits, loans, and insurance, accessible and affordable to poor people and to small and micro businesses. More affordable access to formal financial services for these groups may enhance their overall well-being and boost economic development. AI can play a critical role in helping financial service providers (FSPs) better understand the expectations of prospective customers as the need for inclusive financial solutions grows (Fan and Zhang 2017). AI solutions should help FSPs place the emphasis on customers, rather than on the product or the FSPs' profits. Leveraging the speed and accuracy with which AI can analyze data, FSPs could better understand the needs of their potential consumers. Take low-income families as an example: it is imperative that they have access to affordable finance, as financial inclusion has been found to empower low-income families to better withstand the effects of an economic downturn (Yin et al. 2019). Because the poor often lack collateral, FSPs may provide loans or other financial services to micro-enterprises and low-income groups that regular banks cannot support. FSPs can help by granting these families access to financial credit so that they can be better prepared for unexpected events. Individuals previously neglected by financial institutions may benefit from increased investment in education and healthcare and enjoy a higher quality of life. Small companies might benefit from easier access to financial services like micro loans and insurance. To better serve their consumers, FSPs may employ AI to evaluate customer data, generating projections that offer prospective consumers recommendations for new products and ideas (see Fig. 3). Through the use of technologies that allow AI and financial analysts to work together to better understand their customers and their habits, FSPs may even find new market possibilities. Building on the FSPs' deep understanding of existing customer segmentation, this might lead to more inquiries and theories for future human-inspired AI explorations of novel ways to serve customers better.
Fig. 3 Predictive model for advancing financial inclusion and reducing poverty
The willingness and readiness of FSPs to form and then test expectations about how effective and how successful such AI-driven approaches may be will be a crucial component in the future. For example, new data patterns, unexplored channel possibilities, or new target audiences may be discovered by using AI in the data analysis process. When AI uncovers such incremental efficiencies, human analysts can raise questions that lead to adjustments in marketing tactics that better serve prospective customers. FSPs that are willing to deliver creative solutions to meet customer needs should be encouraged to take on a broader degree of corporate change. This means that FSPs with organizational cultures that embrace change and that are willing to invest in their employees and technology will be more likely to empower their teams to be future-ready and to lead their businesses toward successful transformations. As a positive consequence, both FSPs and prospective customers may benefit from the use of AI. Nevertheless, despite AI's potential, humans must take the lead in its implementation, rather than letting the technology dictate their actions.
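A toy sketch of the kind of customer-level projection described above is shown below; the column names, toy records and the choice of pgmpy are assumptions for illustration, not an FSP's actual data or model.

```python
# Scoring hypothetical prospective customers with a network fitted on
# invented historical uptake records.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator

history = pd.DataFrame({
    "Income_band":  ["low", "low", "mid", "mid", "low", "mid"],
    "Mobile_money": ["yes", "no",  "yes", "no",  "yes", "yes"],
    "Loan_uptake":  ["yes", "no",  "yes", "no",  "yes", "yes"],
})

model = BayesianNetwork([("Income_band", "Loan_uptake"),
                         ("Mobile_money", "Loan_uptake")])
model.fit(history, estimator=MaximumLikelihoodEstimator)

# Predict the missing Loan_uptake column for new prospects.
prospects = pd.DataFrame({"Income_band":  ["low", "mid"],
                          "Mobile_money": ["yes", "no"]})
print(model.predict(prospects))
```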
4.4 Example 4: An Inclusive and Democratized Low-Code Approach of Using AI for Improving Food Security
One of the most critical sustainability concerns is ensuring food security for a projected population of over nine billion by 2050 while mitigating further ecological damage (UN-DESA 2015). Additionally, food consumption habits are shifting fast in tandem with rising affluence, especially among the world's emerging middle class (Vranken et al. 2014). In view of this trend, it is necessary to reckon with natural resource scarcity, uncertain agricultural economics, and substantial technological and sociocultural developments such as diet "Westernization" and climate change. The majority of the world's food systems are precarious and susceptible to changes in climate and weather. According to the United Nations Food and Agriculture Organization (FAO), about a billion people lack sufficient calories and more than two billion lack sufficient nutrients (Food and Agriculture Organization of the United States, International Fund for Agricultural Development, and World Food Programme 2014). Even though two billion people are overweight or obese, many of them continue to suffer from nutritional deficiencies or imbalances (World Health Organization 2014). Global population growth is just one aspect of the issue. Food preferences have changed over the years; most notably, there has been an increase in the demand for animal products, which may have negative impacts on environmental health (Kharas 2010). Apart from health concerns, industrialized food systems may contribute to climate change through greenhouse gas emissions and threaten biodiversity and food security (Ingram 2011; Tilman and Clark 2014). Crop output is affected by climate change (Lobell et al. 2011). Freshwater supplies have been depleted in various parts of the globe, mostly as a result of irrigation overuse (Elliott and Elliott 2010). The increased frequency and severity of harsh weather events, particularly floods and droughts, may have negative impacts not only on crop production but also on food storage, delivery, and safety (Miraglia et al. 2009). These variables will also affect the cost of food. In light of the different demands on food systems, the global food system's existing and prospective difficulties must be addressed. The challenge is to offer enough nutrition while minimizing environmental degradation and without damaging the ecosystems that support farmers' livelihoods. Climate and other environmental factors affecting agricultural systems may have a considerable impact, eroding the natural resources upon which our food security is based. While improvements in a variety of areas, such as increased yields, animal feed output, aquaculture production, and labor productivity, have contributed to addressing global food security, they may have a detrimental influence on the environment. New policy initiatives must be introduced to mitigate environmental impacts while improving health outcomes and sustaining the food systems' businesses and livelihoods. To effect change in food systems, dialogue and collaborations between all
players in the food system, including policymakers, farmers, processors, retailers, and consumers, are essential. With few resources available and the looming climate crisis, overcoming food security risks requires a paradigm shift in thinking. Rather than viewing nations as separate food producers, we must analyze the dynamics affecting the security of the global food system. At the local, national, and global levels, dialogue and cooperation with participants in the food system, including growers, manufacturers, distributors, and others, are necessary. While collaborative efforts between businesses and individuals are critical, prospective methods should prioritize finding synergies between climate change and environmental concerns, albeit with inescapable trade-offs that need careful management. To facilitate the shift from business as usual to achieving greater food security, holistic techniques might be used. One of the major issues of the twenty-first century is the threat to food security posed by climate change. Scientific investigations have shown that climate change has a detrimental influence on food security (Lobell et al. 2011). Although gradual changes such as rising temperatures and sea levels will have a substantial impact over the next several decades, farmers must also contend with changing weather conditions and the growing frequency and intensity of severe weather events (IPCC 2012). Unpredictability is perhaps the most pernicious concern of climate change: it is extremely difficult to forecast the weather with high precision, even one season ahead. As pressures on the food system increase, food security is becoming a more pressing issue worldwide. A probabilistic technique based on human-centric artificial intelligence may be utilized to develop a prediction model from the quantitative and qualitative data of the Global Food Security Index (GFSI). People who are not formally trained in mathematics or computer science can benefit from this basic probabilistic technique because of its inherent simplicity. Predictive modeling of the GFSI dataset can be used in an AI-based method to identify the links between food cost, food availability, food quality and safety, and the resilience of natural resources (see Fig. 4). Computer simulations may be utilized to provide predictions of favorable and unfavorable environmental circumstances. These future scenarios are important for informing policymakers and stakeholders from a variety of domains, allowing them to make choices that are beneficial to global food security. The predictive model illustrates that progress on global food security is complicated by uncertain supply, emerging crop and animal diseases, and unforeseeable economic, political, climatological, and biological developments. Part of this could be due to the demand for agricultural products by wealthy countries. Here, the focus is on agricultural production, notably in terms of environmental sustainability, economic viability, market involvement, and social conscience. Additionally, the findings are relevant to the industrial sector, namely the agri-food sector in industrialized nations. This model's findings can potentially pique the interest of professionals in fields such as consumer behavior, traditional food consumption, economics, sustainable agricultural production, and agricultural productivity.
Fig. 4 Predictive analytics of how simulated changes in the multiple parameters could influence global food security
Agricultural and rural computerization is critical for agriculture's progress, and there are currently several approaches for developing models in agricultural information systems. At the European Union (EU) level, the European Commission is urging member states to seize the opportunities presented by emerging technology and digitalization in agriculture in order to increase the sector's production and profitability while simplifying farmers' daily tasks. Food security on a global scale continues to be a challenge. Crop yields have decreased in a number of locations because of water shortages. While agroecological techniques may boost yields, increased investment and policy changes have the potential to considerably improve food security in underdeveloped countries (Rosegrant 2003). Climate change's impacts on agricultural output can, in turn, undermine food security. Unpredictability in short-term supply may jeopardize the sustainability of whole food systems, and food insecurity is anticipated to be exacerbated by climate instability and weather changes. As a result, considerable mitigating efforts in favor of a climate-smart food system are required (Wheeler and von Braun 2013). Food researchers and policymakers who are not computer scientists may use AI to support decision-making in developing more secure, future-proof food security systems. The approach proffered here adds considerably to the existing body of knowledge by presenting a user-friendly strategy for democratizing AI adoption: it enables inexperienced users of AI to conduct research analyses utilizing probabilistic reasoning. Using this strategy in computer models, controlled investigations may be conducted.
Certain variables may be held constant while others are altered to mimic a virtually unlimited number of possible situations, making it feasible to simulate what-if scenarios. This permits predictive inferences about the circumstances needed to optimize positive results, as well as forecasts of the conditions under which unfavorable outcomes for global food security could be avoided.
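A self-contained toy illustration of "hold some variables constant, vary others" on GFSI-style categories is sketched below; the structure, states and probabilities are invented for illustration and are not taken from the GFSI.

```python
# Holding one hypothetical GFSI-style driver fixed while varying another,
# and reading off the simulated effect on food security.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Affordability", "Food_security"),
                         ("Natural_resources", "Food_security")])
model.add_cpds(
    TabularCPD("Affordability", 2, [[0.5], [0.5]],
               state_names={"Affordability": ["low", "high"]}),
    TabularCPD("Natural_resources", 2, [[0.5], [0.5]],
               state_names={"Natural_resources": ["stressed", "resilient"]}),
    TabularCPD("Food_security", 2,
               [[0.9, 0.6, 0.5, 0.2],   # P(insecure | parents)
                [0.1, 0.4, 0.5, 0.8]],  # P(secure   | parents)
               evidence=["Affordability", "Natural_resources"],
               evidence_card=[2, 2],
               state_names={"Food_security": ["insecure", "secure"],
                            "Affordability": ["low", "high"],
                            "Natural_resources": ["stressed", "resilient"]}),
)

infer = VariableElimination(model)
for resources in ["stressed", "resilient"]:           # varied
    q = infer.query(variables=["Food_security"],
                    evidence={"Affordability": "low",  # held constant
                              "Natural_resources": resources})
    print(resources, q)
```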
5 Conclusion
AI proponents may opt to employ simulations of different variable combinations to replicate in silico, via predictive analyses, what could not be readily performed in the actual world. However, writing code for AI algorithms or developing software is not always simple. By using the low-code/no-code AI-based approach described in this chapter, it is possible to simulate numerous scenarios to determine the conditions for the best and worst outcomes of various issues, such as sustainable development, the alleviation of malnutrition, the advancement of financial inclusion to reduce poverty, and the improvement of food security. Learning about socially beneficial problems may help people of all ages comprehend that these issues are global, not localized. It is also an opportunity to persuade them to actively advocate more peaceful, tolerant, and inclusive behaviors via the use of conceptual abstractions, problem-solving heuristics, and data analysis. Analysts using AI for social good might adapt the examples in this chapter to their own data at the local, national, or global level, using user-friendly software programs like BayesiaLab (developed by Bayesia), GeNIe (developed by BayesFusion), Netica (developed by Norsys), or Bayes Server. As shown in this study, AI may be democratized by making it available to non-computer-science analysts who are interested in applying it to their own work. More data explorations with AI and more human-centric insights for guiding policymakers are possible with AI-augmented thinking. And here this discussion closes, not with finality, but as a nod to the profundity of the global sustainable development issues that affect us all.
References
Adler, N.E., T. Boyce, and M.A. Chesney. 1993. Socioeconomic Inequalities in Health: No Easy Solution. Journal of the American Medical Association 269: 3140–3145. Bayes, Thomas. 1763. A Letter from the Late Reverend Mr. Thomas Bayes, F. R. S. to John Canton, M. A. and F. R. S. In The Royal Society, Philosophical Transactions (1683–1775), vol. 53, 269–271. London: The Royal Society Publishing. https://www.jstor.org/stable/105732. Briend, André, Claudine Prudhon, Zita Weise Prinzo, Bernadette M.E.G. Daelmans, and John B. Mason. 2006. Putting the Management of Severe Malnutrition Back on the International Health Agenda. Food and Nutrition Bulletin 27 (3_suppl3): S3–S6. https://doi.org/10.1177/15648265060273S301.
Chang, Young-Hyun, and Chang-Bae Ko. 2017. A Study on the Design of Low-Code and No Code Platform for Mobile Application Development. International Journal of Advanced Smart Convergence 6 (4): 50–55. https://doi.org/10.7236/IJASC.2017.6.4.7. Correa, M., C. Bielza, and J. Pamies-Teixeira. 2009. Comparison of Bayesian Networks and Artificial Neural Networks for Quality Detection in a Machining Process. Expert Systems with Applications 36 (3): 7270–7279. https://doi.org/10.1016/j.eswa.2008.09.024. Cowell, R.G., A.P. Dawid, S.L. Lauritzen, and D.J. Spieglehalter. 1999. Probabilistic Networks and Expert Systems: Exact Computational Methods for Bayesian Networks. New York: Springer. Delisle, Hélène F. 2008. Poverty: The Double Burden of Malnutrition in Mothers and the Intergenerational Impact. Annals of the New York Academy of Sciences 1136 (1): 172–184. https://doi.org/10.1196/annals.1425.026. Elliott, J.M., and J.A. Elliott. 2010. Temperature Requirements of Atlantic Salmon Salmo Salar, Brown Trout Salmo Trutta and Arctic Charr Salvelinus Alpinus: Predicting the Effects of Climate Change. Journal of Fish Biology 77 (8): 1793–1817. https://doi.org/10.1111/j.1095-86 49.2010.02762.x. Fan, Zhaobin, and Ruohan Zhang. 2017. Financial Inclusion, Entry Barriers, and Entrepreneurship: Evidence from China. Sustainability 9 (2): 203. https://doi.org/10.3390/su9020203. Floridi, Luciano, Josh Cowls, Thomas C. King, and Mariarosaria Taddeo. 2020. How to Design AI for Social Good: Seven Essential Factors. Science and Engineering Ethics 26 (3): 1771–1796. https://doi.org/10.1007/s11948-020-00213-5. Food and Agriculture Organization of the United States, International Fund for Agricultural Development, and World Food Programme. 2014. The State of Food Insecurity in the World 2014. Strengthening the Enabling Environment for Food Security and Nutrition. http://www. fao.org/publications/sofi/2014/en/. ‘Global Nutrition Report, 2018’. 2018. Geneva: World Health Organization. https://globalnutritionreport.org/reports/global-nutrition-report-2018/. Griggs, David, Mark Stafford-Smith, Owen Gaffney, Johan Rockström, Marcus C. Öhman, Priya Shyamsundar, Will Steffen, Gisbert Glaser, Norichika Kanie, and Ian Noble. 2013. Sustainable Development Goals for People and Planet. Nature 495 (7441): 305–307. https:// doi.org/10.1038/495305a. Gupta, Joyeeta, and Courtney Vegelin. 2016. Sustainable Development Goals and Inclusive Development. International Environmental Agreements: Politics, Law and Economics 16 (3): 433–448. https://doi.org/10.1007/s10784-016-9323-z. How, Meng-Leong. 2019. Future-Ready Strategic Oversight of Multiple Artificial Superintelligence- Enabled Adaptive Learning Systems via Human-Centric Explainable AI-Empowered Predictive Optimizations of Educational Outcomes. Big Data and Cognitive Computing 3 (3): 46. https:// doi.org/10.3390/bdcc3030046. How, Meng-Leong, and Yong Jiet Chan. 2020. Artificial Intelligence-Enabled Predictive Insights for Ameliorating Global Malnutrition: A Human-Centric AI-Thinking Approach. AI 1 (1): 68–91. https://doi.org/10.3390/ai1010004. How, Meng-Leong, and Wei Loong David Hung. 2019. Harnessing Entropy via Predictive Analytics to Optimize Outcomes in the Pedagogical System: An Artificial Intelligence-Based Bayesian Networks Approach. Education Sciences 9 (2): 158. https://doi.org/10.3390/educsci9020158. How, Meng-Leong, Yong Jiet Chan, and Sin-Mei Cheah. 2020a. Predictive Insights for Improving the Resilience of Global Food Security Using Artificial Intelligence. 
Sustainability 12 (15): 6272. https://doi.org/10.3390/su12156272. How, Meng-Leong, Sin-Mei Cheah, Yong-Jiet Chan, Aik Cheow Khor, and Eunice Mei Ping Say. 2020b. Artificial Intelligence-Enhanced Decision Support for Informing Global Sustainable Development: A Human-Centric AI-Thinking Approach. Information 11 (1): 39. https://doi.org/10.3390/info11010039. How, Meng-Leong, Sin-Mei Cheah, Aik Cheow Khor, and Yong Jiet Chan. 2020c. Artificial Intelligence-Enhanced Predictive Insights for Advancing Financial Inclusion: A Human-Centric
AI-Thinking Approach. Big Data and Cognitive Computing 4 (2): 8. https://doi.org/10.3390/ bdcc4020008. Ingram, John. 2011. A Food Systems Approach to Researching Food Security and Its Interactions with Global Environmental Change. Food Security 3 (4): 417–431. https://doi.org/10.1007/ s12571-011-0149-9. IPCC. 2012. Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation. In A Special Report of Working Groups I and II of the Intergovernmental Panel on Climate Change, 582. Cambridge/New York: Cambridge University Press. https://www.ipcc. ch/pdf/special-reports/srex/SREX_Full_Report.pdf. Jensen, F.V. 1999. An Introduction to Bayesian Networks. New York: Springer. Kaplan, David. 2016. Causal Inference with Large-Scale Assessments in Education from a Bayesian Perspective: A Review and Synthesis. Large-Scale Assessments in Education 4 (1): 7. https://doi.org/10.1186/s40536-016-0022-6. Kharas, H. 2010. The Emerging Middle Class in Developing Countries. OECD Report No. 1815–1949. Korb, K.B., and A.E. Nicholson. 2010. Bayesian Artificial Intelligence. London: Chapman & Hall/CRC. Levy, Roy. 2016. Advances in Bayesian Modeling in Educational Research. Educational Psychologist 51 (3–4): 368–380. https://doi.org/10.1080/00461520.2016.1207540. Lobell, D.B., W. Schlenker, and J. Costa-Roberts. 2011. Climate Trends and Global Crop Production Since 1980. Science 333 (6042): 616–620. https://doi.org/10.1126/science.1204531. Miraglia, M., H.J.P. Marvin, G.A. Kleter, P. Battilani, C. Brera, E. Coni, F. Cubadda, et al. 2009. Climate Change and Food Safety: An Emerging Issue with Special Focus on Europe. Food and Chemical Toxicology 47 (5): 1009–1021. https://doi.org/10.1016/j.fct.2009.02.005. Myers, Raymond H., Douglas C. Montgomery, and Christine M. Anderson-Cook. 2009. Response Surface Methodology: Process and Product Optimization Using Designed Experiments. 3rd ed. Somerset: Wiley. Pigozzi, Mary Joy. 2006. A UNESCO View of Global Citizenship Education. Educational Review 58 (1): 1–4. https://doi.org/10.1080/00131910500352473. Prentice, A.M. 2018. The Double Burden of Malnutrition in Countries Passing Through the Economic Transition. Annals of Nutrition & Metabolism 72 (suppl 3): 47–54. https://doi. org/10.1159/000487383. Rosegrant, M.W. 2003. Global Food Security: Challenges and Policies. Science 302 (5652): 1917–1919. https://doi.org/10.1126/science.1092958. Shannon, Claude Elwood. 1953. The Lattice Theory of Information. IRE Professional Group on Information Theory 1 (1): 105–107. https://doi.org/10.1109/TIT.1953.1188572. Shrimpton, Roger, and Claudia Rokx. 2012. The Double Burden of Malnutrition: A Review of Global Evidence. Washington, DC: World Bank. https://doi.org/10.1596/27417. Socioeconomic Data and Applications Center (sedac). 2018. Environmental Performance Index, 2018 Release. https://sedac.ciesin.columbia.edu/data/set/ epi-environmental-performance-index-2018/data-download. Sperotto, Anna, Josè Luis Molina, Silvia Torresan, Andrea Critto, Manuel Pulido-Velazquez, and Antonio Marcomini. 2019. Water Quality Sustainability Evaluation Under Uncertainty: A Multi-Scenario Analysis Based on Bayesian Networks. Sustainability 11 (17): 4764. https:// doi.org/10.3390/su11174764. Taddeo, Mariarosaria, and Luciano Floridi. 2018. How AI Can Be a Force for Good. Science 361 (6404): 751–752. https://doi.org/10.1126/science.aat5991. The World Bank. 2020. Financial Inclusion. https://www.worldbank.org/en/topic/ financialinclusion/overview. Tilman, David, and Michael Clark. 2014. 
Global Diets Link Environmental Sustainability and Human Health. Nature 515 (7528): 518–522. https://doi.org/10.1038/nature13959. Tsamardinos, Ioannis, Constantin F. Aliferis, and Alexander Statnikov. 2003. Time and Sample Efficient Discovery of Markov Blankets and Direct Causal Relations. In Ninth ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining – KDD ’03, 673. https:// doi.org/10.1145/956750.956838. UN-DESA. 2015. World Population Prospects: The 2015 Revision, Key Findings and Advance Tables. http://esa.un.org/unpd/wpp/publications/files/key_findings_wpp_2015.pdf. van de Schoot, Rens, David Kaplan, Jaap Denissen, Jens B. Asendorpf, Franz J. Neyer, and Marcel A.G. van Aken. 2014. A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research. Child Development 85 (3): 842–860. https://doi.org/10.1111/ cdev.12169. Vranken, Liesbet, Tessa Avermaete, Dimitrios Petalios, and Erik Mathijs. 2014. Curbing Global Meat Consumption: Emerging Evidence of a Second Nutrition Transition. Environmental Science & Policy 39 (May): 95–106. https://doi.org/10.1016/j.envsci.2014.02.009. Wheeler, T., and J. von Braun. 2013. Climate Change Impacts on Global Food Security. Science 341 (6145): 508–513. https://doi.org/10.1126/science.1239402. World Bank. 2019. Health Nutrition and Population Statistics. https://datacatalog.worldbank.org/ dataset/health-nutrition-and-population-statistics. World Health Organization. 2014. Countries Vow to Combat Malnutrition Through Firm Policies and Actions. http://www.who.int/mediacentre/news/releases/2014/icn2-nutrition/en/. Yale University. 2018. Full Dataset of the Environment Performance Index. https://sedac.ciesin. columbia.edu/data/set/epi-environmental-performance-index-2018/data-download. Yin, Xuluo, Xu Xuan, Qi Chen, and Jiangang Peng. 2019. The Sustainable Development of Financial Inclusion: How Can Monetary Policy and Economic Fundamental Interact with It Effectively? Sustainability 11 (9): 2524. https://doi.org/10.3390/su11092524. Zhang, N., L. Bécares, and T. Chandola. 2016. Patterns and Determinants of Double-Burden of Malnutrition Among Rural Children: Evidence from China. PLoS One 11 (7): e0158119. https://doi.org/10.1371/journal.pone.0158119.
Ethical AI: The European Approach to Achieving the SDGs Through AI
Valeria Benedetti del Rio
Abstract The European legislative framework for the development and regulation of artificial intelligence (AI) is beginning to take shape: the European Commission published on April 21st, 2021, a proposal for a regulation laying down harmonised rules on artificial intelligence, and various position papers and non-legislative acts of other European bodies are paving the way for the EU to take the lead in the development of a legislative framework for AI. As the European Commission clearly summarised, AI should be “a tool for people and be a force for good in society with the ultimate aim of increasing human well-being”. Although not a member of the United Nations, the EU takes part in its activities and shares the commitments of the “2030 Agenda for Sustainable Development”. Indeed, the EU works towards the achievement of the 17 Sustainable Development Goals (SDGs) both at Union and at member states’ level. It is therefore to be expected (as well as desirable) that the new legislative framework will be supportive of the achievement of the SDGs. This paper will describe the required characteristics of AI according to existing European legislative and non-legislative tools and will analyse which elements contribute to the achievement of SDGs and which aspects can, instead, hinder their full completion. Attention will be given to aspects such as the auditability of AI reasoning, equity of potential outcomes, human-centricity and the protection of human rights. Keywords Artificial intelligence · Sustainable Development Goals The European legislative framework for the development and regulation of artificial intelligence (AI) is beginning to take shape: the European Commission published on April 21, 2021, a proposal for a regulation laying down harmonised rules on
artificial intelligence (the AI Regulation Proposal),1 as part of a larger and more complex AI Strategy2 directed at streamlining research as well as policy options for AI regulation. The AI Regulation Proposal is the latest piece in a series of legislative and non-legislative papers reflecting the growing interest and need, at European Union (hereafter also EU) level, to identify rules and principles governing the development and use of AI. The European Union's aim is to foster the development of a technology that is considered disruptive of our ways of living while at the same time ensuring that technological advancement does not undermine the achievements of the EU and of its citizens, notably with respect to the protection of the rights and freedoms of people within the EU. On the other hand, and from a preeminently political point of view, the AI Regulation Proposal is seen by some as "an attempt by Brussels to influence the development of AI technology",3 a way in which the European Union can influence the direction of technological innovation within its borders as well as in other regions of the world. In this paper, we will discuss the characteristics of AI according to the recent legislative developments in Europe, with a specific focus on the elements that can contribute to the achievement of the SDGs or, on the contrary, hinder their full completion.
1 AI Legal Framework in Europe: How Did Europe Get Here
The current AI Regulation Proposal represents a central piece in the development of AI in the European Union. While, as a proposal, it is expected to go through a period of redrafting and changes as part of the standard legislative process in the EU, the AI Regulation Proposal is interesting and worth analysing already in its current first-draft form. Indeed, it builds on numerous previous acts and opinions, some of which will be analysed later on, and it forms, together with the Communication on Fostering a European Approach to Artificial Intelligence4 and the Coordinated Plan with Member States,5 the AI package developed by the European Commission with the aim of turning the EU digital single market strategy into practice.
1 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, available at: EUR-Lex - 52021PC0206 - EN - EUR-Lex (europa.eu), last accessed in October 2021.
2 Further information on the EU strategy for AI is available at: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence, last accessed in October 2021.
3 The EU's approach to artificial intelligence, The International Institute for Strategic Studies, also available at: the-eus-approach-to-artificial-intelligence.pdf (iiss.org), last accessed in October 2021.
4 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – Fostering a European approach to Artificial Intelligence, available at: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM:2021:205:FIN, last accessed in October 2021.
In the following paragraphs, we will discuss the main steps that influenced the content of the AI Regulation Proposal and led the European Commission to its publication. It is not possible to analyse the AI Regulation Proposal without considering one of the earliest documents laying down the need to regulate the growing industry of AI at EU level, that is, the European Commission Communication on the AI Strategy for Europe of April 25, 2018 (AI Strategy).6 As defined by the European Commission, AI systems are "systems that show intelligent behaviour by analysing their environment, and performing various tasks with some degree of autonomy to achieve specific goals".7 In its AI Strategy, the European Commission explains that AI, although already present in people's lives, will represent an immense opportunity in the near future, in terms of market value and jobs, as well as in its ability to disrupt the internal economic market. The European Commission therefore issued the AI Strategy with the objective of preparing for the changes to come. The AI Strategy is based on three specific pillars: (i) the opportunity for the EU and its companies to be ahead of technological developments, thus encouraging investments and uptake by the public and private sectors; (ii) the need to prepare for socioeconomic changes brought about by AI, which in turn leads to the need to modernise education and the labour market in order to prepare for the technological changes; and (iii) the need to ensure that AI development happens within the boundaries of an appropriate ethical and legal framework – which the European Commission is now providing through its AI Regulation Proposal. In the brief description of the most influential steps that led to the development of a legal framework for AI in Europe, a specific mention must be reserved for the Ethics Guidelines for Trustworthy AI (Guidelines),8 issued by the High-Level Expert Group on Artificial Intelligence (AIHLEG) on April 8, 2019, which set out the fundamental attributes desirable for AI. As indicated in the Guidelines, AI should be lawful, ethical and robust, in order for it to be trustworthy and, as such, be developed, deployed and used lawfully within the European Union.
5 The Coordinated Plan on Artificial Intelligence 2021 Review is available at: https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review, last accessed in October 2021.
6 Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions – Artificial Intelligence for Europe, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN, last accessed in October 2021.
7 See Artificial Intelligence for Europe Factsheet, available at: http://ec.europa.eu/newsroom/dae/document.cfm?doc_id=51610, last accessed in September 2021.
8 Ethics guidelines for trustworthy AI, available at: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai, last accessed in October 2021.
Consequently, not all AI applications are worthy of development and deployment in the EU, but only those that, by balancing technological advancement against European values, ensure that AI is trustworthy and therefore sufficiently safe for EU citizens and companies to benefit from. In addition, to ensure that the characteristics of trustworthy AI can be effectively understood and applied to real cases, the AIHLEG included a framework within the Guidelines, translating the principles for trustworthy AI into key requirements and practical examples. These are aimed at offering direction on how to operationalise the principles of trustworthy AI encompassed in the Guidelines. Finally, we move to the AI Regulation Proposal and briefly analyse its characteristics. The AI Regulation Proposal was published by the European Commission on April 21, 2021; it comprises 85 articles and describes the specific sets of rules that apply to the development and placing on the market of AI systems, according to their level of risk. According to the proposal, an AI system may present one of four levels of risk: unacceptable, high, limited and minimal. AI that poses an unacceptable risk, by being a threat to people's rights, safety or livelihood, is unlawful according to the proposal; therefore, any application or use is forbidden. High-risk AI is instead lawful under the AI Regulation Proposal and can be employed in a number of areas, such as education, employment, law enforcement and the administration of justice; these technologies shall be subject to strict obligations before they are placed on the market, as will be further explained below. Limited-risk AI systems are those that may pose risks of manipulation of individuals, for example because they interact with people, are used to detect their emotions or generate content fed to users (such as deep fakes): these technologies are subject to specific transparency obligations under the AI Regulation Proposal. Lastly, minimal-risk AI includes examples such as AI-enabled video games or spam filters; this type of technology poses minimal or no risk to the rights and freedoms of individuals and can therefore be freely used and allowed. It is worth mentioning that the EU effort benefits from discussions that have been happening globally, as a number of position papers and non-legislative acts were issued in the past years, fostering debates and paving the way for the European Union to take the lead in the development of a legislative framework for the deployment of AI.9
9 Among these, the White Paper of the European Commission On Artificial Intelligence – A European approach to excellence and trust, available at: https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf, last accessed in October 2021; the Report on AI for Good, Global Summit, held in Geneva in 2017, available at: https://www.itu.int/en/ITU-T/AI/Documents/Report/AI_for_Good_Global_Summit_Report_2017.pdf, last accessed in October 2021.
2 The Sustainable Development Goals and Their Connections with the Development of the AI Legal Framework
As the European Commission clearly summarised, AI should be "a tool for people and be a force for good in society with the ultimate aim of increasing human well-being".10 Although the European Union is not a member of the United Nations, it takes part in its activities and shares the commitments of the "2030 Agenda for Sustainable Development", adopted in 2015 by the United Nations. The 2030 Agenda was drafted as a guideline for countries to follow to ensure peace and prosperity for people and for the planet. It is a plan intended to accompany international actions for the subsequent 15 years, and it showcases very ambitious objectives. At its heart are the 17 Sustainable Development Goals (SDGs), to be considered as urgent calls to action, an endeavour that all signatory countries agreed to bring forward. Each SDG is further divided into multiple targets, all of which help reach the desired goal. The targets, 169 in total, also play an operational role, helping countries and companies find a clearer path to follow for the achievement of the SDGs. The European Union itself works towards the achievement of the 17 SDGs, both at Union and at member state level. An example is the so-called "Fit for 55 package", a plan for the European Union to cut its carbon emissions and enact its transition to green energy.11 Considering all of the above, it is therefore to be expected, and desirable, that the legislative framework proposed by the European Union for AI will be supportive of the achievement of the SDGs. In the following paragraphs, we will analyse why AI should be considered an important tool for the achievement of the SDGs and how it can impact their accomplishment. The SDGs range across multiple areas of action, from eradicating poverty and hunger to ensuring clean energy and developing sustainable cities. The global participation and effort required to fulfil the goals is representative of how challenging the objectives of the United Nations are. And there is more to it, because governments and companies must also consider that the 17 goals are strongly intertwined: ending poverty or combating global hunger can only be achieved when strategies to improve health conditions or access to education and equal opportunities are also put in place, while also tackling the climate crisis and ensuring environmental protection and social justice for people around the world. With the ambitious objectives ahead and the limited time to reach them (as only one decade is left until the 2030 deadline), it is clear that governments should and must use all the tools at their disposal, and AI is one of them.
10 AI Regulation Proposal, page 1.
11 Further information is available at: https://www.consilium.europa.eu/en/policies/eu-plan-for-agreen-transition/, last accessed in October 2021.
AI, in fact, is seen by some as the most powerful accelerator of the SDGs; in the following paragraphs, we will describe the main characteristics of AI, in order to understand where its potential for achieving the SDGs lies. AI, broadly speaking, can be defined as any technology that is capable of perceiving, predicting, making decisions, applying logical reasoning and recognising patterns when applied to a specific situation or issue. It is precisely thanks to these powerful characteristics that it is seen as a meaningful tool for achieving, or accelerating the achievement of, the SDGs. As an example, the ability to identify historical patterns and predict highly variable outcomes can be applied to temperature and weather trends, making it possible to predict the extreme weather events related to the climate crisis that many regions are experiencing, thus reducing the vulnerability of impacted communities and ultimately reducing the impacts on poverty within these countries and villages.12 Another example is the possibility of using AI to analyse the vast amount of information related to the global food chain to help diminish food waste, which currently affects ca. 30% of global food production,13 by improving distribution and ultimately reducing hunger. Indeed, technology is already used in fridges to flag items approaching their expiry date in order to prevent household waste, or to prepare automatic grocery lists when the last piece of a product has been used; the applications of AI to the food chain, however, include the possibility of forecasting demand and modulating production accordingly, or of quickly sourcing new suppliers in case of shortages.14 Another capability of AI is that of analysing vast amounts of data and arriving at solutions or logical conclusions: these could inform decision makers and result in an improvement of social justice. The application of logical reasoning to certain issues could indeed help reduce inequalities in our societies, e.g. with respect to gender discrimination or access to education. Clearly, these are just a few examples of a long list of possibilities. It is therefore evident that AI is an incredible tool for humanity in relation to the achievement of the SDGs and, as such, must be put to good use. Indeed, there are three main characteristics of AI that make this technology a unique tool for tackling the most pressing issues of our time and ultimately reaching the SDGs: (i) AI is supported by computer systems that can deploy huge computational capacity, an asset that can also increase in scale; this means that AI can analyse an enormous amount of information in a previously unthinkable time frame. (ii) Given a set of information, AI can identify patterns and infer additional facts; this logical reasoning is a fundamental characteristic of AI in its applications towards the achievement of the SDGs.
12 Climavision, a weather forecasting service, is using radar technology, GPS technology and proprietary software to improve the timing and accuracy of weather forecasting. Further information is available at: https://therisefund.com/news/rise-fund-announces-100-million-strategicinvestment-climavision, last accessed in September 2021. In Japan, instead, technology is used to provide citizens with timely alerts for natural disasters, like earthquakes. Further information is available at: https://news.trust.org/item/20210308082452-utr0s/, last accessed in September 2021.
13 Food and Agriculture Organization of the United Nations Report on The State of Food and Agriculture, 2019, available at: http://www.fao.org/state-of-food-agriculture/2019/en/, last accessed in October 2021.
14 Further information is available at: https://www.forbes.com/sites/curtmueller/2021/08/09/ supply-chain-ai-a-food-additive-that-wont-harm-our-health/, last accessed in September 2021. 12
facts – this logical reasoning is indeed a fundamental characteristic of AI in its applications towards the achievement of SDGs. (iii) Lastly, AI can foster additional innovation and technological advancements: in fact, results of AI reasoning can be considered as the starting point for further AI reasoning, to help solve issues that we are not yet able to solve. Clearly, central to all of the characteristics identified above is the capability of AI to process information, and notably large quantities thereof. Data is at the basis of any AI application, and attention shall therefore focus on the types and characteristics of the data that are provided to AI as well as on the ways in which AI processes data – these aspects will be further explained in the paragraphs below, where we move to also considering the disadvantages that AI puts forward, as a technological tool.

As mentioned above, the characteristic of AI that differentiates it from the other available technologies is that of inferring new facts or conclusions from a given set of data, by way of identifying existing patterns within the information provided and applying them to other collections of information and data. This is the result of the application of machine learning algorithms to AI. Machine learning algorithms, in fact, are a type of computer algorithm able to improve and learn autonomously through experience and data – programmers do not determine the conclusions that this type of software should reach, as they do in other computer software based on an input-output logic. With machine learning, it is the software itself that, by applying a pre-determined model, will get to a conclusion that is unknown to the developers. The application of machine learning algorithms is at the core of AI systems, and it is the basis for their ability to learn and make predictions. However, machine learning algorithms must be trained to develop their learning models, and to do so, they must be fed with so-called training data. As we mentioned above, data is central to any AI application and, necessarily, training data are of paramount importance in the process of developing AI. The conclusions that AI may be able to reach depend on the type and characteristics of the training data that are provided to the machine learning algorithm, and different sets of data applied to the same machine learning algorithm may result in different outputs. In order to “use AI for good”, we shall be very mindful of the training data we use to develop the machine learning models. For example, attention shall be given to ensuring that training data are diverse and neutral – free from gender bias or ethnic connotations. Mimicking human decisions previously taken may also not be ideal, as this can perpetuate historical or social inequalities. Many studies have been carried out in this respect, shedding light on the risks of biased decisions and on the discriminatory outcomes of AI systems trained on biased data; as this critique is not new, this paper will not delve into them. Relevant to the purposes of this paper, however, is the fact that, as of today, without a clear indication or rule on how to ensure that training data are diverse, neutral and unbiased, it is unlikely that the application of AI reasoning would not perpetuate the conscious or unconscious biases that pervade our societies. Such perpetuation would thus result in mistrust of AI technologies and, even worse, in the production of distorted results.
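To make this risk concrete, the following minimal sketch trains a standard classifier on synthetic, deliberately biased “historical hiring” data. All variable names and numbers are assumptions chosen only for illustration, and the example is ours rather than part of any system or study cited in this chapter.

```python
# Minimal, self-contained sketch with synthetic data (illustrative only):
# a classifier trained on historically biased hiring decisions reproduces
# that bias in its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)    # 0 = man, 1 = woman (synthetic attribute)
skill = rng.normal(0.0, 1.0, n)   # skill is distributed identically across groups

# Historical decisions: equally skilled women were hired less often (the bias).
hired = (skill - 1.5 * gender + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# The trained model now predicts very different outcomes for equally skilled people.
for g, label in [(0, "men"), (1, "women")]:
    p = model.predict_proba([[g, 0.0]])[0, 1]   # candidate with average skill
    print(f"Predicted hiring probability, average-skill {label}: {p:.2f}")
```

On this synthetic data the model assigns a markedly lower hiring probability to equally skilled women, not because of anything in the learning algorithm itself, but because the decisions it learned from were biased.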
The clash is, therefore, evident: we cannot hope to exploit
AI to solve gender inequalities and injustices until we are able to ensure that AI systems stop propagating, or are able to overcome, the prejudices and biases of our society.

Another extremely relevant aspect that shall be considered before entrusting AI with the task of reaching the United Nations’ SDGs is that of the sustainability of AI in itself. Some commentators, indeed, have underlined that AI systems, together with all other aspects of our societies, shall be developed and trained in a way that is sustainable for our economies and environment. Indeed, the development and training of AI is highly dependent on great computing power, which results in high energy demands. The issue also becomes an ethical one, if we turn to consider the relevant amounts of investment that are necessary for the creation of AI systems, as well as the time and resources needed to perfect them prior to their practical use. The critique levelled at AI, therefore, is that it absorbs huge economic resources that could instead finance direct solutions to the completion of the SDGs. In the following paragraphs, we will consider the AI Regulation Proposal in further depth in order to identify the aspects of the proposal that are linked to, or are likely to facilitate, the achievement of the SDGs. Later on, the potential pitfalls of the approach put forth in the AI Regulation Proposal will be discussed.
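To give a sense of the orders of magnitude behind this critique, the following rough, back-of-envelope sketch estimates the energy and carbon footprint of training a single large AI model. Every figure in it is an assumption chosen for illustration, not a measurement of any real system.

```python
# Back-of-envelope estimate of the energy and carbon footprint of training one
# large AI model. Every figure below is an illustrative assumption made for
# this example, not a measurement of any real system.
accelerators = 512              # assumed number of GPUs/accelerators used
power_per_device_kw = 0.4       # assumed average draw per device, in kW
training_hours = 24 * 30        # assumed one month of continuous training
pue = 1.5                       # assumed data-centre Power Usage Effectiveness
grid_kg_co2_per_kwh = 0.3       # assumed carbon intensity of the local grid

energy_kwh = accelerators * power_per_device_kw * training_hours * pue
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"Estimated training energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions:       {co2_tonnes:,.1f} tonnes of CO2")
```

Even under these moderate assumptions, the estimate runs to hundreds of thousands of kilowatt-hours, which is why some commentators treat the sustainability of AI itself as an ethical question.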
3 Characteristics of the AI Regulation Proposal That May Foster the Achievement of SDGs As mentioned, the European Commission AI Regulation Proposal is clearly set to foster the achievement of SDGs by supporting the development of AI in ways that ensure AI is applied for good. How does the European Commission envisage to do that? First of all, by choosing among the legislative instruments that it can dispose of, to propose the adoption of a regulation. Regulations are legislative acts that, once approved, become directly applicable in the legislative frameworks of all European Union member states, without any need for the member states to adopt additional adequacy measures and thus also without the related time to implement them. This can be done in those areas, and the single market is one of them, where the European Union has been given legislative powers by the same member states. The choice of a regulation, therefore, supports the need to create a uniform framework, a legal standard of definitions and scope that have a common ground at EU level. The adoption of one single regulation helps avoid fragmentation and improves the chances of the deployment of a unique single market for AI systems and applications, also avoiding trans-border jurisdictional issues among the European member states. Another tool that the European Commission uses to ensure wide application of its proposal, which is again related to the law drafting technique and already used in other fundamental pieces of legislation, is that of extending the application of the AI Regulation Proposal beyond the territorial borders of the European Union. This is
done by including specific provisions that define the circumstances, in which the rights and freedoms of EU users shall be protected, by applying the European standards provided by the AI Regulation Proposal, regardless of the fact that said users or companies are specifically citizen of or based within the European Union borders. Indeed, the AI Regulation Proposal is set to apply to (i) providers placing AI systems or AI services in the EU, irrespective of the place where the providers are based; (ii) users of AI systems or services that are in the EU, regardless of whether they are citizens of the EU; and (iii) providers and users of AI located outside of the EU, where the output produced by the AI system is used in the EU. Simply put, if an AI system goes anywhere near the EU, it will be subject to the AI Regulation Proposal. And the practical implications for companies are clearly set. With this approach, in fact, the European Commission wishes to create a set of rules that is so widely used that it rapidly becomes the legislative standard for that technology – a pervasive AI regulation can result in the EU being the entity that will set the rules, definitions and standards for this new technology – resulting in a competitive advantage for EU companies over those of the rest of the world. Indeed, if the AI systems are developed in the EU, they will benefit from the fact that they comply with the legislative standard set for the technology since their design phase, while developers in other regions of the world may need to adapt their products, which may result in limitations of applications or additional investments. Moving into the actual text of the AI Regulation Proposal, an aspect that may foster the achievement of the SDGs is the prohibition of all AI system and services that create an unacceptable risk for the rights and freedoms of the potential individuals involved. According to the AI Regulation Proposal, in fact, the prohibited AI uses listed in Title II include those that contravene EU values, for example, because they violate fundamental human rights. In excluding the use – but, at a closer look, also the development, research and study – of AI systems that pose an unacceptable risk for individuals, the European Union is actually fostering a number of SDGs, such as goal n. 3, good health and well-being; goal n. 4, quality education; goal n. 5, gender equality; goal n. 10, reduce inequalities; and goal n. 16, peace, justice and strong institutions. To give an example of the impact of this aspect on the achievement of the goals now mentioned, consider that the AI Regulation proposal bans applications of AI that could result in social scoring, i.e. a system according to which people’s access to jobs or opportunities depend on a number associated to their previous actions and behaviour – something that clearly contravenes the values of equality and equal opportunities of our society that are the basis for the achievement of the mentioned SDGs. Other aspects relevant to the facilitation of the achievement of the SDGs include the auditability of AI reasoning, the equity of potential outcomes, human-centricity and the protection of human rights. These aspects are included in the AI Regulation Proposal in a number of articles that set the requirements for high-risk AI systems. 
In particular, the requirement to train, validate and test AI systems (Article 10 of the AI Regulation Proposal) is relevant in order to achieve equity of outcomes and minimise biases or unfair results, thus fostering the fulfilment of goal n. 5, gender equality – notably with respect to the targets referred to ending all forms of
discrimination against all women and girls as well as ensuring equal opportunities for leadership at all levels of decision-making in political, economic and public life. Indeed, although AI software applied to the recruitment process has performed poorly in the past, reportedly discriminating against women candidates and favouring men, AI software applied to the language used in job postings has shown positive results in reducing gendered language in favour of gender-neutral wording, fostering inclusion and improving diversity of the workforce.15 Goal n. 10, reduced inequalities, is also impacted by this requirement of the AI Regulation Proposal, in that it offers to help empower and promote the social, economic and political inclusion of all and to ensure equal opportunities and reduce inequalities of outcomes. An example of application is the use of AI systems to detect fake news: as fake news is often used to inflate stereotypes and discriminate, the use of AI to detect it and subject it to a further revision prior to publication on social media or other web spaces could help avoid discrimination against certain groups or ethnicities and improve overall equality.

In addition, for high-risk AI systems, developers are required to ensure that the AI systems are designed and developed in a way that makes it possible to collect automatic recordings of events (logs), so that the AI systems’ functioning is traced and can be looked into or analysed a posteriori (Article 12 of the AI Regulation Proposal). This helps ensure that the AI system in question is auditable, and that the reasoning behind inferences or logic conclusions is explicable. Explicability of results is paramount to goal n. 16, peace, justice and strong institutions, because it promotes the rule of law and helps develop transparent institutions.

Another characteristic that is set to ensure auditability as well as protection of users’ rights is that of ensuring transparency of AI systems, which translates into explanations to be provided to users on what the AI system is able to do and how it will do it. Notably, the provision of information to users regarding the functioning and operation of the AI system provided by Article 13 of the AI Regulation Proposal follows a path of transparency obligations that counts many examples among EU legislative acts, such as the EU Consumer Directive16 and the General Data Protection Regulation.17 Transparency, therefore, is already a standard requirement in EU law, and one that is necessary in all AI applications. Indeed, transparency on the capabilities and objectives of an AI system, as well as on the information that was used to train the AI model, is necessary to ensure auditability of the AI system. In addition, transparency on training data is paramount to ensure that people impacted by AI decisions or reasoning have the possibility to challenge said decisions or results. Transparency, therefore, is necessary in all technological or AI applications that do not directly allow users to understand their functioning, including also those directed at fostering the achievement of the SDGs.

[Footnotes: 15 Artificial Intelligence and Gender Equality, United Nations Educational, Scientific and Cultural Organization Key findings of UNESCO’s Global Dialogue, available at: https://en.unesco.org/system/files/artificial_intelligence_and_gender_equality.pdf, last accessed in October 2021. 16 Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules, available at: https://eur-lex.europa.eu/eli/dir/2019/2161/oj, last accessed in October 2021. 17 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679, last accessed in October 2021.]

Finally, Article 14 of the AI Regulation Proposal includes the requirement for high-risk AI systems to be equipped with human-machine interface tools, which allow human oversight of the AI system in use. The provision for human oversight responds to the need to ensure human centricity of this new technology – a sort of emergency brake for a technology that may be evolving too fast to ensure it is always developing within the boundaries of EU laws. The provision of human oversight, nonetheless, represents an advantage for the achievement of the SDGs, and notably goal n. 16, peace, justice and strong institutions, because it helps protect fundamental rights and freedoms and promote and enforce non-discriminatory laws and policies.

Following the analysis of the aspects of the AI Regulation Proposal that can foster the achievement of the SDGs and push our societies in the same direction as that traced by the UN 2030 Agenda, we now move to consider, in the coming paragraph, the aspects for which the AI Regulation Proposal may fall short, and that may represent obstacles to the achievement of the UN goals.
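Before turning to those aspects, the logging obligation of Article 12 discussed above can be illustrated with a short, schematic sketch. The record structure, field names and the hashing of inputs are assumptions about how a developer might satisfy the requirement; the AI Regulation Proposal itself does not prescribe any particular format.

```python
# Schematic sketch of the kind of automatic event recording (logging) that
# Article 12 asks of high-risk AI systems. The record structure and field names
# are assumptions about one possible implementation, not a prescribed format.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(logfile, model_version, input_record, output, confidence):
    """Append one timestamped record of an AI decision for later audit."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # store a hash of the input rather than the raw (possibly personal) data
        "input_hash": hashlib.sha256(
            json.dumps(input_record, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "confidence": confidence,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Example: record a hypothetical credit-scoring decision.
log_decision("decisions.log", "credit-model-1.3",
             {"applicant_id": 42, "income": 35000}, "approved", 0.87)
```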
4 Aspects of the AI Regulation Proposal That May Hinder the Completion of the SDGs Following the analysis of the provisions of the AI Regulation Proposal that may contribute to the achievement of the UN goals, we now turn our attention to the aspects of the same proposal that may instead represent an obstacle to the achievement of the same goals. The first element that we will underline in this regard is an element that was described in the previous paragraph as an advantage for reaching the SDGs. Human centricity, in fact, is a requirement that can have a double connotation and has been identified also as a limit to innovation and an obstacle to the further development of AI. Indeed, human centricity is linked to the possibility for a human person to look into the machine decision-making process, in order to review it and possibly correct it or reverse any wrongdoing. When we consider this, it is clear that human centricity of artificial intelligence may be on one side a paradox, given that it is required to have a human being understand what humans were not able to do and accomplish in the first place (building instead an algorithm for them). On the other side, and more importantly, human centricity may be seen in contrast to the centricity and importance of other living beings, such as wild plants and animals; as such, human
centricity of AI may be in opposition to the goals n. 14 and 15 of the 2030 Agenda. In fact, 75% of the surface of our planet is covered by oceans, which represent almost all the living space of our planet, in terms of volume. Marine and costal biodiversity is the source of living for over three billion people, and the marine system is responsible for the absorption of 40% of the carbon dioxide we produce, while plants produce the oxygen that we breathe and represent the food that we eat, being the foundation to life on earth.18 Protecting life below water and life on land, therefore, requires putting other living beings’ interest at the centre of our research and efforts; human centricity of AI systems therefore is not only in direct contrast to that but also capable of jeopardising the advancements on the completion of these goals. Another issue that must be also considered when talking about AI and the requirements put forth by the AI Regulation Proposal is that auditability of the system, with precise logs and descriptions of the reasoning that led from an information to the inference of the following fact, is highly unlikely. This is due to the fact that auditable AI means explicable and reverse-engineerable AI, which in turns is in plain contrast with the protection of proprietary rights over the same AI system. The intellectual property rights behind an AI system are expected to be among the most relevant assets of AI – having to install a black box that explains all reasoning performed by AI may represent a huge risk for that AI, when audited, of explaining the connections between its decisions and revealing elements of the technology or the machine learning algorithm that would need, from a business perspective, a sound protection. Imposing an explainable AI, therefore, represents a risk for stakeholders of losing their investment and assets, therefore possibly being detrimental to the development of the full potential of AI technology. In this paragraph, instead, we will move to consider some aspects that the AI Regulation Proposal failed to address and which may represent, on one hand, a missed opportunity for the European Commission and, on the other hand, an obstacle to the achievement of the SDGs. Among the missed topics, there is the one related to lack of reference to energy efficiency or carbon emission budgets: a limitation on the emissions that can be put into the atmosphere in the whole process of projecting, designing and realising an AI system is indeed advisable at best, taking into consideration the climate crisis that we are living in. As mentioned, in fact, the objectives of the UN 2030 Agenda are quite ambitious; however, the timeframe to reach them is shortening: if we wish to find help in the use of innovative technologies to achieve our collective goals, we shall at least responsibly consider only those technologies that are sustainable in themselves. What is here intended is that by not fixing a carbon budget to AI solutions, the EU is generally supporting all AI systems and applications equally, without differentiating among those sustainably developed and those that, instead, may have benefitted of larger budgets and emissions. However, and again sustainability arguments turn quickly into ethics ones, our approach to AI should be responsible: considering, on one side, the relevant time
Further information is available at: https://in.one.un.org/page/sustainable-development-goals/ sdg-14/, last accessed in October 2021. 18
that is necessary from design to production phase for AI and the tight timeframe that separates us to 2030 and, on the other side, the huge investments that AI requires to become applicable and usable in practice, isn’t it necessary to support, also from a financial point of view, only those solutions that would not worsen the environmental situation that we are living in? This indeed would also be linked to the achievement of the goal n. 13, the goal to take urgent action to combat climate change and its impacts, something that the proposal clearly missed to address. The reasoning behind the need to develop only sustainable AI is a mandatory one, if we also consider that we are now seeing great investments in this new technology. The technology and the infrastructure that it is going to be built is new and will be used for many years; therefore, it is of paramount importance to avoid, in this phase, unnecessary lock-ins in high-energy-consuming asset, or otherwise we will be stuck with nonefficient devices and technologies for years to come, something that could imply that AI have a negative impact on the achievement of goal n. 13. Lastly, AI is seen, by some commentators, as one of the biggest threats facing humanity, a sort of tech-gone-wrong scenario that some filmmakers have already depicted.19 The comprehensible doubts accompanying the deployment of any new technology, indeed, move from sentiment of distrust over something that appears as non-controllable, to fear of jobs loss, due to the automation of human activities or resorting to machines to perform non-basic human tasks such as decision-making. Privacy concerns may also add on the mentioned worries, and while the AI Regulation Proposal seems to address the trust issues by resorting to transparency and audit requirements as well as the ban on unacceptable AI, the risk of job loss remains a pressing issue that can represent a downside to the development of AI.20 Although the focus of the AI Regulation Proposal is that of regulating the development and deployment of AI, it is mandatory for governments to consider also the risks that the development of AI may have on the job market. Indeed, automation of jobs impacts different sectors in different ways, and the subsequent displacement that follows job loss may have a greater impact in lower-income communities, where there may be a minor specialization of the workforce – which in turn may increase poverty and social inequalities of the same communities and areas – in clear contrast to the UN goals n. 1 and 10.
5 Conclusions

This paper described the characteristics of the AI Regulation Proposal and analysed the advantages and disadvantages that can come from its implementation to the achievement of the UN SDGs.

[Footnotes: 19 See further at: From rogue AI to nuclear war, the 10 biggest threats facing civilisation | WIRED UK, last accessed in October 2021. 20 According to a 2019 McKinsey Global Institute report, available here: the-future-of-work-in-america-full-report.pdf (wordpress.com), 39 million full-time jobs could be automated by 2030.]
Although it is necessary for regulators to intervene with legislative frameworks and boundaries, when applying the law to technology, it is of paramount importance that any regulation does not block innovation. Indeed, in this respect, the AI Regulation Proposal manages to enter the AI field at a stage where this technology is still in development, and, in this context, the proposal correctly limits its impact to defining the ground rules to help the AI technology to thrive while at the same time safeguarding the rights and freedoms of the people involved. This approach allows the market to have its course while at the same time drawing some lines when foreseeable negative impacts are anticipated, in order to avoid undesirable consequences or harmful scenarios from happening. In addition, the method identified in the AI Regulation Proposal, to rely on a risk-based classification in order to determine the different set of rules that may apply to a certain AI system, allows the proposed text to survive technological changes and developments and become future proof. The real challenge of the AI Regulation Proposal is that of aspiring to become the worldwide legal standard for AI technologies. Indeed, while considering the positive impact that the EU legal standard may have on individuals, in that it strongly protects rights and freedoms of people when they interact with AI, it is also necessary to consider its limits: On one side, it is possible that other, more permissive, legislative frameworks may facilitate the development of more pervasive AI technologies in other parts of the world. Once a technology is in use, it is very difficult that regulatory efforts manage to limit it or confine its use. On the other side, as this paper depicted, the AI Regulation Proposal, in its first draft, presents pitfalls that hinder the development of our society in the only sustainable way currently possible. However, it is unlikely for a worldwide legal framework to be contrary to public policy; therefore, the objective of the upcoming legislative process that will take the current proposal to its final version is to address most, if not all, of them.
References AI for Good, Global Summit Report. 2017. Available at: https://www.itu.int/en/ITU-T/AI/ Documents/Report/AI_for_Good_Global_Summit_Report_2017.pdf. Last accessed in October 2021. Aldwairi, M., and A. Alwahedi. 2018. Detecting Fake News in Social Media Networks. Procedia Computer Science 141: 215–222. Also available at: https://www.sciencedirect.com/science/ article/pii/S1877050918318210. Last accessed in October 2021. Artificial Intelligence and Gender Equality, United Nations Educational, Scientific and Cultural Organization Key findings of UNESCO’s Global Dialogue. Available at: https://en.unesco.org/ system/files/artificial_intelligence_and_gender_equality.pdf. Last accessed in October 2021. Chui, M., R. Chung, and A. Van Heteren. Using AI to Help Achieve Sustainable Development Goals. United Nations Development Programme. Available at: https://www.undp.org/blog/ using-ai-help-achieve-sustainable-development-goals. Last accessed in October 2021. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – Fostering a European
approach to Artificial Intelligence. Available at: https://eur-lex.europa.eu/legal-content/EN/ ALL/?uri=COM:2021:205:FIN. Last accessed in October 2021. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions – Artificial Intelligence for Europe. Available at: https://eur-lex.europa.eu/legal-content/EN/ TXT/?uri=COM%3A2018%3A237%3AFIN. Last accessed in October 2021. Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/ EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules. Available at: https://eur-lex.europa.eu/eli/ dir/2019/2161/oj. Last accessed in October 2021. Ethics Guidelines for Trustworthy AI, Artificial Intelligence High Level Expert Group. 2019. Available at: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai. Last accessed in October 2021. Manyika, J., J. Silberg, and B. Presten. What Do We Do About the Biases in AI? Harvard Business Review. Available at: https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Available at: EUR-Lex-52021PC0206-EN-EUR-Lex(europa.eu). Last accessed in October 2021. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX% 3A32016R0679. Last accessed in October 2021. The EU’s Approach to Artificial Intelligence, The International Institute for Strategic Studies, Volume 27 Comment 24, September 2021. Also available at: the-eus-approach-to-artificial- intelligence.pdf (iiss.org). Last accessed in October 2021. The Future of Work in America, McKinsey Global Institute, Report. 2019. Available here the- future-of-work-in-america-full-report.pdf (wordpress.com). The State of Food and Agriculture, Food and Agriculture Organization of the United Nations, Report. 2019. Available at: http://www.fao.org/state-of-food-agriculture/2019/en/. Last accessed in October 2021. Van Wynsberghe, A. 2021. Sustainable AI: AI for Sustainability and the Sustainability of AI. AI and Ethics 1: 213–218. Available at: https://doi.org/10.1007/s43681-021-00043-6. Last accessed in October 2021. White Paper of the European Commission On Artificial Intelligence – A European Approach to Excellence and Trust. Available at: https://ec.europa.eu/info/sites/default/files/commission- white-paper-artificial-intelligence-feb2020_en.pdf. Last accessed in October 2021.
AI as a SusTech Solution: Enabling AI and Other 4IR Technologies to Drive Sustainable Development Through Value Chains Matthew Stephenson, Iza Lejarraga, Kira Matus, Yacob Mulugetta, Masaru Yarime, and James Zhan
The views expressed are those of the authors and do not necessarily reflect the official policy of their institutions. The authors would like to thank several anonymous peer reviewers as well as Sean Doherty, Kimberley Botwright, and Jimena Sotelo, all from the World Economic Forum, for their helpful comments. Lead and corresponding author: Matthew Stephenson.

M. Stephenson (*), Head, Investment Policy and Practice, Geneva, Switzerland, e-mail: [email protected]; I. Lejarraga, Economic Counsellor, Development Centre, Organisation for Economic Co-operation and Development (OECD), Paris, France; K. Matus, The Hong Kong University of Science and Technology (HKUST), New Territories, Hong Kong, e-mail: [email protected]; Y. Mulugetta, Energy and Development Policy, University College London (UCL), London, UK, e-mail: [email protected]; M. Yarime, The Hong Kong University of Science and Technology (HKUST), New Territories, Hong Kong, e-mail: [email protected]; J. Zhan, Investment and Enterprise, United Nations Conference on Trade and Development (UNCTAD), Geneva, Switzerland, e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_11

Abstract Artificial intelligence (AI), along with other new technologies of the Fourth Industrial Revolution (4IR), can help drive sustainable development through what can be called ‘SusTech’ solutions. But how can these be supported by
governments, adopted by firms (especially in managing value chains), and encouraged by users? This chapter proposes a three-part solution: (1) The G20 should create a Sustainable Technology Board (modelled after the Financial Stability Board) as a mechanism for coordination, cooperation, and scaling of SusTech solutions; (2) governments can consider adopting policy and regulatory measures to help firms integrate SusTech solutions into value chains, including drawing from 11 concrete, actionable options; and (3) examples of how firms have already adopted SusTech solutions illustrate opportunity and inspire replication. Keywords AI · New technologies · Fourth Industrial Revolution · Sustainable Development Goals (SDGs) · Sustainable Technology Board · SusTech · SusTech solutions
1 Challenge There is wide consensus that scaling technology can help achieve sustainable development (Herweijer et al. 2020, p. 7; Diaz Anadon et al. 2016, p. 1; Habanik et al. 2019, p. 48; World Bank Group 2016, pp. 303–20).1 One of the main mechanisms is through greener, safer, and more inclusive value chains enabled by technology that can increase efficiency, transparency, resilience, and responsibility (Sotelo and Fan 2020, p. 13) (Fig. 1). This was already important before COVID-19, but the urgency has grown: value chains need to become more resilient to future pandemics; societies need to address inequality that has been exacerbated; and economies need to raise productivity to generate growth that can address record-high levels of debt. Adopting new technologies in ways that lead to sustainable development can help achieve these goals, what can be called ‘SusTech solutions’. SusTech is defined as the use of new technologies that help achieve Sustainable Development Goals (SDGs), either directly or indirectly. Directly would mean the technology is adopted to achieve a certain goal (e.g. lowering carbon use), while indirectly would mean the technology is adopted to achieve business efficiency but also provides additional benefits (e.g. lowering carbon use). The word ‘solutions’ is added to denote that these new technologies help governments and firms achieve their objectives. SusTech solutions are focused on technologies of the Fourth Industrial Revolution, given the new opportunities that these afford for sustainable development. Survey data has found that five specific technologies may have the most
Fourth Industrial Revolution technologies could have a high impact across 10 of the Sustainable Development Goals (SDGs), and 70% of the 169 targets underpinning the SDGs could be enabled by these technologies. 1
transformational impact: artificial intelligence (AI), blockchain, Internet of things, automation, and virtual reality.2 To provide one example, AI and automation can be used to help grow the circular economy. Smart recycling robots should soon be able to efficiently dismantle, analyse, and categorize electronic waste – ‘de-manufacturing’ and ‘re-manufacturing’ electronic objects and components – and in so doing both tap into an estimated US$ 62 billion electronic waste industry and help safeguard the planet (Enel 2020). Additional examples of existing SusTech solutions are presented in Part III of this chapter. So how can SusTech solutions be enabled and integrated into value chains in practice? This requires tackling three interconnected challenges: (1) Governance failures as new technologies are leaping forward in terms of their economic and social importance, but policy and regulatory frameworks are not keeping up and may not be fit for purpose. The speed and direction of technological change, as well as expanding knowledge gap between public and private sectors, challenge the use of traditional regulatory approaches. Governance failures can take place both at the domestic level and also at the international level, given the interconnectedness of economic and technological systems. Even worse, technology could actually undermine people and planet if negative impacts (e.g. on privacy, competition, climate, etc.) are not addressed. New technologies are, in and of themselves, neutral and require accompanying frameworks to avoid distortions and help orient them in support of societal goals.3 (2) Market and coordination failure is taking place at the global level because advances in technology that can drive sustainable development (i.e. SusTech) represent a form of public good that suffers from a collective-action problem as well as a complex system that suffers from coordination challenges; together, this is resulting in underinvestment and undersupply and calls for mechanisms to address collective-action and cooperation challenges. (3) A growing desire to reconfigure value chains through SusTech solutions to increase resilience and sustainability, but lack of widely known practical, actionable steps to do so. Both public and private actors wish to seize on reform appetite following COVID-19 to move from global value chains (GVCs) to Sustainable GVCs (SGVCs), and SusTech provides one of the keys to do so (Schmidt et al. 2019). This chapter will be structured in three parts to provide solutions to these three challenges. Sotelo and Fan (2020) and WBG (2016) identify the same list, with the only difference being virtual reality and automation, and so the chapter will address both. 3 Acemoglu et al. (2020) show that the tax system has favoured automation over labour as labour is heavily taxed while capital is not, creating incentives for firms to over-invest in automation as a labour-saving technology, undermining societal goals. 2
2 Solution 2.1 Create a Sustainable Technology Board The G20 should create a Sustainable Technology Board (STB) as a mechanism for coordination, cooperation, and scaling of SusTech solutions. An STB is called for given the transformative potential of new technologies and to address the concern, confusion, and competition that is increasingly underlying their integration. There is an opportunity to pre-empt escalating techno-nationalism – and address societal concerns over techno-equity and integrity – through a mechanism that convenes key actors, provides analysis and options, and promotes cooperation over competition. As such, the platform could be mandated to help shape technology in a way that advances SDG-oriented value chains through active policies that guide technology in these directions. A first step in this direction is the decision by the G7 to convene a ‘Future Tech Forum’, together with the OECD, in September 2021, and on which the STB could build (G7 Communiqué 2021)4 (Fig. 1). Concretely, the STB would be structured to deliver three core functions.
Fig. 1 SusTech solutions snapshot: the Sustainable Technology Board connects the technologies for SusTech (AI, blockchain, IoT, automation and drones, augmented/VR), the barriers to SusTech adoption (data, infrastructure, skills, investment, regulation, coordination), and the solutions to those barriers (data trusts, homomorphic encryption, typology of personal data, rightskilling, TechFin, investment incentives, non-equity modes of investment, performance-based regulations, sustainability impact assessments, equivalency agreements, living labs/regulatory sandboxes)
It is also worth noting that the USA and EU at the same time established a Trade and Technology Council, with the aim of addressing bilaterally similar issues, demonstrating the growing importance of collaboration on these issues. See European Commission, ‘EU-US launch Trade and Technology Council to lead values-based global digital transformation’, 15 June 2021, https://ec. europa.eu/commission/presscorner/detail/en/IP_21_2990. 4
2.1.1 Provide a Platform for Cooperation A platform where policymakers, firms, and experts, and civil society come together to identify needs, share both concerns and opportunities, and transparently chart out ways to integrate SusTech solutions in both regulatory frameworks and corporate strategies. Such a platform would provide a space for cooperation between national technology bodies. It would also provide a space for those at the frontier of technology innovation – especially in the private sector – to flag risks and opportunities so they can be addressed or seized.5 It would also create a mechanism for outreach, engagement, and inclusion of less-developed economies and smaller firms to help develop and adopt SusTech solutions. There is a growing risk of splintering into a two-speed world – the technology- rich and the technology-poor – and so actively collaborating on sharing SusTech solutions would be important to use technology as a societal integrator rather than allowing it to develop into a societal cleaver. One specific outreach mechanism could be through a Pioneer Program, whereby technology authorities in different jurisdictions would sign up to trial SusTech policies and measures, backed by the technical support of STB partners to help with capacity development.6 2.1.2 Generate Analysis and Options To inform platform-based cooperation, the STB would generate analysis and provide options. The analysis could include developments in new technologies, risks and opportunities that these generate, and good practices for how authorities and firms have addressed risks and seized opportunities. The emphasis would be on practical policy options and measures that could be adopted. Analysis could also track progress on goals, and whether policies and measures were effective in achieving their intended aims regarding technology adoption and sustainable development. Together, such analysis and options would support dissemination, replication, and scaling of SusTech solutions, both scaling up and scaling out. This process can be understood as taking place across three levels: an institutional level (creating an STB), an evaluative level (‘What technologies work in practice and how?’), and an operational level (‘To support the roll-out technologies that work, let’s adopt these policies and measures…’). Options for policies and measures will be outlined in Part II (Fig 2).
For instance, the sustainable value chains have been developed for key commodities through cooperation and commitments between key actors (e.g. in Sustainable Palm Oil), and a similar approach would be taken for technologies rather than commodities. 6 For instance, a Pioneer Program has been adopted by the G20 Global Smart Cities Alliance to trial smart city policies. See https://globalsmartcitiesalliance.org/?page_id=714. 5
Fig. 2 Three levels: institutional (the Sustainable Technology Board), evaluative (artificial intelligence, blockchain, Internet of Things, automation and drones, augmented/virtual reality), and operational (data trusts; homomorphic encryption; typology of personal data; rightskilling; TechFin; investment incentives; non-equity modes of investment; performance-based regulation; sustainability impact assessments; equivalency agreements; living labs/regulatory sandboxes)
2.1.3 Develop Standards and Guidelines In addition, one of the main goals of the STB would be to develop standards and guidelines on new technologies to facilitate their sustainable adoption. Standards and guidelines would apply to both business and national authorities. They would thus facilitate cooperation between economies, allowing for interoperability, alignment, and well-function systems. They would also at once create larger markets through interoperability as well as provide regulatory clarity, predictability, and stability. Conversely, the lack of standards and guidelines creates systemic risk in terms of governance, corporate returns, and consumer protection. Standards can come in various types, including in their scope and detail. For instance, standards could apply across technologies or be specific to a certain technology.7 In addition, standard setting for SusTech could begin with general principles, evolve into more detailed practices, and finally generate specific guidelines. Starting with ‘soft’ or voluntary standards could overcome the challenge of competition between different economic systems or visions. A valid concern is how economies with very different approaches to technology governance can fruitfully cooperate through an STB. The answer is to first develop soft standards that are adopted on a voluntary basis. Perhaps after a critical mass of economies adopt a soft standard – because it proves useful in practice – it can be viewed as a ‘firm’ standard, one that is widely accepted but still not a binding ‘hard’ standard. In practice ‘firm’ standards may often be sufficient for planning and collaboration between
See, for instance, World Economic Forum, ‘Internet of Things Guidelines for Sustainability’. January 2018, http://www3.weforum.org/docs/IoTGuidelinesforSustainability.pdf. 7
economies and firms. Where relevant, the STB can build on or adopt ISO work on sustainability standards.8 2.1.4 Precedent and Practice There is strong G20 precedent for creating an STB. The STB could be very similar to – and modelled after – the Financial Stability Board (FSB)9, which was established following the G20 summit in London in 2009. A more recent example includes the G20 Global Smart Cities Alliance on Technology Governance, which was established following the G20 Summit in Osaka in 2019.10 The G20 Global Smart Cities Alliance, which provides a platform for cooperation on smart cities, recently published a Global Policy Roadmap that outlines good practices and proposes certain principles to integrate technology into ‘ethical, smart cities’.11 Both of these examples provide compelling precedents to establish an STB. How do the FSB and Smart Cities Alliance function, and how could this be replicated? FSB policy options and standards are not required to be adopted by members, but rather encouraged through dialogue, discussion, and reports to the G20. In other words, national authorities retain policy autonomy. The Smart Cities Alliance is also voluntary, with the World Economic Forum acting as a secretariat. The STB could operate similarly, developing voluntary standards and principles and being housed, for instance, in the World Economic Forum’s Centre for the Fourth Industrial Revolution, whose mission is to help maximize the benefits of technology while avoiding potential risks.12 At the same time, G7 leaders have recently supported mandatory disclosure of climate-related financial information based on the FSB’s Task Force on Climate-Related Financial Disclosures (TCFD) framework. A similar mechanism of information disclosure could also be considered for issues related to SusTech.
See ISO, ‘Sustainability standards from ISO’, https://iso26000.info/sustainability-standards-from-iso/. The FSB is organized around three standing committees, namely, a Standing Committee on Supervisory and Regulatory Cooperation, a Standing Committee on Assessment of Vulnerabilities, and a Standing Committee on Standards Implementation. These align with the proposed functions of an STB, which could be organized similarly. See Financial Stability Board, ‘About the FSB’, https://www.fsb.org/about/#mandate. 10 See G20 Global Smart Cities Alliance, ‘About the Alliance’, https://globalsmartcitiesalliance. org/?page_id=107. The World Economic Forum serves as the secretariat of the Alliance. 11 These include principles on (a) equity, inclusivity, and social impact, (b) openness and interoperability, (c) security and resilience, (d) privacy and transparency, and (e) operational and financial sustainability, which could inspire the development of STB principles. See https://globalsmartcitiesalliance.org/?page_id=90. 12 See Centre for the Fourth Industrial Revolution, https://www.weforum.org/centrefor-the-fourth-industrial-revolution. 8
2.2 Barriers and Solutions to SusTech Adoption There are six main types of barriers to wider adoption of SusTech. These include (1) data, (2) infrastructure, (3) skills, (4) finance and investment, (5) regulation, and (6) coordination (Fig. 3). Data is the lifeblood of technology systems. Just as humans need blood to course through their bodies to function, technologies need data to flow both within and between systems to function. This, in turn, requires sufficient volume, trust, and interoperability. While data policy is increasingly tense and disputed – with differing visions between G20 economies – this paper proposes three data ‘landing zones’ to break impasse through finding common ground. If data is the lifeblood, infrastructure is the highway. One can take ‘secondary roads’ but it will take longer, you may hit a pothole, and you may never find your destination on account of poor signage. Much the same way, fit-for-purpose infrastructure is needed if firms are to adopt SusTech solutions, as otherwise they will be
Fig. 3 Barriers to SusTech adoption: data, infrastructure, skills, investment, regulation, and coordination
limited to ‘secondary-road technologies’. This includes, inter alia, infrastructure for transportation, communication, connectivity, processing, and storage. Skills, in turn, are the new passport, and right-skilling the new visa. Worryingly, large swathes of society risk being locked out of growing global markets by not having needed digital skills. While policymakers talk about upskilling and reskilling, the goal may actually be right-skilling: matching skills to technologies (Ross et al. 2018). Building infrastructure and growing skills require investment and finance in technologies, or TechFin. Investment needs are enormous, but the capital is there. However, investments are not taking place to scale as outdated regulatory frameworks are creating undue risks. This calls for updating regulatory frameworks to create ‘digital-friendly investment climates’ (Stephenson 2020). It also calls for public-private, innovative financing mechanism targeted at technology, what could be called ‘technology finance’ or TechFin. This may be especially needed for SusTech rollout in developing markets. None of this can effectively take place absent coordination and regulation. These are, one might say, the ‘field’ and ‘rules’ that allow for collaboration to take place both directly on technology and on the other barriers, namely, data, infrastructure, skills, and investment. Absent effective coordination and agreed regulation, one can find oneself in a situation where one team thinks the game is football and the other rugby and picks up the ball to run, or one team is on field A and the other waiting on field B to play the game. While this may sound humorous, at present the lack of clear and agreed regulation – coupled with the lack of mechanisms of coordination – means countries and firms run the risk of doing just that. Policymakers may thus wish to consider targeted policies and measures to address these barriers and enable SusTech solutions. The reason is that these technologies are new, and therefore policy and regulatory frameworks have not kept up. While some technologies may require specific policies and measures (i.e. vertical in nature), this chapter will propose 11 policies and measures that apply across all technologies (i.e. horizontal in nature). In addition to helping lay the groundwork for the adoption of technologies, the added benefit of considering horizontal interventions is that new technologies work in bundles and so require to be enabled together (UNCTAD 2017, p. 176; Sotelo and Fan 2020, p. 4) (Fig. 4). 2.2.1 Establish ‘Data Trusts’ to Share Data Safely and Securely Data trusts are legal structures that serve as a fiduciary (or third-party steward) for data provided by members of the trust and govern the data’s use. Data trusts thus allow organisations to give some control over their data to a new institution so that data can be shared and aggregated (Open Data Institute 2019; WEF and McKinsey 2019). Large-scale aggregation may be essential to accrue full benefits from SusTech, given that data present increasing returns to scale for SusTech solutions. Two-thirds of firms across all industries report they would be willing to share data
Fig. 4 Solutions to SusTech adoption: data trusts; homomorphic encryption for data sharing; typology of personal data; rightskilling; TechFin; investment incentives; non-equity modes of investment; performance-based regulation; sustainability impact assessments; equivalency agreements on standards; living labs and regulatory sandboxes
with the right conditions, and data trusts can help provide those conditions (Zarkadakis 2020). 2.2.2 Use Homomorphic Encryption to Share Data Safely and Securely Homomorphic encryption can also be used to share data, either as a complement or an alternative to data trusts. Homomorphic encryption makes it possible to analyse encrypted data without revealing the data’s content. It thus allows for sharing data safely and securely, whether the data is sensitive or personal or whether it is being shared with a jurisdiction that has a different standard of data protection and privacy. This opens up the increasing returns from data flow and aggregation even
absent agreement on data policy (Zafrir 2020). It also opens up access to the 80% of datasets that are currently private, whether in the hands of governments or firms.13 2.2.3 Adopt a Typology for Data to Facilitate Management and Sharing The challenge to data policy to enable SusTech solutions relates to personal data, not corporate data. Firms can manage corporate data for commercial ends if the data are allowed to flow between jurisdictions, but individuals often do not have the same oversight and control. As a result, governments have sought to protect personal data, but this has also erected barriers to its use. The solution lies in differentiating data by type and adopting differential regulation: firm data (f-data), official personal data (o-data), privy personal data (p-data), and collective personal data (c-data).14 f-data is owned and controlled by firms, who can choose to share it or not (e.g. patterns in sales in different markets). o-data is created and authenticated by the state but controlled by people (e.g. a passport number). c-data is shared within a well-defined group governed by certain rules (e.g. aggregated data from banking cooperatives). p-data is created by people, either directly through first-order p-data (e.g. photos put online) or indirectly through second-order p-data (e.g. location data from smartphones).
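The typology lends itself to a compact illustration in code. The sketch below simply encodes the four categories and adds a toy sharing rule that anticipates the flow principles set out in the following paragraph; the names and the rule are illustrative assumptions, not part of the chapter's policy proposal.

```python
# Small sketch encoding the four data types of Sect. 2.2.3. The enum and the toy
# sharing rule are illustrative assumptions only; the rule anticipates the flow
# principles described in the next paragraph and is not an authoritative policy.
from enum import Enum

class DataType(Enum):
    F_DATA = "firm data"                 # owned and controlled by firms
    O_DATA = "official personal data"    # state-authenticated, person-controlled
    P_DATA = "privy personal data"       # created by people, directly or indirectly
    C_DATA = "collective personal data"  # shared within a rule-governed group

def may_share(data_type: DataType, owner_consent: bool) -> bool:
    """f-data flows freely under corporate agreements; the personal data types
    flow only if the people concerned decide to share them."""
    if data_type is DataType.F_DATA:
        return True
    return owner_consent

# Example: second-order p-data (e.g. smartphone location data) needs consent.
print(may_share(DataType.P_DATA, owner_consent=False))  # False
print(may_share(DataType.F_DATA, owner_consent=False))  # True
```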
f-data should be allowed to flow freely both within and across economies, following corporate agreements between parties (WEF 2020). o-data, c-data, and p-data should be in the hands of people, who can decide whether to share it (and on what terms) or not. o-data would likely not be shared; c-data would be shared to achieve certain objectives; and p-data might be shared depending on compensation (financial or non-financial, such as services). 2.2.4 Ensure Right-Skilling Programmes Match Skills Supply to Skills Demand One of the greatest limiting factors to adopting SusTech solutions is skills. The basket of skills needed to understand, adopt, apply, and develop technologies is quickly changing and risks leaving people or economies behind. The solution lies in
World Wide Web Foundation, Open Data Barometer, September 2018 in Herweijer et al. (2020), footnote 91. 14 Snower et al. (2020) provide the typology for personal data. 13
public-private dialogue and training to match skills supplied to skilled demanded. First, firms need to be asked what skills are needed to enable SusTech; second, government need work with universities and other centres of excellence to help develop those skills15; third, mechanisms need to be created for this process to continue, monitoring and adapting as technologies evolve. 2.2.5 Develop Innovative Technology Finance (TechFin) Instruments Both the development and adoption of SusTech require resources, and so policymakers may wish to support technology finance (TechFin) to help with uptake and rollout. Specific instruments could include blended finance, government-backed incubators and accelerators, patient or concessional capital, funds and prizes, and public procurement (Herweijer et al. 2020, p. 33). 2.2.6 Orient Investment Incentives to Encourage the Uptake of SusTech Solutions Governments can use a number of investment incentives to encourage capital to flow into SusTech solutions. These include both financial and non-financial incentives. Financial incentives could include tax breaks, grants, or subsidies. Non- financial incentives could include faster approvals, lighter or expedited regulatory review, or operational support to encourage the uptake of SusTech solutions. 2.2.7 Incorporate Non-equity Modes or Strategic Partnerships in Domestic and International Policy Frameworks Evidence suggests that non-equity modes of investment (NEMs) or strategic partnerships have been growing in importance and are prevalent in the digital economy and high-tech investments.16 Strategic partnerships are much more flexible than FDI, allowing firms to respond quickly to fast-paced technical changes and evolving market conditions. They are also increasingly deployed as a means to obtain rapid access to knowledge, technology, and intangible assets. Despite their importance, NEMs are not adequately covered in many domestic and international policy frameworks, resulting in lower predictability for contract-based corporate relationships. Policymakers may wish to ensure that regulatory frameworks are updated to support cross-border NEMs that can drive SusTech solutions. Apprenticeships may be particularly useful, whereby part of the training is done by and within companies, in co-operation with universities, and with support or other incentive from government for firms that invest in such training. 16 This finding can be seen through a new dataset covering about 27,000 corporate relationships of 147 multinational enterprises (MNEs) in 13 sectors. See Andrenelli et al. (2019). 15
2.2.8 Use Performance-Based Regulation to Balance Flexibility with Oversight

The challenge in supporting SusTech is to get the balance right between flexibility and oversight, allowing new technologies to bloom while also protecting societies from untoward outcomes. One innovative solution is to apply performance-based regulation (PBR). The idea is to focus on desired, measurable outcomes, rather than prescriptive processes, techniques, or procedures (United States Nuclear Regulatory Commission 2021). In essence, the goal is specified, but not the path to get there, which is left up to firms, allowing regulatory objectives to be met in creative and effective ways. PBR – a close cousin of the increasingly popular risk-based regulation (WBG 2017) – can be flanked by periodic reviews to ensure it is working as desired.

2.2.9 Use Sustainability Impact Assessments

Another way to support SusTech is through the use of Sustainability Impact Assessments (SIAs) by both regulatory agencies and firms. SIAs – again a close cousin of the increasingly popular Regulatory Impact Assessments (RIAs) (WBG n.d.) – can be used to proactively identify potential benefits and drawbacks across technologies. This provides more transparency about impacts and necessary trade-offs, since technologies generally involve decisions about trade-offs across societal objectives. SIAs can therefore help to develop and adopt mitigation measures for any negative impacts, such as displaced workers or anticompetitive practices.

2.2.10 Ensure Equivalency Agreements on Standards and Certifications

As a first step to facilitating cooperation on SusTech adoption – and absent the development of standards by an STB – G20 policymakers may wish to consider equivalency agreements on SusTech-related standards and certifications. This could significantly support SusTech efforts by creating larger markets for investment and operations. Standards and certifications increase predictability and quality, providing confidence to consumers and firms, yet history shows they are often developed in an uncoordinated and inconsistent manner between jurisdictions, forming a significant barrier to cross-border commercial activities.

2.2.11 Build Living Labs and (International) Regulatory Sandboxes

A final way to allow regulatory flexibility and innovation for SusTech solutions to bloom – while also safeguarding societal interests – is the use of living labs and regulatory sandboxes. These create the space for more permissive testing of
SusTech applications, while circumscribing potential risk. Regulatory sandboxes can thus generate learning on SusTech solutions through experimentation. Living labs and regulatory sandboxes need to be flexible enough to accommodate the uncertainties of innovation, yet precise enough to impose society’s values on emerging innovation (Yarime 2020). Moreover, international regulatory sandboxes can be created so that experiments and collaboration can be conducted jointly across jurisdictions, creating more legitimacy for setting shared global standards and guidelines.
2.3 Examples of SusTech in Action

How have firms already integrated SusTech solutions into operations to increase profits, resilience, and sustainability? Real-life examples illustrate how SusTech solutions are already being successfully adopted, providing models for how further SusTech solutions can be rolled out in practice. Such rollout is currently taking place in a piecemeal, disjointed way, motivating the creation of an STB (Part I) and the adoption of enabling policies and measures (Part II) so that more economies and more firms can benefit from SusTech solutions. This section (Part III) will focus on five technologies that recent analysis and surveys have identified as having among the greatest long-term transformational impact: artificial intelligence, blockchain, Internet of things, automation, and virtual reality (Fig. 5).17

2.3.1 Artificial Intelligence

AI could increase global gross domestic product (GDP) by $15.7 trillion by 2030, according to PwC estimates (PwC 2017). Some firms are already starting to seize this potential, but there is scope for huge scale-up. For instance, AI can be used for financial inclusion, especially to provide financial services to those who do not have a formal credit history. Machine-learning algorithms, such as those of Aire, can use mobile phone activity and other digital footprints to evaluate creditworthiness and help provide financial services to new market segments. Similarly, Eastnets’ approach is to use AI to detect financial fraud. Another example is ClearMetal, which has adopted AI for predictive logistics and supply chain management, allowing it to predict transit delays and optimize routes, saving shipping costs, increasing timing accuracy, and avoiding unnecessary backups and backlogs (Nguyen 2020; ClearMetal 2017).
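The general pattern behind such predictive-logistics tools can be illustrated with a minimal sketch: a regression model trained on historical shipments to estimate transit delay. The feature names, toy data, and model choice below are illustrative assumptions, not ClearMetal's actual (proprietary) pipeline.

```python
# Illustrative sketch only: predicting transit delay from simple shipment features.
# All data, feature names, and thresholds are invented for demonstration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy historical shipments: [distance_km, port_congestion_index, carrier_reliability, month]
X = rng.random((500, 4)) * np.array([8000, 10, 1, 12])
# Toy target: delay in days, loosely driven by congestion and low carrier reliability
y = 0.8 * X[:, 1] - 3.0 * X[:, 2] + rng.normal(0, 1, 500)

model = GradientBoostingRegressor().fit(X, y)

# Score a new shipment and flag it if a backlog looks likely
new_shipment = np.array([[5500, 7.2, 0.6, 11]])
predicted_delay = model.predict(new_shipment)[0]
print(f"Predicted delay: {predicted_delay:.1f} days")
if predicted_delay > 3:
    print("Consider re-routing or re-booking to avoid a backlog.")
```

In practice, the value comes less from the model itself than from feeding its predictions back into routing and booking decisions, which is where the cost, timing, and backlog savings described above are realized.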
17 Sotelo and Fan (2020) and WBG (2016) identify the same list, with the only difference being virtual reality and automation, and so the chapter will address both.
Fig. 5 Top SusTech technologies: artificial intelligence, augmented/virtual reality, automation and drones, blockchain, and the Internet of things
AI can also support sustainable energy: Moixa, for instance, uses AI-powered energy management software that enables smart energy storage through batteries and sharing through grids (George 2020).

2.3.2 Blockchain

Blockchain holds perhaps the most transformative potential in terms of technology’s impact on sustainable development. For instance, blockchain technology can help ensure inputs are sourced responsibly (e.g. diamonds through Everledger), sustainably (e.g. tuna through Provenance), and efficiently (e.g. creating mechanisms for peer-to-peer exchange of excess solar energy through Powerledger) (Adams et al. 2018, p. 134; Ahl et al. 2020). Another example is in the Democratic Republic of the Congo (DRC), where Cobalt Blockchain is tracing the provenance of cobalt to allow for identification of any malpractice along the supply chain. Rather than eschewing sourcing from the DRC because of the risk of supporting human rights violations, manufacturers now have the confidence to purchase from the DRC, supporting sustainable development (Herweijer et al. 2020, pp. 21–22).
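The supply-chain traceability these platforms provide rests on a simple property: each custody event is recorded in a way that links it cryptographically to the previous one, so later tampering is detectable. The sketch below illustrates that hash-chaining idea in its most minimal form; the field names, event types, and use of SHA-256 are assumptions for demonstration, not a description of Cobalt Blockchain's or any other vendor's implementation.

```python
# Illustrative sketch only: hash-chained provenance records of the general kind
# underpinning supply-chain traceability. Field names and hashing scheme are
# invented for demonstration.
import hashlib
import json

def record_hash(payload: dict) -> str:
    """Deterministically hash a provenance payload."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_event(chain: list, event: dict) -> list:
    """Append a custody event, linking it to the hash of the previous record."""
    prev = chain[-1]["hash"] if chain else None
    payload = {"event": event, "prev_hash": prev}
    return chain + [{"event": event, "prev_hash": prev, "hash": record_hash(payload)}]

def verify(chain: list) -> bool:
    """Recompute every link; altering any earlier record breaks the chain."""
    prev = None
    for rec in chain:
        payload = {"event": rec["event"], "prev_hash": prev}
        if rec["prev_hash"] != prev or rec["hash"] != record_hash(payload):
            return False
        prev = rec["hash"]
    return True

chain: list = []
chain = append_event(chain, {"step": "mine", "site": "site-A", "kg": 120})
chain = append_event(chain, {"step": "smelter", "batch": "B-77", "kg": 118})
chain = append_event(chain, {"step": "export", "port": "port-X", "kg": 118})
print(verify(chain))  # True; editing any earlier record makes this False
```

Because any downstream buyer can recompute the chain, provenance claims can be checked independently, which is what gives manufacturers the confidence described above.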
2.3.3 Internet of Things

The Internet of things (IoT) is also expected to be a game changer for both growth and sustainability. In terms of growth, estimates suggest it could add $14 trillion in economic value to the global economy by 2030 (WEF 2018, p. 3); in terms of sustainability, IoT can dramatically improve efficiency and outcomes in, inter alia, agriculture, transportation, energy, and smart cities. For instance, BBVA has installed 50,000 sensors in its Madrid headquarters to detect and collect data about the status of the facilities, environmental conditions, and the presence of people, allowing it to save 5,766,731 kWh of energy. This represents savings of 12–15% compared with previous consumption and is equivalent to the annual energy use of about 1,900 households (BBVA 2019). (A minimal sketch of this kind of sensor-driven control appears at the end of Part III.)

2.3.4 Automation and Drones

Automation holds both risks and rewards for sustainable development, a clear case where Sustainability Impact Assessments can help evaluate impact. On the one hand, workers are likely to be displaced; on the other, automation in, inter alia, factories, transportation, health, and agriculture can both increase worker safety and allow workers to move to higher value-added work, if retrained and right-skilled, while also saving cost, energy, and time through optimization. Estimates for the USA suggest that such efficiency improvements may reduce carbon dioxide and harmful particulate emissions by up to 60% (Bösch et al. 2018). For instance, drone delivery by firms such as Amazon, DHL, Google, and UPS is expected to reduce corporate carbon footprints, with one study in Thailand finding that the ‘online shopping system using drone delivery is one of the most environmentally friendly transportation options throughout a wide range of scenarios’ (Koiwanit 2018a, b).

2.3.5 Augmented and Virtual Reality

Augmented and virtual reality (AR/VR) holds the potential to transform everything from education and healthcare to mining and tourism. The risk is that currently only a small segment of the world’s population is benefitting from AR/VR, prompting the need for targeted support of this particular SusTech solution (Bogdan-Martin 2021). For instance, firms like Proprio, ImmersiveTouch, TrueVision, and EchoPixel have been using AR/VR to improve the quality and accuracy of surgery (Daley 2021). In addition, lockdowns brought about by the COVID-19 pandemic have catapulted interest in learning through AR/VR, such as that being provided by Google, Microsoft, and ARVR Academy (Immersive Learning News 2020).
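As flagged in the Internet of things discussion above, the building-sensor case can be made concrete with a back-of-the-envelope check of the household-equivalence figure and a toy example of occupancy-driven control. The readings, thresholds, and control rule below are invented for illustration and are not BBVA's actual system.

```python
# Illustrative sketch only: a sanity check of the household-equivalence figure
# quoted above, plus a toy occupancy-driven HVAC rule. Readings and thresholds
# are invented for demonstration.

saved_kwh = 5_766_731
households = 1_900
print(f"Implied use per household: {saved_kwh / households:,.0f} kWh/year")  # ~3,035

def hvac_mode(occupied: bool, temp_c: float) -> str:
    """Toy rule: relax conditioning in unoccupied zones to save energy."""
    if not occupied:
        return "setback"  # widen the allowed temperature band
    if temp_c > 26:
        return "cool"
    if temp_c < 20:
        return "heat"
    return "hold"

# Toy zone readings of the kind a building sensor network might report
zones = [
    {"zone": "floor-3-east", "occupied": True, "temp_c": 27.1},
    {"zone": "floor-3-west", "occupied": False, "temp_c": 24.0},
    {"zone": "auditorium", "occupied": False, "temp_c": 19.2},
]
for z in zones:
    print(z["zone"], "->", hvac_mode(z["occupied"], z["temp_c"]))
```

The implied figure of roughly 3,000 kWh per household per year is within a plausible range for residential electricity use, which suggests the reported equivalence is internally consistent.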
3 Conclusion

SusTech solutions have the potential to transform our world. AI, along with other new technologies of the Fourth Industrial Revolution, can help achieve the SDGs. However, this will require key enablers. First, a new Sustainable Technology Board (STB) can provide a platform for cooperation on accelerating uptake and growing impact from SusTech, including AI. It can provide analysis and outreach to both increase and widen the potential benefits of new technologies for development – while mitigating and addressing any negative effects – including through standards and policy recommendations. Second, certain policies and measures can help address barriers to the adoption of SusTech solutions. This chapter sets out 11 horizontal actions policymakers might wish to consider to enable new technologies. The STB can further refine and add to these as part of its mandate. The STB can also facilitate legitimate policy experimentation to test new approaches to both stimulate and govern new technologies. Third, firms are already starting to adopt SusTech solutions, as demonstrated through concrete examples in AI as well as blockchain, IoT, automation, and AR/VR. However, this is happening in a piecemeal and disjointed way; the public and private sectors can work together to accelerate, deepen, and scale this trend. The G20 should act now. G20 economies stand to gain the most from SusTech solutions in the short term, as they have the absorptive capacity to integrate new technologies. Yet because of the public good nature of implementing SusTech solutions, cooperation will ‘increase the pie’, and the G20 has the critical mass to create effective cooperation mechanisms. If this happens, non-G20 economies will also benefit, both through knowledge spillovers and through new opportunities to plug into value chains, including new types of digital services exports, benefiting all economies.
References Acemoglu, Daron, Andrea Manera, and Pascual Restrepo. 2020. Taxes, Automation, and the Future of Labor. MIT Research Brief. October 2020. https://workofthefuture.mit.edu/wp-content/ uploads/2020/10/2020-Research-Brief-Acemoglu-Manera-Restrepo.pdf. Adams, Richard, Beth Kewell, and Glenn Parry. 2018. Blockchain for Good? Digital Ledger Technology and Sustainable Development Goals. In Handbook of Sustainability and Social Science Research, 127–140. Cham: Springer. https://link.springer.com/ chapter/10.1007/978-3-319-67122-2_7. Ahl, Amanda, Masaru Yarime, Mika Goto, Shauhrat Chopra, Manoj Kumar Nallapaneni, Kenji Tanaka, and Daishi Sagawa. 2020. Exploring Blockchain for the Energy Transition: Opportunities and Challenges Based on a Case Study in Japan. Renewable and Sustainable Energy Reviews 117: 109488. https://www.sciencedirect.com/science/article/abs/pii/ S1364032119306963.
Anadon, Laura Diaz, Gabriel Chan, Alicia G. Harley, Kira Matus, Suerie Moon, Sharmila L. Murthy, and William C. Clark. 2016. Making Technological Innovation Work for Sustainable Development. Proceedings of the National Academy of Sciences 113 (35): 9682–9690. https:// www.pnas.org/content/113/35/9682.short. Andrenelli, Andrea, Iza Lejárraga, Sébastien Miroudot, and Letizia Montinari. 2019. Micro- Evidence on Corporate Relationships in Global Value Chains: The Role of Trade, FDI and Strategic Partnerships. OECD Trade Policy Papers, No. 227. https://doi.org/10.1787/ f6225ffb-en. BBVA. 2019. Artificial Intelligence and Green Algorithms Contribute to Improved Energy Efficiency at BBVA Headquarters. 9 October 2019. https://www.bbva.com/en/artificial-intelligence-and- green-algorithms-contribute-to-improved-energy-efficiency-at-bbva-headquarters/ Bogdan-Martin, Doreen. 2021. What taking VR and AR Mainstream Means for Sustainable Development. World Economic Forum. 17 February 2021. https://www.weforum.org/ agenda/2021/02/virtual-reality-augmented-reality-sustainable-development/ Bösch, Patrick M., Felix Becker, Henrik Becker, and Kay W. Axhausen. 2018. Cost-Based Analysis of Autonomous Mobility Services. Transport Policy 64: 76–91. https://www.sciencedirect.com/science/article/pii/S0967070X17300811?via%3Dihub. Carbis Bay G7 Summit Communiqué. 2021. Our Shared Agenda for Global Action to Build Back Better. 13 June 2021. https://www.consilium.europa.eu/media/50361/carbis-bay-g7-summit- communique.pdf ClearMetal. 2017. Artificial Intelligence in Supply Chain: Solve the Data Problem First. 12 December 2017. https://www.clearmetal.com/news/2017-12-22-2018-outlook-the- year-global-shipping-embraces-ai Daley, Sam. 2021. 10 Companies Using VR and Augmented Reality to Improve Surgery. 22 March 2021. https://builtin.com/healthcare-technology/augmented-virtual-reality-surgery Enel. 2020. Waste Not, Want Not: The Smart Recycling Robot. 15 September 2020. https://www. enel.com/company/stories/articles/2020/09/artificial-intelligence-circular-economy European Commission. 2021. EU-US Launch Trade and Technology Council to Lead Values-Based Global Digital Transformation. 15 June 2021. https://ec.europa.eu/commission/presscorner/ detail/en/IP_21_2990 George, Sarah. 2020. World’s Largest Network of AI-Enabled Residential Batteries Doubles in Size. Edie Newsroom. 22 July 2020. https://www.edie.net/news/8/ World-s-largest-network-of-AI-enabled-residential-batteries-doubles-in-size/ Habánik, Jozef, Adriana Grenčíková, and Karol Krajčo. 2019. The Impact of New Technology on Sustainable Development. Engineering Economics 30 (1): 41–49. https://inzeko.ktu.lt/index. php/EE/article/view/20776. Immersive Learning News. 2020. 3 Companies Working on Education in VR and AR. 20 March, 2020. https://www.immersivelearning.news/2020/03/20/3-companies-working- on-education-in-vr-and-ar/ Financial Stability Board. About the FSB. https://www.fsb.org/about/#mandate G20 Global Smart Cities Alliance. “About the Alliance”. https://globalsmartcitiesalliance. org/?page_id=107. Herweijer, C., B. Combes, A. Gawel, A.M. Engtoft Larsen, M. Davies, J. Wrigley, and M. Donnelly. 2020. Unlocking technology for the Global Goals. World Economic Forum and PwC. January 2020. http://www3.weforum.org/docs/Unlocking_Technology_for_the_Global_Goals.pdf. ISO. Sustainability Standards From ISO. https://iso26000.info/sustainability-standards-from-iso/ Koiwanit, Jarotwan. 2018a. 
Analysis of Environmental Impacts of Drone Delivery on an Online Shopping System. Advances in Climate Change Research 9 (3): 201–207. https://www.sciencedirect.com/science/article/pii/S1674927818300261. Koiwanit, J. 2018b. Analysis of Environmental Impacts of Drone Delivery on an Online Shopping System. Advances in Climate Change Research 9 (3): 201–207. https://www.sciencedirect. com/science/article/pii/S1674927818300261. Nguyen, Kaley. 2020. ClearMetal's Data-First Approach & AI Adoption: How It Matters? 21 January 2020. EnvZone. https://www.envzone.com/logistics-and-supply-chain/ clearmetals-data-first-approach-ai-adoption-how-it-matters/.
AI as a SusTech Solution: Enabling AI and Other 4IR Technologies to Drive Sustainable… 201 Open Data Institute. 2019. Data Trusts: Lessons From Three Pilots (Report). 15 April 2019. https:// theodi.org/article/odi-data-trusts-report/ PwC. 2017. Sizing the Prize: PwC’s Global Artificial Intelligence Study: Exploiting the AI Revolution. https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial- intelligence-study.html Ross, Emily, Bill Schaninger, and Emily Seng Yue. 2018. Right-Skilling For Your Future Workforce. McKinsey & Company. 20 August 2018. https://www.mckinsey.com/business-functions/ organization/our-insights/the-organization-blog/right-skilling-for-your-future-workforce Schmidt, M., D. Giovannucci, D. Palekhov, and B. Hansmann, eds. 2019. Sustainable Global Value Chains. Springer. https://www.springer.com/gp/book/9783319148762. Snower, Dennis, Paul Twomey, and Maria Farrell. 2020. Revisiting Digital Governance. Social Macroeconomics Working Paper Series (SM-WP-2020-003). Oxford University. September 2020. https://www.bsg.ox.ac.uk/sites/default/files/2020-10/SM-WP-2020-003%20 Revisiting%20digital%20governance_0.pdf Sotelo, Jimena, and Ziyang Fan. 2020. Mapping TradeTech: Trade in the Fourth Industrial Revolution. World Economic Forum. December 2020. http://www3.weforum.org/docs/WEF_ Mapping_TradeTech_2020.pdf Stephenson, Matthew. 2020. Digital FDI: Policies, Regulations and Measures to Attract FDI in the Digital Economy. World Economic Forum White Paper. September 2020. https://www. weforum.org/whitepapers/digital-fdi-policies-regulations-and-measures-to-attract-fdi-in-the- digital-economy United Nations Conference on Trade and Development (UNCTAD). 2017. World Investment Report 2017: Investment and the Digital Economy. https://investmentpolicy.unctad.org/publications/174/ world-investment-report-2017%2D%2D-investment-and-the-digital-economy United States Nuclear Regulatory Commission. 2021. Performance-Based Regulation. 9 March, 2021. https://www.nrc.gov/reading-rm/basic-ref/glossary/performance-based-regulation.html World Bank Group. 2016. World Development Report 2016: Digital Dividends. World Bank Publications. https://www.worldbank.org/en/publication/wdr2016. _______. 2013. Introducing a risk-based approach to regulate businesses How to build a risk matrix to classify enterprises or activities. Investment Climate 90754, September 2013. https:// documents1.worldbank.org/curated/en/102431468152704305/pdf/907540BRI0Box30d0appr oach0Sept02013.pdf _______. Global Indicators of Regulatory Governance: Worldwide Practices of Regulatory Impact Assessments. http://documents1.worldbank.org/curated/en/905611520284525814/ Global-Indicators-of-Regulatory-Governance-Worldwide-Practices-of-Regulatory-Impact- Assessments.pdf World Economic Forum. 2018. Internet of Things Guidelines for Sustainability. January 2018, p.3. http://www3.weforum.org/docs/IoTGuidelinesforSustainability.pdf. _______. 2020. Data Free Flow with Trust (DFFT): Paths Towards Free and Trusted Data Flows. May 2020. http://www3.weforum.org/docs/WEF_Paths_Towards_Free_and_Trusted_ Data%20_Flows_2020.pdf World Economic Forum and McKinsey. 2019. Data Collaboration for the Common Good Enabling Trust and Innovation Through Public-Private Partnerships. April 2019. http://www3.weforum. org/docs/WEF_Data_Collaboration_for_the_Common_Good.pdf World Wide Web Foundation. 2018. Open Data Barometer. September 2018. Yarime, Masaru. 2020. 
Governing Data-Driven Innovation for Sustainability: Opportunities and Challenges of Regulatory Sandboxes for Smart Cities. in Artificial Intelligence for Social Good, Published by Keio University and the Association of Pacific Rim Universities. https:// apru.org/wp-content/uploads/2020/09/layout_v3_web_page.pdf Zafrir, Nadav. 2020. Beyond Trust: Why We Need A Paradigm Shift in Data-Sharing. World Economic Forum. 17 January 2020. https://www.weforum.org/agenda/2020/01/ new-paradigm-data-sharing Zarkadakis, George. 2020. ‘Data Trusts’ Could Be the Key to Better AI. Harvard Business Review. 10 November 2020. https://hbr.org/2020/11/data-trusts-could-be-the-key-to-better-ai
AI for Sustainable Finance: Governance Mechanisms for Institutional and Societal Approaches Sep Pashang and Olaf Weber
Abstract Artificial intelligence (AI) for sustainable finance has been increasingly employed over the past several years to address the sustainable development goals (SDGs). Two major approaches have emerged: institutional and societal AI for sustainable finance. Broadly described, institutional AI for sustainable finance is used for activities such as environmental, social and governance (ESG) investing, while societal AI for sustainable finance is used to support underbanked and unbanked individuals through financial inclusion initiatives. Despite the growing reliance on such digital tools, particularly during the coronavirus disease 2019 (COVID-19) pandemic, governance mechanisms and regulatory frameworks remain fragmented and underutilized or inhibit progress toward the 17 UN SDGs. While major proposals and reports were released by standard-setting and regulatory bodies leading up to 2020, the COVID-19 pandemic indeed caused major setbacks to adoption and implementation, which in turn have also resulted in inconclusive data and lessons learned. As the global community begins to navigate out of the pandemic, policy makers, through multilateral and cross-sector agreements, are looking to renew governance mechanisms that mitigate new and pre-existing risks while cultivating sustainability and facilitating innovation. Keywords Artificial intelligence · Sustainable finance · Fintech · ESG · Financial inclusion · Governance
S. Pashang (*) · O. Weber School of Environment, Resources and Sustainability, University of Waterloo, Waterloo, Canada e-mail: [email protected]; [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_12
1 Introduction

Artificial intelligence (AI) in financial services has become integrated into our global social fabric, particularly with the advent of the COVID-19 pandemic in 2020. While such financial technology (fintech) applications have undoubtedly changed the way money is spent, borrowed, invested and saved at various junctures of the financial system, they are increasingly being used to address sustainable development, which will be referred to in this paper as “AI for sustainable finance”. Fintech emerged after the 2008 global financial crisis (GFC) and has rapidly evolved into a commercial and mainstream service offering since 2018. The Financial Stability Board (FSB) defines fintech as “technologically enabled innovation in financial services that could result in new business models, applications, processes or products with an associated material effect on financial markets and institutions and the provision of financial services” (FSB 2021). Put simply, fintech includes digital innovations used for financial services. While the evolution of fintech began with start-ups addressing intermediation gaps left by the formal banking sector (Aaron et al. 2017), today start-ups, challenger digital banks, government agencies and incumbent banks use various subsets of AI (i.e., machine learning, natural language processing, deep learning) to also address socially responsible investing (SRI) and financial inclusion, in hopes of advancing the 17 SDGs approved by the United Nations. Despite the growing governance action and literature regarding AI and how it relates to formal financial systems (globally and regionally), governance mechanisms for AI for sustainable finance urgently need to be identified, as studies are rare and the lack of agreed measures could contribute to fragmentation in future policy outcomes. To bring together a coherent conceptualization of AI for sustainable finance, this chapter develops a definition: AI that embeds social and environmental inclusion, ethics and collaboration into its design, development and implementation to accelerate sustainable development. Only in the past few years have cross-sector partnerships and multilateral discussions between central banks, standard-setting bodies and policy makers inspired governance frameworks and recommendations (e.g. the Bali Fintech Agenda and the Maya Declaration on Financial Inclusion) that advance people and planet, and not solely profit. On the one hand, AI for sustainable finance has been shown to unlock or enable efficiencies for various actors and industries; on the other, it has been shown to inhibit progress toward sustainable development by introducing new unintended consequences or reinforcing existing ones. Without governance and regulatory frameworks in place, such innovations may threaten the viability of modern financial systems and the livelihoods of the actors that contribute to them (Castilla-Rubio et al. 2016). With this in mind, the ongoing debate surrounding sustainable development challenges (e.g. energy consumption, e-waste, privacy and predatory issues, gender bias and racialization) related to regulations, ethics and particularly governance is mounting. In response, intergovernmental organizations, central banks and
regulatory bodies have had to carefully, yet expeditiously, adapt to the evolving ecosystem—mitigating risk through robust regulatory measures while cultivating sustainability and facilitating innovation. To unpack these implications, this chapter addresses AI for sustainable finance using the following structure. The first section examines the SDGs and how AI for sustainable finance can achieve them. Technology utilization to improve social and environmental outcomes during the Fourth Industrial Revolution is well under way. In only a few years, AI for sustainable finance has evolved from historical data analysis to real- time information and recently to predictive modelling (International Telecommunication Union [ITU] 2018). The second section considers two vantage points related to AI for sustainable finance. The first observes AI for sustainable finance at the institutional level, in the context of ESG investing. Specifically, in developed markets, firms use subsets of AI and big data (e.g. stock prices, ESG risk data, public sentiment) to provide investors with sustainability insights. The second vantage point relates to AI for sustainable finance at the societal level, in the context of digital financial inclusion. Emerging and frontier market actors have integrated adjacent industries to bridge the gap between unbanked (and underbanked) populations and the financial system, serving vulnerable individuals and small businesses that historically have not had equitable access to financial and/or technology resources and literacy (Cantú and Ulloa 2020). The third section discusses the COVID-19 pandemic and the unique challenges and opportunities it has presented in the context of AI for sustainable finance. For instance, in late March 2020, the Bank of Canada (BoC) suggested that “During this time of heightened public health measures intended to limit the transmission of COVID-19, some consumers and businesses are choosing not to use cash to limit potential exposure” (Carmichael 2020, para. 6). Current trends indicate an increased acceptance of digital tools and digital identity, and consideration of digital currencies (Carmichael 2020; Cheung n.d.). As nations and institutions look to AI for sustainable finance to address pandemic-related circumstances, the SDGs could serve as a guidepost to accelerate innovation while confronting practices that may be exclusionary or pose unintentional consequences.
2 AI and the SDGs This paper adopts the following definition of sustainable development by David Griggs et al. (2013, 2): “Development that meets the needs of the present while safeguarding earth’s life-support system, on which the welfare of current and future generations depends”. In 2015, the United Nations introduced 17 SDGs as a framework to address global challenges such as poverty, climate change and numerous inequities by 2030 (United Nations 2019a). When the SDGs were agreed to, it was stated that data and technology could unlock the potential not only to monitor progress toward sustainable development as once traditionally used but also, more
importantly, to actively contribute through evidence-based policies and programs (UN Global Pulse and GSMA 2017). This was followed in 2016 by the likes of the Group of Twenty (G20), which included sustainable digital finance as one of its 2030 work streams, and the United Nations Environment Programme, which published recommendations in its Fintech and Sustainable Development: Assessing the Implications report (Macchiavello and Siri 2020; Blakstad and Allen 2018). Meeting the SDGs will require action on several technological fronts, including better understanding the potential of digital innovations. For AI for sustainable finance to support sustainable development, it must focus not only on the perceived benefits as imagined by those who develop them but also how the technologies (and associated benefits) are accessible, are useful and can be integrated into local contexts that vary economically, politically and culturally (especially by the poorest or most vulnerable) (Arthur 2009). On the one hand, AI for sustainable finance has been utilized to improve the quality of life for developing nations and enable greater access to basic human amenities for their populations. On the other hand, AI for sustainable finance is often not regulated by conventional financial regulators and might have negative effects on financial markets or exclude those without access. Historically, innovation has been promoted through public and private mechanisms, operated only by a few developed countries and international bodies (Nelson 1993). These efforts have succeeded, to some degree, in fulfilling global sustainability needs but have fallen short of advancing sustainable development (Juma and Yee-Cheong 2005; InterAcademy Council 2004; Harvard Kennedy School n.d.). Addressing these gaps requires effective cross-sector partnerships between municipal, federal and international actors and input from end users (recipients and local stakeholders) contributing to the process. Within the global innovation system, the difficulties of utilizing technological innovation for sustainable development have been addressed in a variety of ways, such as through financing, formation of research networks, setting priorities, international aid and trade agreements and action research feedback loops connecting end users and innovators (Harvard Kennedy School n.d.). To some degree, these interventions have altered institutional norms and configurations over the past few years, yet they are poorly described in the literature. Little is known beyond their respective fields, making it difficult to contribute to enhancing AI for sustainable finance in practice and scholarly discourse.
3 The Promise of AI for Sustainable Finance 3.1 A Brief History The GFC of 2008 and its aftermath caused enormous turmoil and led to an extended period of low growth and instability across the international political economy (Castilla-Rubio et al. 2016). This crisis originated from exorbitant risk-taking by US banks on subprime mortgages, which burst the housing bubble, triggered the collapse of the banking sector and led to an unprecedented “credit crunch” around
the world (Flammer and Ioannou 2020). As a result, numerous governance and regulatory measures introduced by the G20 were implemented to reshape the global financial system. After the devastating impacts the GFC had on people and planet, investors and stakeholders turned to sustainable finance (e.g. ESG investing) in efforts to interrogate nonfinancial criteria related to climate change, environmental disasters and poor corporate governance, and the investment risks each of these posed (Townsend 2020). At the same time, financial inclusion initiatives were established by G20 leaders (e.g. the Financial Inclusion Experts Group, Global Partnership for Financial Inclusion [GPFI]); central banks of emerging markets (e.g. the Alliance for Financial Inclusion and its release of the Maya Declaration on Financial Inclusion); and the United Nations (e.g. the Task Force on Digital Financing of the Sustainable Development Goals), to name a few (Arner et al. 2020). While AI for sustainable finance is relatively new in the literature and in practice, technology utilization to improve social and environmental outcomes is not. The term information and communications technology (ICT) first appeared in the literature in the 1980s (for example, Cornish 1982; Melody and Mansell 1986; Nooteboom 1992) to describe technologies such as telephone networks, computer networks, television and radio. In the sustainable development field, the most widely used reference to technology is “ICT for development”, a term that was also used in 2000 for the UN Millennium Development Goals (ITU 2015). With advancements and variance in digital innovations, the term ICT no longer accurately describes the field as it once did and thus must be revisited. The authors posit that “ICT for good” serves as an umbrella term for newer fields such as AI for good (Clopath et al. 2019; Rolnick et al. 2019; Taddeo and Floridi 2018); fintech for good (Arner et al. 2020; Alexander et al. 2017); blockchain for good (Sylvester 2019; Kewell et al. 2017; Aganaba-Jeanty et al. 2017); and big data for good (Marsden and Wilkinson 2018; Initiative for Global Environmental Leadership 2014; Maaroof 2015), in both academic and industry journals. To this end, stakeholders must be cautiously optimistic about advancing AI’s remarkable depth, power and speed in their efforts to accelerate sustainable development.
3.2 Recent Governance Responses Cross-sector partnerships and multilateral efforts by bodies such as the FSB, Bank for International Settlements, the G20, Organisation for Economic Co-operation and Development (OECD) and numerous UN agencies have made some progress. Figure 1 depicts a process recently introduced by the World Bank Group, offering guidance on regulatory approaches toward fintech (World Bank Group 2020). Despite such efforts, global adoption and implementation to integrate such frameworks are largely missing (Fay 2019). This trend is also evident across developed markets such as Canada and other G20 members (e.g. China, the European Union, India and the United States), where regulatory bodies are still working to investigate and implement modifications.
Fig. 1 Process to identify regulatory approaches and policy responses toward fintech. (Source: World Bank Group 2020)
In 2018, the World Bank and the International Monetary Fund (IMF) launched the Bali Fintech Agenda paper, which proposed a framework on high-level fintech issues that countries should consider in their domestic policy discussions (World Bank Group and IMF 2018). The report presented 12 policy proposals that cover issues related to enabling fintech, ensuring financial sector resilience, addressing risks, financial inclusion and promoting international cooperation. While global cross-sector agreements such as the Bali Fintech Agenda have offered blueprints for AI for sustainable finance, it is not clear where member nations stand relative to these proposals presently. The pervasiveness of the COVID-19 pandemic has since caused reprioritization and major setbacks to such governance implementations, which in turn have resulted in inconclusive data and lessons learned. The last known review of country responses was carried out by the World Bank and IMF in 2019 (IMF 2019). Findings from the report included three major themes. First, common in nearly all regions are critical infrastructural and regulatory gaps (ibid.). Second, monitoring of entities and activities is still confined within conventional regulatory parameters (ibid.). Third, legal frameworks to address issues are widely missing (ibid.). In its regional overview, the report highlighted the following: Africa has experienced rapid growth of mobile money in a push toward increased financial inclusion, but differences in regulatory approaches are noticeable and reactive to the pace of change (ibid.). East Asia has made significant advances in all major aspects of fintech. To keep up with this pace, regulators have established fintech units and regulatory “sandboxes” to respond to various risks (e.g. consumer and investor protection concerns, financial stability and integrity) (ibid.). Entities utilize fintech
sandboxes to test solutions in controlled environments to expose potential risks and benefits. Figure 2 shows the various phases of a fintech sandbox life cycle (World Bank Group 2020). The European market is also rapidly growing but is distributed unevenly. While the European Union brought two major regulations into force in 2018 (the General Data Protection Regulation and the Payment Services Directive 2), their implications are yet to be seen. In West Asia, Central Asia and North Africa, adoption and progress are gradual, with concentration of activities only in a few countries and sectors. Regulatory responses vary widely across the Americas, with Latin American and Caribbean nations still trailing behind Canada and the United States. While some major AI for sustainable finance advancements have been made in Canada, very little has followed with regard to governance and policy, and agreed-upon frameworks around their functions are ad hoc, incomplete and insufficient. In Canada, there is no single federal or provincial regulatory body that has jurisdiction
Fig. 2 A typical sandbox life cycle. (Source: World Bank Group 2020. Note: AML/CFT anti-money laundering/combatting the financing of terrorism)
over such firms. Instead, regulation depends on the types of services being offered by such firms (Global Legal Group 2021). This notion of light-touch regulation has some concerned about bad behaviour by firms, renewing fears of a GFC-like scenario (Fay 2019). Canadian regulators such as the Department of Finance, the Competition Bureau and some provincial agencies have made attempts at developing a fintech regulatory framework (Global Legal Group 2021). The Ontario Securities Commission, the Autorité des marchés financiers in Quebec and the Canadian Securities Administrators are currently utilizing fintech sandboxes to experiment with various solutions (Canadian Bankers Association 2018). Separately, the federal government, in its 2018 Budget Implementation Act (Bill C-74), introduced changes to legislation (e.g. the Bank Act, the Trust and Loan Companies Act and the Insurance Companies Act) in favour of fintech to provide financial institutions with new abilities (ibid.). What follows is an account of how these factors correspond to AI for sustainable finance in institutional and societal scenarios.
4 Institutional and Societal Approaches Three major AI for sustainable finance approaches have emerged related to achieving the SDGs. The first is at the institutional level and involves redirecting the allocation of existing financial resources toward activities such as ESG investing. The second is at the societal level and includes the expansion of financial resources through financial inclusion to support the SDGs. The third is at the regulatory level and uses technology (regulatory technology or “regtech”) to (re)design enhanced financial governance systems (Arner et al. 2020). The following explores the first two approaches, which are central to the focus of this chapter.
4.1 ESG Investing

It is well documented that SRI can support climate action (e.g. Eccles et al. 2014; Geobey et al. 2012; Weber and Feltmate 2016). The thirteenth SDG aims to “take urgent action to combat climate change and its impacts” by integrating measures into national policies and institutional capacity building (UN 2019a). This section explores whether and how AI for sustainable finance could be used by ESG data firms that provide investors with nonfinancial performance information. Sustainable finance and AI are both major policy areas concerning stakeholders across sectors, exemplified by numerous initiatives by researchers and policy makers across G20 member states, the United Nations and the European Commission (Arner et al. 2020). Despite this, a paucity of work still exists on how they interact and on whether additional governance and regulatory considerations are necessary. This was the case with the European Commission’s Sustainable Finance Action Plan, which made no mention of AI or fintech (Arner et al. 2020). Further, despite the
growing availability of computational resources within financial institutions and the emergence of fintech more than a decade ago, existing solutions have only recently evolved to correspond with the growing interest in SRI and the abundance of big data related to ESG (Monteleoni et al. 2013; Weber and Feltmate 2016). Pre-existing complexities in the ESG domain have, for some time, prompted stakeholders to demand alternative ESG data. For instance, Robert G. Eccles and Judith Stroehle (2018) stated that despite the growing appetite for data and empirical evidence showing a correlation between ESG performance and financial outcomes, the field remains unorganized and without universally agreed-upon standards (Eccles et al. 2014; Khan et al. 2016). With more than 100 data providers (e.g. Vigeo Eiris, KLD, MSCI, ISS-oekom, Sustainalytics, Morningstar) in the ESG ecosystem, their incomplete efforts to standardize metrics, indicators and methods have created a variance in ratings and recommendations that confuse and misinform investors and undermine the soundness of ESG disclosure (Eccles and Stroehle 2018; GISR 2018). Further, conventional ESG providers struggle in three major ways: first, ESG data is mainly sourced from company disclosure materials; second, ESG scores and data are typically a year old; and third, there are discrepancies and a lack of standardization among data providers (Malinak et al. 2018; Folger-Laronde et al. 2020a, b). Thus, some investors have turned to AI-driven ESG firms that consume big data and apply subsets of AI such as machine learning and natural language processing (NLP). Such tools are currently being used by asset managers, asset owners and quantitative managers who seek real-time alternative ESG data and analytics to support their clients’ investing needs. Figure 3 shows venture capital funding in institutional fintech since 2010 (Mastercard 2020). In 2013, TruValue Labs, one of the first AI-driven ESG data providers, was founded (TruValue Labs 2020). TruValue Labs analyses public sentiment from alternative sources such as news media, think tanks, social media, non-governmental organizations (NGOs) and academic journals related to company ESG performance (Serafeim 2020). Specifically, TruValue Labs uses AI to analyse unstructured big data from more than 100,000 sources, such as analyst reports, news and social media and government sources, and incorporates the Sustainability Accounting Standards Board’s 30 materiality classifications to generate scores (0–100) (ibid.). It is noted that transparency and validation are provided to the user by enabling them to track the source of information that informs the sentiment analysis. For instance, a drilling company could receive positive sentiment following news of their investment to improve waste and hazardous materials management, materials sourcing and product safety. Facebook, on the other hand, could receive negative sentiment due to exposure to data privacy issues, concerns about regulatory pressure and user rights (ibid.). It has been reported that TruValue Labs’s sentiment analysis can also codify the degrees of positivity or negativity, instead of just the conventional binary approach: positive versus negative sentiment. According to Serafeim (ibid.), AI will make attempts to assign a more negative score to an event such as an oil spill that harms several people or communities and a less negative score to an event that causes minor injuries to one person.
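The mechanics behind such AI-driven ESG signals can be illustrated with a minimal sketch: score the sentiment of incoming documents, group them by materiality category, and map the average onto a 0–100 scale. The tiny lexicon, example headlines and category names below are invented for illustration; they are not TruValue Labs' methodology, which in practice relies on large NLP models rather than word counts.

```python
# Illustrative sketch only: aggregating document-level sentiment into a 0-100
# ESG signal per materiality category. The lexicon, headlines and category
# labels are invented; this is not any provider's actual methodology.
POSITIVE = {"improve", "invest", "reduce", "safety", "renewable"}
NEGATIVE = {"spill", "breach", "fine", "lawsuit", "violation"}

def doc_sentiment(text: str) -> float:
    """Crude polarity in [-1, 1] from word counts (real systems use NLP models)."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def esg_score(docs: list) -> dict:
    """Map average sentiment per materiality category onto a 0-100 scale."""
    by_cat = {}
    for category, text in docs:
        by_cat.setdefault(category, []).append(doc_sentiment(text))
    return {c: round(50 * (1 + sum(s) / len(s)), 1) for c, s in by_cat.items()}

news = [
    ("waste_management", "Company to invest in programme to reduce hazardous waste"),
    ("data_privacy", "Regulator opens probe into data breach and possible fine"),
    ("data_privacy", "Lawsuit filed over alleged privacy violation"),
]
print(esg_score(news))
# -> {'waste_management': 100.0, 'data_privacy': 0.0} on this toy input
```

Even this toy version shows why degrees of positivity and negativity matter: the final score depends on how strongly each event is weighted, which is exactly where questions of transparency and auditability, discussed next, arise.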
Fig. 3 Venture capital activity in fintech and sustainability. (Source: Mastercard 2020. www.mastercard.com/news/media/bz5nmfg4/mastercard_start_path__pitchbook_fintech_for_good_report.pdf. Note: *As of October 28, 2020)
4.1.1 Related Governance Challenges

With the rising demand for AI for Good offerings, governance mechanisms must confront the duality of what is considered “good”. While AI-driven ESG solutions can be useful to investors when evaluating a firm’s sustainability activities, it is not clear whether the algorithms that power such solutions have considered the ethics, inclusion and environmental factors that could potentially compromise progress toward the SDGs. A recent study (Vinuesa et al. 2020) published in Nature Communications found that while AI could act as an enabler for 134 SDG targets, it could inhibit progress on 59. The study indicated that failure to enforce governance and regulatory oversight for AI for sustainable development could result in negative societal and environmental implications (ibid.). From an ethics and inclusion perspective, key aspects that require governance attention include transparency, equity, auditability and accountability. For instance, different algorithms that process the same raw data may ultimately produce different outcomes, which may have discriminatory, exclusionary and exploitative implications (Ehrentraud et al. 2020). A recent study surveyed numerous jurisdictions and found that none enforced any regulatory requirements for financial institutions that employ AI (ibid.). Another growing subdomain of AI ethics is sustainable AI,
which confronts whether AI itself is environmentally sustainable given the computing power and energy consumption required to train AI systems (van Wynsberghe 2021). For instance, Strubell et al. (2019) showed that training a single deep-learning NLP model could produce around 600,000 pounds of carbon dioxide (CO2), roughly as much as five cars over their entire lifespans (van Wynsberghe 2021). Thus, policy makers must continue refining regulatory and legislative standards that address these ethical considerations. To address some of these challenges, nations such as Singapore have released frameworks to promote AI fairness, ethics, accountability and transparency, while the Netherlands promotes soundness, accountability, fairness, ethics, skills and transparency (Ehrentraud et al. 2020). The Pan-Canadian AI Strategy, which the federal government tasked the Canadian Institute for Advanced Research with leading in 2017, was the world’s first national AI strategy and includes a work stream titled “AI & Society” (Canadian Institute for Advanced Research n.d.). Other nations, particularly in the G20, also have efforts under way that look to expose and address negative implications for society (Ehrentraud et al. 2020).
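To make the scale of such training-emission estimates concrete, the following back-of-the-envelope sketch shows how they are typically built up from power draw, training time and grid carbon intensity. Every number in it is an assumed illustrative value, not a figure reported by Strubell et al. (2019) or any other source cited here.

```python
# Back-of-the-envelope sketch of how training-emission estimates are built up.
# Every number below (GPU power, hours, PUE, grid carbon intensity) is an
# assumed illustrative value, not a figure from the cited studies.
gpus = 8                    # number of accelerators (assumed)
gpu_power_kw = 0.3          # average draw per accelerator, kW (assumed)
hours = 24 * 30             # one month of training (assumed)
pue = 1.5                   # data-centre power usage effectiveness (assumed)
grid_kg_co2_per_kwh = 0.4   # grid carbon intensity, kg CO2 per kWh (assumed)

energy_kwh = gpus * gpu_power_kw * hours * pue
co2_kg = energy_kwh * grid_kg_co2_per_kwh
print(f"Energy: {energy_kwh:,.0f} kWh, emissions: {co2_kg:,.0f} kg CO2")
# -> Energy: 2,592 kWh, emissions: 1,037 kg CO2 for this modest assumed run.
# Large models trained with hyperparameter or architecture search multiply these
# figures by orders of magnitude, which is how six-figure pound estimates arise.
```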
4.2 Financial Inclusion The 2030 Agenda for Sustainable Development recognizes that poverty is the greatest global challenge and its eradication is a requirement for sustainable development. The first SDG aims to “end poverty in all its forms everywhere”1 and pushes for robust protection systems and spending on primary services to help individuals escape poverty. This section explores whether and how AI could help promote an inclusive digital economy that provides financial services to the unbanked (those who have no bank account or transactions through a mobile money provider) and underserved individuals living in poverty. Around 700 million people today live on less than $2 per day and 1.3 billion people are multidimensionally poor (United Nations Development Programme 2019). Some priority areas and associated targets include reducing poverty by 50% (by 2030), improving access to sustainable livelihoods and entrepreneurial opportunities, empowering people living in poverty with support systems and addressing the disproportionate impact of poverty on women (United Nations 2019b). While extreme poverty has declined, this trend has slowed, and the United Nations warns that we are not on track to achieve its 2030 global target (less than 3% living in extreme poverty) (ibid.). The COVID-19 pandemic has further exacerbated circumstances for the most vulnerable. Since 2020, the following trends have been observed: global poverty (SDG 1)2 has increased for the first time in decades; inequalities and dangers that women and girls face have increased
1 See https://sdgs.un.org/goals/goal1
2 Ibid.
(SDG 5)3; the world is facing the worst economic recession since the great recession (SDG 8)4; and investment in fossil fuels remains higher than in climate action (SDG 13)5 (United Nations n.d.). Financial inclusion is one of the UN Global Compact categories in which the financial sector can play a role in addressing the SDGs, with about 1.7 billion people remaining unbanked (Demirgüç-Kunt et al. 2017; Weber 2018). The United Nations states that to eradicate poverty by 2030, “affordable technological solutions have to be developed and disseminated widely” (United Nations Development Programme 2019, para. 2). The role of technology, concerning financial inclusion, has been discussed by stakeholders after the onset of the GFC. In 2008, policy makers established the Alliance for Financial Inclusion while G20 leaders endorsed a Financial Inclusion Action Plan at the Seoul Summit in 2010 and created the GPFI (Gabor and Brooks 2017). In 2015, the United Nations emphasized financial inclusion in multiple SDGs (numbers 1, 5 and 10) and noted the value of technology in accelerating them (Greenvest and United Nations Environment Programme 2017). In 2018, a collaboration between the IMF and World Bank gave rise to the Bali Fintech Agenda, which established a broad road map to appropriately implementing digital financial inclusion (Sahay et al. 2020). In the Global South (e.g. China, Ghana, India, Kenya, Myanmar, Peru and Uganda), AI for sustainable finance has also been advanced by governments, mobile money networks and NGOs to help address the needs of individuals who are generally unbanked or experiencing poverty. Offerings include income and liquidity support, filing tax returns, flexible loan repayments, lower transaction costs and increased transaction limits, which are helping shift away from conventional financial service practices (ibid.). AI for sustainable finance firms such as CreditVidya6 and Zest Finance use alternative data such as “digital fingerprinting” captured from an individual’s device, browser and social media activity to predict creditworthiness (Zest AI 2020). In Kenya, M-Shwari (Bharadwaj and Suri 2020) uses a mobile money system (M-Pesa) to incorporate phone history in its assessment of credit risk. With 20% of adults (37 million users) in Kenya actively using this service, M-Shwari is seen by some as a financial inclusion success story (Bharadwaj and Suri 2020; Cantú and Ulloa 2020). The service incorporates predictive algorithms and AI to analyse social and telecom data to assess creditworthiness. Within a few minutes, a credit score is produced, offering the terms of the loan (Bharadwaj and Suri 2020). On a macro level, insights about the economic health and resilience of a community can also be extrapolated from the use of mobile financial services, monthly airtime top-up patterns and the purchase of value-added services (United Nations Development Programme 2019). Despite their potential to contribute to the
3 See https://sdgs.un.org/goals/goal5
4 See https://sdgs.un.org/goals/goal8
5 See https://sdgs.un.org/goals/goal13
6 See https://creditvidya.com/how-it-works
SDGs, these examples (such as institutional ones noted above) must be approached with great caution due to risks related to data security, accountability and bias. With regard to global remittances, recorded annual flows in 2018 to low- and middle-income nations reached US$529 billion (a 9.6% increase since 2017) (World Bank 2019). Conventional transactions pose barriers such as high fees, lack of traceability and beneficiaries who lack formal identification or bank accounts (ibid.). To address this, AI for sustainable finance related to remittance transactions may remove such constraints by ensuring transparency of inflows, directing remittances toward socially responsible purchases, offering cheaper transaction fees (a reduction from ten to three percent), securing the privacy of individuals and creating digital IDs that can be used for other money transfers (United Nations Development Programme 2018). AI for sustainable finance is also being used to provide unbanked individuals with insurance rates for farming, credit scores and loans through consent-based alternative data sources such as digital (email, social media and mobile transactions), behavioural and psychometrics. Despite much progress, governance mechanisms are necessary to ensure such initiatives address inclusion, ethics and collaboration in their design, development and implementation. 4.2.1 Related Governance Challenges While anecdotal indications seem to show great potential for AI for sustainable finance when considering financial inclusion, risks and unintended consequences have been hard to quantify and are loosely studied. In order for AI to best serve financial inclusion, “exclusive inclusion” must be addressed. Broadly defined, exclusive inclusion is the deliberate or unintentional practice of “including” or aiding particular groups of people while knowingly or unknowingly excluding others. The concept can also refer to providing services that (from the perspective of the provider) seem to address recipients’ needs while overlooking or ignoring their other interconnected needs. Often, such practices worsen pre-existing risks or trigger new ones. For instance, AI for sustainable finance has the potential to close gender gaps and ensure women (currently one billion are unbanked) are not left behind; however, special attention needs to be paid to pre-existing barriers for women such as access to technology (smartphones and internet access), cultural and social norms and digital and financial literacy (D’Silva et al. 2019; Sahay et al. 2020). Undocumented individuals (particularly women) could face even more risks and complexities. This is important given AI for sustainable finance is often the only viable option for many refugees who are seeking loans. Further, as the spread of credit has increased from Global North countries to such individuals, it has resulted in uneven distribution of credit access and livelihood support, since some (e.g. entrepreneurs) are deemed worthy of loans while others experience further exclusion (Bhagat and Roderick 2020). Critics of such approaches suggest that such options are an extension of financialization and situate marginalized people as recipients of unregulated financial services through technology (Gabor and Brooks 2017).
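The alternative-data credit scoring described above (M-Shwari's use of telecom data, CreditVidya's and Zest's digital fingerprinting) can be illustrated with a minimal sketch. The features, toy data and model below are invented for demonstration; real providers' pipelines are proprietary, and exactly the consent, bias and exclusion concerns raised in this subsection would apply to any production version.

```python
# Illustrative sketch only: scoring creditworthiness from alternative data such as
# mobile-money and airtime patterns. Features, data and model are invented; this
# is not any provider's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Toy features: [monthly mobile-money txns, avg airtime top-up, months of history]
X = np.column_stack([
    rng.poisson(20, n),
    rng.gamma(2.0, 3.0, n),
    rng.integers(1, 60, n),
])
# Toy label: repaid a previous micro-loan (1) or defaulted (0), loosely tied to usage history
y = (0.03 * X[:, 0] + 0.05 * X[:, 2] + rng.normal(0, 1, n) > 2.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[25, 4.5, 18]])
prob_repay = model.predict_proba(applicant)[0, 1]
credit_score = int(300 + 550 * prob_repay)  # map to a familiar 300-850 style range
print(f"Estimated repayment probability: {prob_repay:.2f}, score: {credit_score}")
```

A governance-minded deployment would pair such a model with audits for disparate impact across gender and other attributes, in line with the "exclusive inclusion" concerns discussed above.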
To cultivate dignity, agency and benefit to underbanked and unbanked individuals, efforts should be made by AI for sustainable finance firms to include recipients in the design, development, implementation and feedback phases. The principle of sankofa, derived from the Akan people of Ghana, illustrates this mindset (Temple 2010). It states that to collectively shape and inform the future, we must look back and recognize the past, or anything about us, done without us, does nothing for us (ibid.). This concept underscores the importance of clear and effective regulatory oversight and governance frameworks and agreed-upon metrics for monitoring. While some work has been carried out by the likes of the OECD, the G20 and the ITU, these efforts must be broadened to reflect the diversity of global contexts to generate buy-in and participation by stakeholders (UN Secretary-General’s High- level Panel on Digital Cooperation 2019).
5 Navigating Through a Pandemic

5.1 COVID-19: A "Natural Experiment"

The COVID-19 pandemic is the most devastating and pervasive challenge in modern history. This global emergency has been classified as a "mega-crisis", or a system consisting of numerous crises, each with interconnected parts, drivers and consequences (Pashang 2020). Almost 2 years have gone by since cases of COVID-19 first appeared in Wuhan, China. Despite recent vaccination programs, more than 220 million cases and more than five million deaths have been confirmed worldwide, and the pandemic continues to wreak havoc (World Health Organization n.d.). Due to physical distancing and lockdown measures resulting from the pandemic, financial services designed around cash and in-person interactions to open accounts, determine creditworthiness or provide financial literacy shifted significantly to contactless and cashless transactions, deployment of government support measures and lending (Sahay et al. 2020). Fintech has evolved from spending to lending to fill existing gaps within traditional financial services (ibid.). The global demand for fintech services increased dramatically during the pandemic, particularly in response to the varying severity of lockdown restrictions enforced across regions. A major cross-sector study analysed 1385 fintech firms across 169 countries and found that services in markets with more stringent lockdown restrictions reported larger growth in volume and number of transactions (Cambridge Centre for Alternative Finance, World Bank and World Economic Forum 2020). Figure 4 illustrates that fintech firms situated in regions with the highest stringency measures reported 50% more volume and transactions (year-on-year, Q1 to Q2) than those in the lowest stringency group (ibid.). In many parts of the world, fintech has supported individuals and businesses through challenges caused by the pandemic. For instance, small- and medium-sized enterprises (SMEs) in Latin America that were in need of relief were able to access
Fig. 4 Transaction volumes and number of transactions under low, medium and high COVID-19 lockdown stringencies, all fintech verticals (% change, year-on-year Q1–Q2); low stringency n = 229, medium stringency n = 707, high stringency n = 397. (Source: Cambridge Centre for Alternative Finance, World Bank and World Economic Forum 2020)
Fig. 5 Implementation or delivery partner in COVID-19-related relief measures or schemes, all fintech verticals (% of respondents). (Source: Cambridge Centre for Alternative Finance, World Bank and World Economic Forum 2020. Note: "N/A" and "No, not interested" responses have been omitted)
government transfers through digital disbursements (Cantú and Ulloa 2020). Through the mobile app of a state-owned bank, the federal government in Brazil increased access to aid for unbanked and underbanked individuals. Similar occurrences took place in Peru and Argentina via municipalities, while in Mexico, fintech firms applied alternative credit rating technology to provide loans (approved within 24 hours) at a lower cost to SMEs. Figure 5 highlights areas where AI for sustainable finance played a role in supporting governments around the world with pandemic relief measures (Cambridge Centre for Alternative Finance, World Bank and World Economic Forum 2020). In Canada, the pandemic accelerated the digitalization of the economy and reignited debate about the future of cash and banking. Before the pandemic, the BoC
had piloted Project Jasper, one of the world's most comprehensive experiments with a crypto-based central bank digital currency (IMF 2019; FSB 2017). Less than a year after the onset of the pandemic, with growing hesitancy among consumers about using cash, BoC Deputy Governor Timothy Lane stated that "if we want to be ready to develop any kind of digital central bank product, we need to move faster than we thought was going to be necessary" (Gordon 2020, para. 4). For central banks in emerging and frontier markets, financial inclusion has been among the main reasons for exploring cryptocurrencies such as stablecoins (Bank for International Settlements 2020).

5.1.1 Related Governance Challenges

The FSB has indicated that fintech does not yet (by itself) pose significant risks (Restoy 2019; Sahay et al. 2020). From a macroeconomic perspective, provided appropriate regulations are in place, AI for sustainable finance may offer positive outcomes by enabling greater portions of the population to participate in formal economic activity. This was supported by the IMF, which suggested that AI for sustainable finance has the potential to enhance the efficacy of post-pandemic macroeconomic policies with respect to income creation and employment (Sahay et al. 2020). Notwithstanding these opportunities, it is not yet understood whether or how such opportunities could instead exacerbate pre-existing and/or new risks for those they intend to serve. Looking to prior examples, the rapid development of various fintech services has resulted in structural unintended consequences, including a spike in predatory lending practices and the financing of terrorism and corruption (Orol 2018). By 2020, such practices had already been observed in Indonesia, where the Financial Services Authority shut down more than 1000 unlicensed digital lenders that offered prohibited services and employed contentious debt collection approaches (Faux 2020; Sahay et al. 2020). These trends could intensify during the pandemic given that millions of people have faced sudden job loss and unemployment. To mitigate these risks, there is a need for cross-sector partnerships at both the domestic and international levels for policy development (Sahay et al. 2020). Stringent lockdown restrictions have also increased overreliance on AI for sustainable finance, which may lead to unintentional harms that foster exclusive inclusion. Due to the online-only nature of digital services, individuals without technological access or literacy may be discriminated against and excluded. Unequal access to digital infrastructure, potential biases in data analytics and modelling and lack of access to technology (e.g. smartphones, computers and the internet) could also lead to new forms of exclusion if there is a strong drive toward digital financial services during and after the pandemic (ibid.). Moreover, the pandemic could further restrict access for already marginalized groups such as women, the elderly, those with disabilities, non-status migrants and those living in remote communities (UN Secretary-General's High-level Panel on Digital Cooperation 2019). Additionally, those experiencing homelessness, trafficked individuals (whose finances may be
controlled or surveilled) and incarcerated individuals (who are forbidden to use electronic devices) would likely be excluded in a cashless society (Engert et al. 2018; Choi et al. 2021). The COVID-19 pandemic is the first "natural experiment", or real-world test of the resilience, of AI for sustainable finance. To evolve governance objectives, cross-sector partnerships and multilateral activities must continue to be explored, including regulatory sandboxes that expose potential risks and benefits. Seminal reports such as the Bali Fintech Agenda have offered frameworks for AI for sustainable finance in the past; however, there are still no internationally agreed regulatory standards. The pervasiveness of the pandemic may very well have led to the reallocation of resources and priorities related to these ambitions. The silver lining is that, as the global community endeavours to navigate out of this natural experiment, AI for sustainable finance has once again answered the call to serve people and planet in times of crisis.
6 Key Policy Considerations

While numerous journal papers, policy reports and grey literature have been published by scholars, governments, standard-setting and regulatory bodies and private sector firms, few have investigated the governance of AI for sustainable finance from both societal and institutional vantage points. Although this proposal aligns with and complements earlier important works, including the 2020 report The Promise of Fintech: Financial Inclusion in the Post COVID-19 Era (Sahay et al. 2020), this chapter narrows the focus and disentangles concepts by providing three key policy recommendations for AI for sustainable finance. Drawing on findings from the literature, it is recommended that policy makers consider the following: first, mitigate unintended social and environmental consequences; second, promote ESG disclosure; and third, strengthen cross-sector partnerships.
6.1 Mitigate Unintended Social and Environmental Consequences

It is necessary to call on national governments, the private sector, intergovernmental organizations and civil society to research, promote and implement AI for sustainable finance policies that respect social inclusion and environmental protection, using the SDGs as a framework. AI for sustainable finance must incorporate an inclusive, ethical and collaborative approach into its design, development and implementation. With the increased dependence on emerging technologies as a solution to development, both social and environmental implications must be considered.
First, inequitable social relations may appear between those who define, control and administer technology for development and the recipients of such solutions (Vinuesa et al. 2020). These inequalities may ultimately violate the SDGs, and, therefore, AI for sustainable finance initiatives should consider who is included and excluded, who benefits and why, and how the marginalized can be empowered (Gupta and Vegelin 2016). This entails an in-depth and critical understanding of the challenges faced by present generations and of how to address them without compromising the livelihoods of future generations (Bansal 2019). Inclusion, feedback and input of end users are necessary ingredients that ensure value, consideration, agency and dignity for unbanked individuals (Dupas et al. 2018). As social, environmental and technological needs and constraints evolve, encouraging feedback from relevant stakeholders is important to ensure that AI for sustainable finance initiatives continue to add value for users (ibid.). This input ensures that voices and changing circumstances are considered and that resources are effectively allocated to address them (Young 2011). Second, rapid innovation and greater access to technology have unintended consequences for the environment (World Economic Forum 2019). The increased energy demands of producing and powering digital technologies have significant environmental impacts in several ways, including increased resource mining, electricity usage, harmful by-products, fossil fuel consumption and electronic waste (ibid.). The World Economic Forum (ibid.) stated that electronic waste is the fastest growing waste stream globally, reaching 48 million tonnes and worth $62 billion. While much work remains to be done, large organizations ("big tech") have recently started building sustainability programs to reduce and offset these impacts (Rolnick et al. 2019). Technology giants such as Google have partnered with NGOs to shift toward circular economies by investing in restorative and regenerative data centres, products and supply chains (Google 2016). Google has been carbon neutral since 2007 and for several years has been matching its energy usage with 100% renewable energy purchases (ibid.). The company has also designed carbon-aware AI systems that shift heavy computing in its data centres to times when wind and solar power are plentiful, without creating additional demand for electricity. This is part of an ambitious effort to source carbon-free energy on a 24/7 basis (ibid.). Despite this progress, big tech companies such as Google also contribute to climate change. For instance, Google's AlphaGo Zero AI project generated 96 tonnes of CO2 over its 40 days of training, roughly the carbon footprint of 23 American homes (van Wynsberghe 2021). Not surprisingly, Amazon and Microsoft, despite promoting their sustainability efforts, also emit large amounts of CO2 to run their services (Strubell et al. 2019); a rough sketch of how such training emissions are estimated is given at the end of this subsection. Recipients or users of AI for sustainable finance should play a role in the design, development and implementation of such innovations. This would ensure that various perspectives are considered equitably, which may increase adoption and enhance livelihoods (Gupta and Vegelin 2016). Predicting the needs of future generations through sustainable development, therefore, is not opposed to generating business wealth but rather requires addressing two unique and interrelated criteria: wealth should meet
people’s basic needs and should be generated within the constraints of the Earth’s productive capacity (Bansal 2019).
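Returning to the energy footprint of AI training discussed above, the rough logic behind such estimates (in the spirit of Strubell et al. 2019, though the hardware power draw, PUE and grid carbon intensity below are illustrative assumptions, not figures reported by the authors) can be sketched in a few lines:

```python
# Hypothetical back-of-the-envelope estimate of CO2 from model training.
# All parameter values are assumed illustrative numbers.

def training_co2_kg(gpu_count: int, gpu_power_kw: float, hours: float,
                    pue: float, grid_kg_co2_per_kwh: float) -> float:
    """Accelerator energy use, scaled by data-centre overhead (PUE),
    multiplied by the carbon intensity of the electricity grid."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Example: 64 accelerators at 0.3 kW each, 40 days of training,
# a PUE of 1.1 and a grid emitting 0.4 kg CO2 per kWh.
estimate = training_co2_kg(gpu_count=64, gpu_power_kw=0.3, hours=40 * 24,
                           pue=1.1, grid_kg_co2_per_kwh=0.4)
print(f"Estimated training emissions: {estimate / 1000:.1f} tonnes CO2")
```

The same structure explains why the reported footprints vary so widely: the grid carbon intensity and data-centre efficiency terms can change the result by an order of magnitude for the same amount of computation.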
6.2 Promote ESG Disclosure

Stakeholders and markets are increasingly pressuring corporations, including financial services providers, to disclose details about their socio-ecological impacts via reporting (ElAlfy and Weber 2019). Amran et al. (2014) argue that reporting assists decision-makers, namely socially responsible investors, in processing environmental, economic and social data. Compared to 1999, there has been an increase in corporations (from 35% to 80% of the top 250 companies of the Global 500) producing reports, especially those that operate in "sensitive" industries (e.g. resource extraction) (ibid.). Reporting quality has faced, and continues to face, criticism concerning the accuracy and transparency of ESG data. This has resulted in greenwashing and organizational biases that prevent concerned stakeholders from making effective and informed investment decisions (Eccles and Stroehle 2018). For instance, organizational leaders can control how information is disseminated, withholding it to ultimately influence market performance (ibid.). AI for sustainable finance firms must be held to the same disclosure standards. Reports could include information about a firm's economic, environmental and social activities so that stakeholders can evaluate motivations, reputation and short- and long-term direction (ElAlfy and Weber 2019). ESG disclosure for AI for sustainable finance providers would also be a vital step forward to demonstrate transparency and effective governance as well as to enhance reputation and accountability. Such topics have recently entered mainstream discourse related to blockchain technology. Tesla CEO Elon Musk, a proponent of cryptocurrency, recently tweeted that Tesla would halt the use of bitcoin as a payment method due to the exorbitant energy consumption of mining. Mining bitcoin is energy-intensive and typically relies on electricity generated from coal. Musk tweeted: "Cryptocurrency is a good idea...but this cannot come at great cost to the environment" (BBC News 2019, para. 6). Soon after, bitcoin's price plunged by 10%, and at one point a week later it had dropped by 30% (down to $34,770) (Browne and Kharpal 2021). As a result, investors and public actors have come to know that mining bitcoin consumes more energy (121.36 terawatt-hours per year), and hence produces more CO2, than all of Argentina (121 terawatt-hours per year) (Criddle 2021). Musk later signalled that Tesla would consider accepting payment through other cryptocurrencies that were less energy-intensive (Peterseil and Hajric 2021). Subsequently, bitcoin prices surged again when Musk tweeted that he had met with the newly formed Bitcoin Mining Council, which aims to "promote energy usage transparency & accelerate sustainability initiatives worldwide" (Saylor 2021). The sustainability case for business in this regard has the potential to incentivize fintech firms and investors alike toward ESG practices.
6.3 Strengthen Cross-Sector Partnerships

As the 17 SDGs and 169 associated targets are interconnected, the fulfilment of the 2030 Agenda will require sectors (including incumbents, start-ups, regulators and policy makers) to work collectively on financial resources, the sharing of knowledge and technology and tackling issues in all countries, especially developing ones (UN Secretary-General's High-level Panel on Digital Cooperation 2019). To support this aim, the United Nations can serve as a convener to explore the role, configuration and implementation of strategies that apply to AI for sustainable finance initiatives. In both the institutional and societal cases described in previous sections, experts from NGOs, the private sector, academia and government must come together to address sustainable development. This should be done with community members and end users contributing to solutions that will affect their livelihoods (Erdiaw-Kwasie and Alam 2016). With this mindset, collaborations would allow each actor to identify and overcome existing gaps more effectively (ibid.). Global innovation systems have conventionally been created by single institutions in the private or public sectors but have fallen short of meeting global targets, especially those addressing issues related to poverty, climate change and associated vulnerabilities (Casillas and Kammen 2010; Eakin et al. 2014; Pinkse and Kolk 2012). Typically, technologies are not developed for markets that do not drive revenue, or, when developed, they do not consider the end user's needs, lowering agency, adoption and efficacy (Anadon et al. 2016). For instance, smaller fintech providers in sub-Saharan Africa are eager, yet hesitant, to partner with larger incumbents, as they often face power imbalances and fear that their businesses are at risk (Chetty et al. 2019). During the COVID-19 pandemic, the integration of government digital systems and AI for sustainable finance firms proved effective in providing policy support in the absence of physical human interaction. Therefore, to ensure digital financial inclusion, a fiscal response must work in parallel with digital infrastructure implementation as well as enhanced digital and financial literacy. Actors across sectors must strike a balance to ensure digital innovation can thrive while governance and regulatory mechanisms are in place as demand for AI for sustainable finance increases. This will help prevent risks to financial integrity as well as to consumers (cybersecurity, predatory lending practices and so forth). Further, policy makers can work toward international standards and agreements on data privacy, cybersecurity, digital identification and digital currencies (Sahay et al. 2020). AI for sustainable finance may present risks and contradictory, unintended or unexpected consequences. To effectively identify and manage the risks and opportunities related to AI for sustainable finance, there is a need for global dialogue and governance involving multiple stakeholders aligned with the SDGs. Partnerships (across and within sectors) and policies should be developed to share and bridge digital resources (data, knowledge, practices and tools), in addition to addressing topics through multiple lenses. This approach will aid in increasing standards consistency across institutions, digital equality and inclusion for underrepresented voices such
as women and traditionally marginalized groups, and the interoperability of data and access for end users (UN Secretary-General's High-level Panel on Digital Cooperation 2019). Serving as an impartial facilitator, bodies such as the United Nations can work with actors to develop AI for sustainable finance impact assessments and to ensure mechanisms that safeguard against data security and privacy issues (Hilbert 2017). Other major areas requiring coordination include the lack of harmonized standards and interoperability of technology, the fragmentation of payment systems, the lack of commonly accepted application programming interface standards and the development of open-source platforms and a common payments ecosystem (Bank for International Settlements 2020; Ehrentraud et al. 2020).
7 Conclusion

AI for sustainable finance is evolving rapidly. With its continued emergence, there will be both opportunities and risks related to sustainable development and financial stability that policy makers and regulators should consider. This chapter investigated the role and implications of AI in achieving the SDGs. To address current and future governance challenges, three key recommendations were provided to serve as a guidepost for AI for sustainable finance in both institutional and societal settings as well as through the COVID-19 pandemic. As with any innovation, AI can either provide opportunity or exacerbate social and environmental inequality, and responsibility falls on academics, policy makers, corporate actors, innovators and citizens to work toward solutions beneficial to the three pillars of sustainability.
Works Cited Aaron, Meyer, Francisco Rivadeneyra, and Samantha Sohal. 2017. Fintech: Is This Time Different? A Framework for Assessing Risks and Opportunities for Central Banks. Bank of Canada Staff Discussion Paper 2017–10. www.bankofcanada.ca/wp-content/uploads/2017/07/ sdp2017-10.pdf. Aganaba-Jeanty, Timiebi, Sam Anissimov, and Oonagh E. Fitzgerald. 2017. Blockchain ClimateCup Round Table. Conference Report, Centre for International Governance Innovation, November 6. www.cigionline.org/publications/blockchain-climatecup-round-table/. Alexander, Alex J., Shi Lin, and Bensam Solomon. 2017. How Fintech is Reaching the Poor in Africa and Asia : A Start-Up Perspective. EMCompass Note 34. International Finance Corporation, World Bank Group. Amran, Azlan, Shiau Ping Lee, and S. Susela Devi. 2014. The Influence of Governance Structure and Strategic Corporate Social Responsibility Toward Sustainability Reporting Quality. Business Strategy and the Environment 23 (4): 217–235. https://doi.org/10.1002/bse.1767. Anadon, Laura Diaz, Gabriel Chan, Alicia G. Harley, Kira Matus, Suerie Moon, Sharmila L. Murthy, and William C. Clark. 2016. Making Technological Innovation Work for Sustainable Development. Proceedings of the National Academy of Sciences of the United States of America 113 (35): 9682–9690. https://doi.org/10.1073/pnas.1525004113.
Arner, Douglas W., Ross P. Buckley, Dirk A. Zetzsche, and Robin Veidt. 2020. Sustainability, FinTech and Financial Inclusion. European Business Organization Law Review 21 (1): 7–35. https://doi.org/10.1007/s40804-020-00183-y. Arthur, W. Brian. 2009. The Nature of Technology: What It Is and How It Evolves. New York: Free Press. Bank for International Settlements. 2020. Payment Aspects of Financial Inclusion in the Fintech Era. www.bis.org/cpmi/publ/d191.pdf. Bansal, Pratima. 2019. Sustainable Development in an Age of Disruption. Academy of Management Discoveries 5 (1). https://doi.org/10.5465/amd.2019.0001. BBC News. 2019. Tesla Will No Longer Accept Bitcoin Over Climate Concerns, Says Musk. BBC News, May 13. www.bbc.com/news/business-57096305. Bhagat, Ali, and Leanne Roderick. 2020. Banking on Refugees: Racialized Expropriation in the Fintech Era. Environment and Planning A: Economy and Space 52 (8): 1498–1515. https://doi. org/10.1177/0308518X20904070. Bharadwaj, Prashant, and Tavneet Suri. 2020. Improving Financial Inclusion through Digital Savings and Credit. AEA Papers and Proceedings 110 (May): 584–588. https://doi.org/10.1257/ pandp.20201084. Blakstad, Sofie, and Robert Allen. 2018. FinTech Revolution: Universal Inclusion in the New Financial Ecosystem. Cham: Palgrave Macmillan. Browne, Ryan, and Arjun Kharpal. 2021. Bitcoin Plunges 30% to $30,000 at One Point in Wild Session, Recovers Somewhat to $38,000. CNBC, May 19. www.cnbc.com/2021/05/19/bitcoin- btc-price-plunges-but-bottom-could-be-near-.html. Cambridge Centre for Alternative Finance, World Bank and World Economic Forum. 2020. The Global Covid-19 FinTech Market Rapid Assessment Study. Cambridge, UK: University of Cambridge. www3.weforum.org/docs/WEF_The_Global_Covid19_FinTech_Market_Rapid_ Assessment_Study_2020.pdf. Canadian Bankers Association. 2018. Read the CBA’s remarks to the House of Commons Finance Committee on the first 2018 Federal Budget Implementation Act. Remarks to the House of Commons Standing Committee on Finance regarding Bill C-74 (Budget ImplementationAct, 2018, No. 1.), May 9. www.cba.ca/remarks-house-of-commons-2018-budget-implementation-act. Canadian Institute for Advanced Research. n.d. Pan-Canadian AI Strategy. https://cifar.ca/ai/. Cantú, Carlos, and Bárbara Ulloa. 2020. The dawn of Fintech in Latin America: Landscape, Prospects and Challenges. BIS Papers No. 112. www.bis.org/publ/bppdf/bispap112.pdf. Carmichael, Kevin. 2020. Will the Coronavirus Prompt Central Bankers to Rethink Their Approach to Digital Currencies? Opinion, Centre for International Governance Innovation, May 25. www.cigionline.org/articles/ will-coronavirus-prompt-central-bankers-rethink-their-approach-digital-currencies. Casillas, Christian E., and Daniel M. Kammen. 2010. The Energy-Poverty-Climate Nexus. Science 330 (6008): 1181–1182. https://doi.org/10.1126/science.1197412. Castilla-Rubio, Juan Carlos, Simon Zadek, and Nick Robins. 2016. Fintech and Sustainable Development: Assessing the Implications. United Nations Environment Programme. https://wedocs.unep.org/bitstream/handle/20.500.11822/20724/Fintech_and_Sustainable_ Development_Assessing_the_Implications_Summary.pdf. Chetty, Krish, Jaya Josie, Ephafarus Mashotola, Babalwa Siswana, Kim Kariuki, Shenglin Ben, Zheren Wang, Edward Brient, Wenwei Li, and Man Luo. 2019. Forming a G20 Fintech Association Forum to Broker International Partnerships Promoting Financial Inclusion in Developing and Emerging Economies. 
www.g20-insights.org/wp-content/uploads/2019/05/t20-japan-tf2-11-g20-fintech-association-forum-1.pdf. Cheung, Bernice. n.d. Consumer Adoption Of FinTech During The COVID-19 Pandemic. Environics Research. https://environics.ca/article/consumer-adoption-of-fintech-during-the-covid-19-pandemic/. Choi, Kyoung Jin, Ryan Henry, Alfred Lehar, and Joel Reardon. 2021. A Proposal for a Canadian CBDC. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3786426.
Clopath, Claudia, Ruben De Winne, Mohammad Emtiyaz Khan, and Tom Schaul. 2019. AI for the Social Good. Report from Dagstuhl Seminar 19082. https://drops.dagstuhl.de/opus/volltexte/2019/10862/pdf/dagrep_v009_i002_p111_19082.pdf. Cornish, Edward. 1982. Communications Tomorrow: The Coming of the Information Society. Bethesda: World Future Society. Criddle, Cristina. 2021. Bitcoin Consumes ‘More Electricity Than Argentina.’ BBC News, February 10. www.bbc.com/news/technology-56012952. D’Silva, Derryl, Zuzana Filková, Frank Packer, and Siddharth Tiwari. 2019. The Design of Digital Financial Infrastructure: Lessons From India. BIS Papers No. 106. www.bis.org/publ/bppdf/ bispap106.pdf. Demirgüç-Kunt, Asli, Leora Klapper, Dorothe Singer, Saniya Ansar, and Jake Hess. 2017. The Global Findex Database 2017: Measuring Financial Inclusion and the Fintech Revolution. Washington, DC: International Bank for Reconstruction and Development and World Bank. www.worldbank.org/globalfindex. Dupas, Pascaline, Dean Karlan, Jonathan Robinson, and Diego Ubfal. 2018. Banking the Unbanked? Evidence from Three Countries. American Economic Journal: Applied Economics 10 (2): 257–297. https://doi.org/10.1257/app.20160597. Eakin, H.C., M.C. Lemos, and D.R. Nelson. 2014. Differentiating Capacities as a Means to Sustainable Climate Change Adaptation. Global Environmental Change 27 (1): 1–8. https:// doi.org/10.1016/j.gloenvcha.2014.04.013. Eccles, Robert G., Ioannis Ioannou, and George Serafeim. 2014. The Impact of Corporate Sustainability on Organizational Processes and Performance. Management Science 60 (11): 2835–2857. https://doi.org/10.1287/mnsc.2014.1984. Eccles, Robert G., and Judith C. Stroehle. 2018. Exploring Social Origins in the Construction of ESG Measures. Working Paper. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3212685. Ehrentraud, Johannes, Denise Garcia Ocampo, Lorena Garzoni, and Mateo Piccolo. 2020. Policy Responses to Fintech: A Cross-Country Overview. FSI Insights on Policy Implementation No. 23. www.bis.org/fsi/publ/insights23.pdf. ElAlfy, Amr, and Olaf Weber. 2019. Corporate Sustainability Reporting: The Case of the Banking Industry. CIGI Papers No. 211. Waterloo: CIGI. www.cigionline.org/publications/ corporate-sustainability-reporting-case-banking-industry/. Engert, Walter, Ben S. C. Fung, and Scott Hendry. 2018. Is a Cashless Society Problematic? Bank of Canada Staff Discussion Paper 2018–12. www.bankofcanada.ca/wp-content/uploads/2018/10/ sdp2018-12.pdf. Erdiaw-Kwasie, Michael O., and Khorshed Alam. 2016. Towards Understanding Digital Divide in Rural Partnerships and Development: A Framework and Evidence From Rural Australia. Journal of Rural Studies 43 (February): 214–224. https://doi.org/10.1016/j.jrurstud.2015.12.002. Faux, Zeke. 2020. From Micro-Credit to Major Debt. Bloomberg Businessweek, February 17. www.magzter.com/stories/Business/Bloomberg-Businessweek/ FROM-MICRO-CREDIT-TO-MAJOR-DEBT. Fay, Robert. 2019. Digital Platforms Require a Global Governance Framework. Opinion, Centre for International Governance Innovation, October 28. www.cigionline.org/articles/ digital-platforms-require-global-governance-framework/. FSB. 2017. Financial Stability Implications from Fintech: Supervisory and Regulatory Issues that Merit Authorities’ Attention. www.fsb.org/wp-content/uploads/R270617.pdf. ———. 2021. FinTech. www.fsb.org/work-of-the-fsb/financial-innovation-and-structural-change/ fintech. Flammer, Caroline, and Ioannis Ioannou. 2020. 
Strategic Management During the Financial Crisis: How Firms Adjust Their Strategic Investments in Response to Credit Market Disruptions. Strategic Management Journal 42 (7): 1275–1298. Folger-Laronde, Zachary, Sep Pashang, Leah Feor and Amr ElAlfy. 2020a. ESG Ratings and Financial Performance of Exchange-Traded Funds During the COVID-19 Pandemic. Journal
of Sustainable Finance & Investment (June): 1–7. https://doi.org/10.1080/20430795.202 0.1782814. Gabor, Daniela, and Sally Brooks. 2017. The Digital Revolution in Financial Inclusion: International Development in the Fintech Era. New Political Economy 22 (4): 423–436. https:// doi.org/10.1080/13563467.2017.1259298. Geobey, Sean, Frances R. Westley, and Olaf Weber. 2012. Enabling Social Innovation through Developmental Social Finance. Journal of Social Entrepreneurship 3 (2): 151–165. https://doi. org/10.1080/19420676.2012.726006. GISR. 2018. Global Initiative for Sustainability Ratings. https://shift.tools/contributors/490/about. Global Legal Group. 2021. Canada: Fintech Laws and Regulations 2020. June. https://iclg.com/ practice-areas/fintech-laws-and-regulations/canada. Google. 2016. Machine Learning Finds New Ways for Our Data Centers to Save Energy. December. https://sustainability.google/projects/machine-learning/. Gordon, Julie. 2020. Pandemic accelerates need to consider digital currency: Bank of Canada. Reuters, October 14. www.reuters.com/article/us-canada-cenbank/ pandemic-accelerates-need-to-consider-digital-currency-bank-of-canada-idUKKBN26Z2R0. Greenvest and United Nations Environment Programme. 2017. Fintech, Green Finance and Developing Countries. http://unepinquiry.org/wp-content/uploads/2017/06/Fintech_Green_ Finance_and_Developing_Countries-input-paper.pdf. Griggs, David, Mark Stafford-Smith, Owen Gaffney, Johan Rockström, Marcus C. Öhman, Priya Shyamsundar, Will Steffen, Gisbert Glaser, Norichika Kanie, and Ian Noble. 2013. Sustainable Development Goals for People and Planet. Nature 495: 305–307. https://doi. org/10.1038/495305a. Gupta, Joyeeta, and Courtney Vegelin. 2016. Sustainable Development Goals and Inclusive Development. International Environmental Agreements: Politics, Law and Economics 16 (3): 433–448. https://doi.org/10.1007/s10784-016-9323-z. Harvard Kennedy School. n.d. Innovation and Access to Technologies for Sustainable Development. www.hks.harvard.edu/centers/mrcbg/programs/sustsci/activities/program-i nitiatives/ innovation/projects/innovation-and-access-to-technologies-for-sustainable-development. Hilbert, Martin. 2017. Digital Tools for Foresight. UNCTAD Research Paper No. 10. https://unctad.org/system/files/official-document/ser-rp-2017d10_en.pdf. IMF. 2019. Fintech: The Experience So Far. IMF Policy Paper. www.imf.org/en/Publications/ Policy-Papers/Issues/2019/06/27/Fintech-The-Experience-So-Far-47056. Initiative for Global Environmental Leadership. 2014. Sustainability in the Age of Big Data. Philadelphia: The Wharton School, University of Pennsylvania. http://d1c25a6gwz7q5e.cloudfront.net/reports/2014-09-12-Sustainability-in-the-Age-of-Big-Data.pdf. InterAcademy Council. 2004. Inventing a Better Future: A Strategy for Building Worldwide Capacities in Science and Technology. Amsterdam: InterAcademy Council. ITU. 2015. Millennium Development Goals (MDGs). www.itu.int/en/ITU-D/Statistics/Pages/intlcoop/mdg/default.aspx. ———. 2018. AI for Good Global Summit: Accelerating Progress Towards the SDGs. www.itu. int/en/ITU-T/AI/2018/Pages/default.aspx?source=jaai.de. Juma, Calestous, and Yee-Cheong Lee. 2005. Innovation: Applying Knowledge in Development. London: Earthscan Publishing. https://books.google.ca/books?id=TlojWAlQd44C&printsec=f rontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false. Kewell, Beth, Richard Adams, and Glenn Parry. 2017. Blockchain for Good? Strategic Change 26 (5): 429–437. https://doi.org/10.1002/jsc.2143. 
Khan, Mozzafar, George Serafeim, and Aaron Yoon. 2016. Corporate Sustainability: First Evidence on Materiality. The Accounting Review 91 (6): 1697–1724. https://doi.org/10.2308/accr-51383. Folger-Laronde, Zachary, Sep Pashang, Leah Feor, and Amr ElAlfy. 2020b. ESG Ratings and Financial Performance of Exchange-Traded Funds During the COVID-19 Pandemic. Journal of Sustainable Finance & Investment (June): 1–7. https://doi.org/10.1080/20430795.202 0.1782814.
Maaroof, Abbas. 2015. Big Data and the 2030 Agenda for Sustainable Development. United Nations Economic and Social Commission for Asia and the Pacific. www.unescap.org/sites/ default/files/1_BigData2030Agenda_stock-taking report_25.01.16.pdf. Macchiavello, Eugenia, and Michele Siri. 2020. Sustainable Finance and Fintech: Can Technology Contribute to Achieving Environmental Goals? A Preliminary Assessment of ‘Green FinTech.’ European Banking Institute Working Paper Series 2020 No. 71. https://papers.ssrn.com/sol3/ papers.cfm?abstract_id=3672989. Malinak, S., J. Du, and G. Bala. 2018. Performance Tests of Insight, ESG Momentum, and Volume Signals. https://truvaluelabs.com/wp-content/uploads/2018/05/WP_PerfTest_R1k.pdf. Marsden, Janet H., and Valerie A. Wilkinson. 2018. Big Data Analytics and Corporate Social Responsibility: Making Sustainability Science Part of the Bottom Line. 2018 IEEE International Professional Communication Conference. https://ieeexplore.ieee.org/abstract/ document/8476826/. Mastercard. 2020. Mastercard Start Path: Fintech for Good. www.mastercard.com/news/media/ bz5nmfg4/mastercard_start_path__pitchbook_fintech_for_good_report.pdf. Melody, William, and Robin Mansell. 1986. Information and Communication Technologies: Social Science Research and Training. London: Economic and Social Research Council. Monteleoni, Claire, Gavin A. Schmidt, and Scott McQuade. 2013. Climate Informatics: Accelerating Discovering in Climate Science with Machine Learning. Computing in Science & Engineering 15 (5): 32–40. https://doi.org/10.1109/MCSE.2013.50. Nelson, Richard R., ed. 1993. National Innovation Systems: A Comparative Analysis. New York: Oxford University Press. https://books.google.ca/books?hl=en&lr=&id=C3Q8DwAAQBAJ& oi=fnd&pg=PR7&dq=nelson+1993&ots=diJ0lNyEkD&sig=0R0Jqyb7lPncrdYfmjptFEb3Pns #v=onepage&q&f=false. Nooteboom, Bart. 1992. Information Technology, Transaction Costs and the Decision to ‘Make Or Buy. Technology Analysis & Strategic Management 4 (4): 339–350. https://doi. org/10.1080/09537329208524105. Orol, Ronald. 2018. IMF Meetings to Tackle Fintech, Trade and Infrastructure. Opinion, Centre for International Governance Innovation, October 10. www.cigionline.org/articles/ imf-meetings-tackle-fintech-trade-and-infrastructure. Pashang. 2020. It Is Not Just A Pandemic: How The COVID-19 Mega-Crisis Affects Grief. Journal of Concurrent Disorders 2 (3): 40–54. Peterseil, Yakob, and Vildana Hajric. 2021. Bitcoin Dips Below $50,000 as Musk Calls Energy Usage ‘Insane.’ Al Jazeera, May 13. www.aljazeera.com/economy/2021/5/13/ bitcoin-dips-below-50000-as-musk-calls-energy-usage-insane. Pinkse, Jonatan, and Ans Kolk. 2012. Addressing the Climate Change — Sustainable Development Nexus: The Role of Multistakeholder Partnerships. Business & Society 51 (1): 176–210. https:// doi.org/10.1177/0007650311427426. Restoy, Fernando. 2019. Regulating Fintech: What Is Going On, and Where Are the Challenges? Speech delivered at the ASBA-BID-FELABAN XVI Banking public-private sector regional policy dialogue “Challenges and opportunities in the new financial ecosystem,” Washington, DC, October 16. www.bis.org/speeches/sp191017a.pdf. Rolnick, David, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman- Brown, Alexandra Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. 
Platt, Felix Creutzig, Jennifer Chayes, and Yoshua Bengio. 2019. Tackling Climate Change with Machine Learning. arXiv: 1906.05433. https://arxiv.org/pdf/1906.05433.pdf. Sahay, Ratna, Ulric Eriksson von Allmen, Amina Lahreche, Purva Khera, Sumiko Ogawa, Majid Bazarbash, and Kim Beaton. 2020. The Promise of Fintech: Financial Inclusion in the Post COVID-19 Era. Departmental Paper No. 20/09. Saylor, Michael. 2021. “Yesterday I was pleased to host a meeting between @elonmusk & the leading Bitcoin miners in North America. The miners have agreed to form the Bitcoin
Mining Council to promote energy usage transparency & accelerate sustainability initiatives worldwide” (Twitter thread). Twitter, May 24, 3:47 p.m. https://twitter.com/michael_saylor/ status/1396915801492439044. Serafeim, George. 2020. Public Sentiment and the Price of Corporate Sustainability. Financial Analysts Journal 76 (2): 26–46. https://doi.org/10.1080/0015198X.2020.1723390. Strubell, Emma, Ananya Ganesh and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 28–August 2: 3645–50. https://aclanthology. org/P19-1355.pdf. Sylvester, Gerard, ed. 2019. E-Agriculture in Action: Blockchain for Agriculture: Opportunities and Challenges. Bangkok: Food and Agriculture Organization and ITU. www.fao.org/3/ ca2906en/ca2906en.pdf. Taddeo, Mariarosaria, and Luciano Floridi. 2018. How AI Can Be a Force For Good. Science 361 (6404): 751–752. https://doi.org/10.1126/science.aat5991. Temple, Christel N. 2010. The Emergence of Sankofa Practice in the United States: A Modern History. Journal of Black Studies 41 (1): 127–150. https://doi.org/10.1177/0021934709332464. Townsend, Blaine. 2020. From SRI to ESG: The Origins of Socially Responsible and Sustainable Investing. The Journal of Impact and ESG Investing 1 (1): 10–25. https://doi.org/10.3905/ jesg.2020.1.1.010. Truvalue Labs. 2020. Our Company. www.factset.com/about-our-company. United Nations. 2019a. Climate Change. https://sustainabledevelopment.un.org/topics/ climatechange. ———. 2019b. The Sustainable Development Goals Report 2019. New York: United Nations. www.un-ilibrary.org/content/books/9789210478878/read. United Nations Development Programme. 2018. Blockchain Links Serbian Diaspora and Their Families Back Home. Serbia (blog), July 3. www.rs.undp.org/content/serbia/en/home/ blog/2018/blockchain-links-serbian-diaspora-and-their-families-back-home.html. ———. 2019. Six Signature Solutions. www.undp.org/content/undp/en/home/six-signature- solutions.html. UN Global Pulse and GSMA. 2017. The State of Mobile Data for Social Good Report. June. www. unglobalpulse.org/wp-content/uploads/2017/06/Mobile_Data_for_Social_Good_Report.pdf. UN Secretary-General’s High-level Panel on Digital Cooperation. 2019. The Age of Digital Interdependence: Report of the UN Secretary-General’s High-level Panel on Digital Cooperation. https://digitallibrary.un.org/record/3865925?ln=en. van Wynsberghe, Aimee. 2021. Sustainable AI: AI for Sustainability and the Sustainability of AI. AI and Ethics 1 (February): 3. https://doi.org/10.1007/s43681-021-00043-6. Vinuesa, Ricardo, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature Communications 11 (233): 1–10. https://doi.org/10.1038/s41467-019-14108-y. Weber, Olaf. 2018. The Financial Sector and the SDGs: Interconnections and Future Directions. CIGI Paper No. 201. www.cigionline.org/publications/ financial-sector-and-sdgs-interconnections-and-future-directions/. Weber, Olaf, and Blair Feltmate. 2016. Sustainable Banking and Finance: Managing the Social and Environmental Impact of Financial Institutions. Toronto: University of Toronto Press. World Bank. 2019. Record High Remittances Sent Globally in 2018. Press release,April 8. 
www.worldbank.org/en/news/press-release/2019/04/08/record-high-remittances-sent-globally-in-2018. World Bank Group. 2020. How Regulators Respond to Fintech: Evaluating the Different Approaches — Sandboxes and Beyond, Fintech Note No. 5. Washington, DC: International Bank for Reconstruction and Development and World Bank Group. https://documents.worldbank.org/curated/en/579101587660589857/pdf/How-Regulators-Respond-To-FinTech-Evaluating-the-Different-Approaches-Sandboxes-and-Beyond.pdf.
World Bank Group and IMF. 2018. The Bali Fintech Agenda—Chapeau Paper. https://documents1.worldbank.org/curated/en/390701539097118625/pdf/130563-BR-PUBLIC-on-10-11-18-2-30-AM-BFA-2018-Sep-Bali-Fintech-Agenda-Board-Paper.pdf. World Economic Forum. 2019. A New Circular Vision for Electronics: Time for a Global Reboot. January. Geneva: World Economic Forum. www3.weforum.org/docs/WEF_A_New_Circular_Vision_for_Electronics.pdf. World Health Organization. n.d. WHO Coronavirus (COVID-19) Dashboard. Accessed 30 May 2021. https://covid19.who.int/. Young, H. Peyton. 2011. The Dynamics of Social Innovation. Proceedings of the National Academy of Sciences of the United States of America 108 (4): 21285–21291. www.pnas.org/content/pnas/108/Supplement_4/21285.full.pdf. Zest AI. 2020. Model Management System. www.zest.ai/product.
Big Tech Corporations and AI: A Social License to Operate and Multi-Stakeholder Partnerships in the Digital Age

Marianna Capasso and Steven Umbrello
Abstract The pervasiveness of AI-empowered technologies across multiple sectors has led to drastic changes concerning traditional social practices and how we relate to one another. Moreover, market-driven Big Tech corporations are now entering public domains, and concerns have been raised that they may even influence public agenda and research. Therefore, this chapter focusses on assessing and evaluating what kind of business model is desirable to incentivise the AI for Social Good (AI4SG) factors. In particular, the chapter explores the implications of this discourse for SDG #17 (global partnership) and how this goal may encourage Big Tech corporations to strengthen multi-stakeholder partnerships that promote effective public-private and civil society partnerships and the meaningful co-presence of non-market and market values. In doing so, the chapter proposes an analysis of the sociological notion of ‘social license to operate’ (SLO) elaborated in the mining and extractive industry literature and introduces it into the discourse on sustainable digital business models and responsible management of risks in the digital age. This serves to explore how such a social license can be adopted as a practice by digital business models to foster trust, collaboration and coordination among different actors – including AI researchers and initiatives, institutions and civil society at large – for the support of SDGs interrelated targets and goals. Keywords Big Tech corporations · AI4SDG · Social license · Public-private partnerships · Sustainability
M. Capasso (*) Scuola Superiore Sant’Anna, Pisa, Italy e-mail: [email protected] S. Umbrello Delft University of Technology, Delft, the Netherlands e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_13
1 Introduction

Artificial intelligence (AI) systems have entrenched, and continue to entrench, themselves in the ever more complex sociotechnical infrastructures that characterise our modern digital world. These systems drive many of our everyday tools like vehicles, smartphones, entertainment systems, financial instruments, education practices, retail and healthcare. However, the often opaque, complex nature of the techniques underlying these systems makes their behaviours challenging to track and trace and, thus, hard to predict. With this uncertainty come new and challenging ethical issues that we must confront head-on, given the ubiquity, pervasiveness and impact that these systems have and will have on our lives and societies. We already see the consequences of many of these seemingly common, albeit impactful, AI-driven technologies on how we relate to each other and on our traditional social practices. Much of this stems, aside from the difficulty of managing the challenges of the underlying AI technologies themselves, from the fact that such AI techniques are often not constrained to a single domain of application but instead come in the form of commercially available (and thus easily accessible) household technologies. Technologies like Amazon Alexa can be, and are, easily upskilled to include novel capabilities and services not native to the device. Consequently, the Big Tech corporations behind this AI upskilling of more basic systems become entangled with public domains such as public healthcare services and many others. This enmeshment of private corporate bodies with traditional public domains is cause for concern, given the undue influence that these economic giants can have not only on public research and agendas but also on the everyday interactions that private citizens have with those public spheres. In response to this challenge, this chapter focuses on assessing and evaluating what kind of business model is desirable to incentivise the AI for Social Good (AI4SG) factors in order to better manage this merging of domains. The AI4SG factors proposed by Floridi et al. (2020) provide a robust normative basis for how designers should approach the design and deployment of AI systems towards supporting social good. Likewise, there is a growing body of research on how these AI4SG norms can be used to support higher-order values like the United Nations Sustainable Development Goals (UN SDGs). In particular, the chapter explores the implications of this discourse for SDG #17 (global partnership) and how this goal may encourage Big Tech corporations to strengthen multi-stakeholder partnerships that promote effective public-private and civil society partnerships and the meaningful co-presence of non-market and market values. To do this, the chapter proposes an analysis of the 'social license to operate', a notion that originated in the extractive and mining industry, and introduces it into the discourse on sustainable digital business models and responsible management of risks in the digital age. Adopting these frameworks serves to explore how such a social license can be adopted as a practice by digital business models to foster trust, collaboration and coordination among different actors, including AI researchers and initiatives, institutions and civil society at large, to support the SDGs' interrelated targets and goals.
2 UN SDGs Framework and Its Link with AI Challenges and Impacts

2.1 The When and Why of the UN SDGs

In 2015, the United Nations and all member states adopted the 2030 agenda for sustainable development. This 2030 agenda proposed objectives to design and implement a safe and sustainable future worldwide (United Nations 2015). At its foundation are 17 Sustainable Development Goals (SDGs). The adopted proposal recognises that the SDGs co-constitute and co-vary with one another. As a result, despite their numerical designations, they are not mutually exclusive of one another, rank-ordered or framed as trade-offs. For example, SDGs such as the ending of poverty (SDG #1) and climate change remediation (SDG #13) go hand in hand (Schwan 2019). Alongside ending poverty and climate action, there are goals such as 'affordable and clean energy' (SDG #7), 'industry, innovation and infrastructure' (SDG #9) and 'sustainable cities and communities' (SDG #11), just to name a few (Fig. 1). This means that to achieve the stated goals of the 2030 proposal, an integrated and comprehensive understanding of the goals is necessary. Reading the goals, then, as being separate or as rank-ordered is not the correct approach. Instead, they are best read as being mutually co-constitutive of one another. Furthermore, a more general understanding of global systems thinking and the complexity sciences is critical to understanding the various effects of different artefacts and subsystems within a more extensive interactive network, rather than in the isolation of discrete entities (Ballew et al. 2019; Briscoe 2015; van de Poel 2020). The resulting complexity of
Fig. 1 United Nations Sustainable Development Goals. (Source: Schwan 2019)
the covariance and interaction of entities, whether they are humans, rainforests, institutions or technologies, means that equal if not greater interdisciplinarity from numerous fields is required to comprehend and anticipate the effects of different nodes within a more extensive sociotechnical system (Murphy et al. 2015). These systemic effects did not go unnoticed by the General Assembly. As a result, the UN established the Technology Facilitation Mechanism (TFM) to promote innovative solutions for the SDG agenda, viz. multi-stakeholder collaboration (United Nations 2015). The TFM council meets before every high-level UN meeting on the SDGs to discuss innovative solutions to achieve those goals. Thus, the UN has an institutional orientation towards technology as both the problem and potential solution to global issues. That the UN explicitly adopted an interactive stance towards understanding the impacts of technology is significant. It means that instead of viewing technology as purely deterministic or instrumental, the UN affirms the interactional nature of technology and social factors at an institutional level, permitting a landscape of comprehensive expertise to address these problems en masse, rather than haphazardly. Therefore, we can understand the SDGs as partially emerging due to technological development and as pointing to potential avenues for amelioration in addressing them. This, of course, does not necessarily entail that every problem requires a high-tech solution (nor that such a solution exists) but that institutional or even conceptual solutions exist to high-tech problems. For example, algorithmic trading agents make rapid stock market trades relatively easy given the efficiency of trading speeds and data analytics to increase the probability that profitable trades are made. However, the economic impacts of such AI systems can be potentially egregious given their relative inaccessibility to all but those organisations that can afford the expensive algorithms. This can easily lead to an excessively unfair marketplace. The solution to such a problem need not be high-tech but can come about through equitable regulations in institutions limiting the times and quantities of trades to promote a fairer marketspace for smaller organisations (a simple throttling rule of this kind is sketched below). Analysing these complex problems by tackling their interdependencies makes for more robust and more productive solutions. Thus, artificial intelligence, being part of a larger milieu of ICTs and disruptive technologies, can be understood as a way of realising the goals of the SDGs in a similarly holistic way, leveraging the power of big data analytics and machine learning technologies, all framed within a design perspective to direct its development towards socially beneficial ends in the service of SDG attainment and human rights. A salient example would be using AI systems to develop Operator 4.0 technologies used in intelligent production manufacturing domains. Such systems support operators by extending their cognitive, sensorial, physical and interactional capacities to increase production efficiency and to help diagnose and direct technological development towards beneficial ends (Gazzaneo et al. 2020; Longo et al. 2017; Vernim et al. 2022). Doing so not only increases productivity and thus the potential availability/accessibility of goods such as energy production devices and medical instruments but also provides a safer working environment for operators.
The more extensive network of indirect stakeholders is similarly implicated, such as the geopolitical entities that host such production firms and the general public that depends on such technologies. Multiple SDGs are thus involved, such as 'affordable and clean energy' (#7) and 'industry, innovation and infrastructure' (#9).
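To make the earlier algorithmic-trading illustration concrete, a regulatory limit on the 'times and quantities of trades' could, in the simplest case, be enforced as a per-participant throttle at the point where orders enter the market. The following Python sketch is purely illustrative: the caps, window length and function are invented assumptions, not a description of any actual market rule.

```python
# Hypothetical sketch: a per-participant throttle on order frequency and volume,
# illustrating how an institutional rule (rather than a high-tech fix) can
# level access between large and small market participants.
from collections import defaultdict, deque
import time

MAX_ORDERS_PER_MINUTE = 60      # assumed cap on order frequency
MAX_SHARES_PER_MINUTE = 10_000  # assumed cap on traded quantity

_order_log = defaultdict(deque)  # participant_id -> deque of (timestamp, shares)

def accept_order(participant_id, shares, now=None):
    """Return True only if the order respects the per-minute frequency and volume caps."""
    now = time.time() if now is None else now
    log = _order_log[participant_id]
    # Discard entries older than the 60-second window.
    while log and now - log[0][0] > 60:
        log.popleft()
    if len(log) >= MAX_ORDERS_PER_MINUTE:
        return False
    if sum(q for _, q in log) + shares > MAX_SHARES_PER_MINUTE:
        return False
    log.append((now, shares))
    return True

# Example: the 61st order within a minute from the same participant is rejected.
for i in range(61):
    ok = accept_order("firm_A", shares=10, now=1_000.0 + i * 0.5)
print(ok)  # False once the per-minute order cap is reached
```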
These goals similarly inspire the development of new technologies. For example, goal #5 of the UN's agenda aims at gender equality and reducing global physical and sexual violence against women and girls. Towards this end, the peace advocacy group Amnesty International developed and launched the 'Panic Button' app in 2014, permitting users to leverage their networks to report attacks, kidnappings or torture (Amnesty International 2014). The panic button on their phone gives individuals who may face such dangers a powerful way of signalling abuse, exemplifying technology's ability to be designed to 'fight' for human rights and gender equality. Another salient example of how the issues driving the SDGs inspire novel technology is AI in agriculture. Crop disease has been a leading source of global hunger (goal #2) and poverty (goal #1) (Quinn et al. 2011). Given the continual increase in the need for sustainable food production, accessible AI solutions to aid individual farmers, particularly in developing countries, are required to assist in managing factors such as predictions of crop yield (You et al. 2017), growing conditions (Kersting et al. 2012), price forecasting (Ma et al. 2019) and crop choice recommendation (Von Lücken and Brunelli 2008), among others. To this end, the Artificial Intelligence & Data Science Lab at Makerere University in Uganda developed and released the mCrops diagnostic tools for detecting viral diseases in cassava, one of the country's most important staple food crops and one that is highly susceptible to viral disease (Quinn et al. 2011). This section aimed to outline the UN SDGs and their covariance with technologies, that is, how technologies can be understood as both causes of the problems the SDGs address and potential solutions to them. Similarly, how the SDGs inspire new technologies was briefly explored through some examples. The following section outlines the seven AI4SG factors.
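Diagnostic tools of the mCrops kind typically rest on image classification. The following is a minimal, hypothetical transfer-learning sketch in Python (PyTorch); it illustrates the general technique only, not the actual mCrops implementation, and the dataset path, folder layout and class names are assumptions.

```python
# Hypothetical sketch: fine-tuning a pretrained image classifier to label
# photos of cassava leaves as healthy or showing signs of viral disease.
# Dataset layout and class names are assumed for illustration only.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed folder layout: cassava_images/train/<class_name>/<image>.jpg
train_set = datasets.ImageFolder("cassava_images/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the final layer with one
# sized to the number of classes (e.g. healthy, mosaic disease, brown streak).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a short illustrative training loop
    for images, labels in loader:
        optimiser.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimiser.step()
```

Transfer learning of this sort matters in the agricultural setting precisely because labelled field data are scarce: a backbone pretrained on generic images can be adapted with comparatively few cassava photographs.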
2.2 AI for Social Good

In response to the continually growing number of guidelines, frameworks and lists of principles and practices for socially beneficial AI systems, Floridi et al. developed a set of seven distilled norms to guide designers towards best practices for designing AI for Social Good (AI4SG) (see Table 1). Similarly, given the number of definitions of AI, many of which describe systems that are not strictly AI, we adopt the definition given in the proposed Artificial Intelligence Act, since it offers a single, future-proof definition of AI:

'Artificial intelligence system' (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with (European Commission 2021).1
1 AIA 2021, 39; cf. Annex I on Artificial Intelligence Techniques and Approaches: (a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) statistical approaches, Bayesian estimation, search and optimisation methods; see European Commission 2021.
Table 1 AI for social good factors and norms

1. Falsifiability and incremental deployment: AI4SG designers should identify falsifiable requirements and test them in incremental steps from the lab to the 'outside world' (Floridi et al. 2020, p. 7).
2. Safeguards against the manipulation of predictors: AI4SG designers should adopt safeguards that (i) ensure that non-causal indicators do not inappropriately skew interventions and (ii) limit, when appropriate, knowledge of how inputs affect outputs from AI4SG systems to prevent manipulation (Floridi et al. 2020, p. 8).
3. Receiver-contextualised intervention: AI4SG designers should build decision-making systems in consultation with users interacting with and impacted by these systems; with understanding of users' characteristics, of the methods of coordination and of the purposes and effects of an intervention; and with respect for users' right to ignore or modify interventions (Floridi et al. 2020, p. 9).
4. Receiver-contextualised explanation and transparent purposes: AI4SG designers should choose a level of abstraction for AI explanation that fulfils the desired explanatory purpose and is appropriate to the system and the receivers; then deploy arguments that are rationally and suitably persuasive for the receivers to deliver the explanation; and ensure that the goal (the system's purpose) for which an AI4SG system is developed and deployed is knowable to receivers of its outputs by default (Floridi et al. 2020, p. 14).
5. Privacy protection and data subject consent: AI4SG designers should respect the threshold of consent established for the processing of datasets of personal data (Floridi et al. 2020, p. 16).
6. Situational fairness: AI4SG designers should remove from relevant datasets variables and proxies that are irrelevant to an outcome, except when their inclusion supports inclusivity, safety or other ethical imperatives (Floridi et al. 2020, p. 18).
7. Human-friendly semanticisation: AI4SG designers should not hinder the ability for people to semanticise (i.e. to give meaning to and make sense of) something (Floridi et al. 2020, p. 19).

Reproduced from Capasso and Umbrello (2021)
Recently, some scholars have used the term AI4SG to describe work on AI aimed at the SDGs and to evaluate AI impacts in terms of their direct and indirect implications for the seventeen SDGs (Tomašev et al. 2020; Vinuesa et al. 2020; Sætra 2021a, b; Umbrello and van de Poel 2021). However, given the global impacts that AI systems can have across multiple domains and their ubiquity and pervasiveness in our sociotechnical infrastructures, it makes sense to ask how AI can be designed to support higher-order values like the SDGs and not only the values often implicated by AI, like explicability, privacy and human autonomy (Fig. 2). The AI for Good Foundation is an excellent example of a non-profit entity collaborating with academic, institutional and governmental bodies to promote AI not only as something to be designed for the social good but also as a tool that can be used to support the social good in the form of the SDGs. This is also echoed in the work of Umbrello and van de Poel (2021). They argue that a value sensitive design (VSD) approach to technology design can be modified sufficiently to address the unique challenges posed by AI systems. As a result, salient AI
design can draw on the UN's SDGs as a guide for determining values to design for (i.e. doing good/beneficial outcomes) as well as for avoiding harm, using the AI4SG norms. An example of how to visualise this can be seen in Fig. 2. Naturally, however, the motivations for design differ across projects. As a result, there is no single normative starting point with which designers must begin. The UN's interactional stance maps neatly onto existing design methodologies like value sensitive design, given that VSD is also an approach predicated on the interactional stance. From this point, then, technology design can begin with the discrete technology itself as a starting point, with the context of use or with a specific value. For the sake of explaining how the approach functions, we begin from the left side of the figure – i.e. 'Doing Good' – to illustrate. Engineers can start by determining and explicitly stating which of the SDGs they aim to contribute to, given the type of AI system they are currently engaged to design. In doing so, different SDG resolutions or ameliorations might call for different AI solutions that are more aptly suited than others. Identifying which solutions might be most efficacious in addressing the SDGs can then help determine a core set of values such as transparency, explicability or data privacy (i.e. the centre of the figure). Various contextual variables come into play that impact the way values are understood, both in conceptual terms and in practice, on account of different sociocultural and political norms.
Fig. 2 Doing good and avoiding harm with AI4SG norms
Eliciting the values of stakeholders in their sociocultural contexts becomes imperative within the approach (i.e. working within the bounds to support SDG #17) to determine whether the a priori explicated values of the project faithfully map onto those of the stakeholders, both direct and indirect. In engaging with the context-situated nuances of how various values come into play with any given system, various pitfalls and constraints can begin to be envisioned, particularly how the initial core values can be understood in terms of technical design requirements. These values can then be used to distil specific technical design requirements by appealing to normative imperatives (in the case of AI, the AI4SG principles).

In sum, AI has already manifested pervasive impacts on a global level. To meet these challenges, the AI4SG norms were developed as a distilled set of design principles to help achieve salient AI design. Still, it makes sense to ask how the AI4SG principles relate to higher-order goals like the SDGs. This section aimed to discuss what the SDGs are and how they can be supported in tandem with and by the AI4SG norms. Still, this orientation remains relatively novel in terms of its applicability. Given the impacts of AI systems, what is required is greater uptake of an explicit orientation towards using the AI4SG principles to support and further the SDGs. The following sections discuss how to move towards sustainable business models as well as the concept of, and necessity for, a 'social license to operate' concerning AI systems, in particular its application to Big Tech corporations, arguably the source of the most impactful and pervasive forms of AI, which have a global reach.
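Before moving to business models, the progression just described (from SDGs, to core values, to AI4SG norms, to technical design requirements) can be made more tangible with a small sketch. The Python fragment below represents the mapping as a simple data structure and a lookup function; the particular SDG, values and requirements listed are hypothetical placeholders, not prescriptions drawn from Floridi et al. or the VSD literature.

```python
# Illustrative sketch of the SDG -> values -> AI4SG norms -> requirements
# mapping described above. All concrete entries are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class DesignValue:
    name: str                # e.g. "explicability"
    ai4sg_norms: list[str]   # which of the seven norms operationalise it
    requirements: list[str] = field(default_factory=list)  # technical design requirements

@dataclass
class SDGTarget:
    goal: str                # e.g. "SDG 7: affordable and clean energy"
    values: list[DesignValue]

# Hypothetical example for an AI-assisted energy-forecasting project.
project_map = SDGTarget(
    goal="SDG 7: affordable and clean energy",
    values=[
        DesignValue(
            name="explicability",
            ai4sg_norms=["4. Receiver-contextualised explanation and transparent purposes"],
            requirements=["expose forecast rationale at a level of abstraction grid operators understand"],
        ),
        DesignValue(
            name="data privacy",
            ai4sg_norms=["5. Privacy protection and data subject consent"],
            requirements=["aggregate household consumption data before model training"],
        ),
    ],
)

def requirements_for(target: SDGTarget) -> list[str]:
    """Flatten the value hierarchy into a checklist of design requirements."""
    return [req for value in target.values for req in value.requirements]

print(requirements_for(project_map))
```

Stakeholder elicitation would then revise both the values and the requirements, which is precisely the iterative loop the VSD approach prescribes.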
3 Towards Sustainable Digital Business Models: Some Reflections on the Co-presence of Different Spheres and Values

The pervasiveness of AI-empowered technologies across multiple sectors has led to drastic changes in traditional social practices and in how we relate to one another. These technologies are often not constrained to, or exclusive to, any given domain of application. Instead, they are commercially available and ubiquitous systems, often upgraded by providers – typically Big Tech giants – to assimilate new functionalities and practices. 'Big Tech corporations' refers to the four or five largest companies dominant in the information technology sector, including Google, Amazon, Facebook, Apple and Microsoft. These corporations are now entering public spheres such as healthcare. For example, Amazon announced a new partnership with the UK's National Health Service (NHS) that enabled Amazon's digital voice assistant Alexa to offer NHS health advice to users at home (Department of Health and Social Care 2019). As a result, these Big Tech giants are becoming ever more entangled and diffused within the public sphere. This has been exacerbated by the pandemic and subsequent lockdowns, which made private individuals more dependent on home technologies that can provide such health services during a public health crisis (Vargo et al. 2020).
Technology ethicists have raised growing concerns about the predominant impact of private and market-driven corporations in shaping public agendas and research (Sharon 2016, 2021). However, this trend is not new: a worrisome information and power asymmetry related to the introduction of AI systems and Big Data was already outlined in the Black Box metaphor by Frank Pasquale, who argued that the politico-economic advantages of 'informational exclusivity' enjoyed by private corporations could reinforce inequalities and a lack of responsibility and accountability in society as a whole (Pasquale 2015, 193). In contrast to traditional business models that sell goods and services, Big Tech corporations now have access to large data sets and vast resources, and this makes them critical market makers: entities that do not just provide services but an entire infrastructure (Srnicek 2016; Zysman and Kenney 2018). Indeed, such corporations exercise control over essential services on which many different actors and the whole economic ecosystem depend (Rahman 2018; Rahman and Thelen 2019). Moreover, scholars have argued that, in this way, Big Tech corporations may have not only substantial economic and market power but also a political 'platform power' that stems directly from their consumers and users, who appreciate and rely on those corporations and tend to oppose governmental regulations that threaten such corporations' convenience and innovation (Culpepper and Thelen 2019). Thus, to sharpen our understanding of Big Tech corporations' power and of new emerging technologies, we need a framework that allows us to explore the role of direct and indirect stakeholders in relation to corporations and government, as well as means and modalities to integrate private power and public governance into a policy discussion.

The influence of digital business powers on public opinion and public domains may have substantial political and social implications. Therefore, it is vital to open a serious discussion on what kind of business model(s) is desirable to incentivise the AI for Social Good (AI4SG) factors in the digital world. The UN's SDG framework can provide a valuable basis for assessing the impacts of AI, understood not as a neutral tool but as part of a more extensive sociotechnical system: an entanglement of technical, social and institutional dimensions, where economic and political interests are also at stake (Sætra 2021b). Politics should not be eliminated from the three dimensions of sustainability – economic, social and environmental (UN 2015) – but should innervate them from within. As already noted, several recent studies have hinted at the potential implications of developing and using AI for social good. For example, within the debate on SDGs concerning the economy, scholars have claimed that AI can significantly impact SDGs #8 (decent work and economic growth), #9 (industry, innovation and infrastructure) and #10 (reduced inequalities) (Vinuesa et al. 2020). However, other approaches focus instead on business models and the role of AI from the perspective of SDG #12 (responsible consumption and production) (e.g. Di Vaio et al. 2020), looking at how AI may integrate social and environmental needs into current and future trends of sustainable business models. Thus, there is extensive literature that assesses and evaluates the new role of work and industry due to the introduction of AI. Still, little has been said about AI's
possible long-term positive effects on the economy and its potential as an enabler of social and economic SDG targets and indicators, especially those concerning collaborations between different actors, including business models and non-market-driven realities. For example, Vinuesa and colleagues did not find much published empirical evidence of AI as an enabler or inhibitor of SDG #17 (global partnership for sustainable development) and its various targets.2 Nonetheless, they maintain that several initiatives focusing on the humanistic side of AI can be a means to achieve effective public-private and civil society partnerships and policy coherence for sustainable development (Vinuesa et al. 2020, supplementary data 1).3 They also recognised that AI-driven systems are not easily subject to the oversight or accountability of public experts. However, such systems are massively entering and influencing core social domains, such as healthcare, criminal justice and education (Vinuesa et al. 2020, supplementary data 1; Reisman et al. 2018). Sætra asserted that SDG #17 is part of a group of goals on which AI has minor or no direct effects and limited indirect effects; nonetheless, he recognises that 'AI play a key role as the subject matter both for regulations and policy for the partnership for sustainable development' (Sætra 2021b, 15, italics by authors).

Among the initiatives that monitor AI4SG's advancements, the Oxford Initiative on AIxSDGs, launched in 2019, is a curated database of AI projects addressing the SDGs (Cowls et al. 2021). Presently, four projects can be found in its online repository that promote the 'partnership for the goals' SDG; however, those 'partnerships' are related either to specialised communities, such as those of astronomers and hospital staff, or to national policies and governments.4 However, SDG #17 should also aim at promoting global partnership and cooperation built upon shared values and principles. In particular, concerning technology, SDG #17 established in target 17.6 the Technology Facilitation Mechanism (TFM), as already mentioned. The TFM is intended to be a multi-stakeholder mechanism including UN agencies, governments and various stakeholders to deliver science, technology and innovation (STI) for the SDGs (UN 2015, para. 123). Unfortunately, as highlighted in the Spotlight Global Civil Society Report on the 2030 Agenda and the SDGs, the TFM still lacks an online platform due to the absence of dedicated funding and has an 'untapped potential', since it should not be a forum only for proponents of technology but should include the direct participation of the people affected by it (Daño 2019, 188).

2 Vinuesa et al. (2020) found evidence of positive AI contributions to 15% of SDG 17's subgoals and negative contributions to 5% of its subgoals.
3 Specifically, Vinuesa et al. (2020) referred to OpenAI (project description: https://openai.com/); Partnership on AI (project description: https://www.partnershiponai.org/); AI Now (project description: https://ainowinstitute.org/); and the AI Sustainability Centre in Stockholm (project description: http://www.aisustainability.org/). They also provided reference to Smith and Neupane (2018) and Greene et al. (2019).
4 Oxford Initiative on AIxSDGs: https://www.sbs.ox.ac.uk/research/centres-and-initiatives/oxford-initiative-aisdgs. On the projects related to the promotion of SDG 17, see https://www.aiforsdgs.org/all-projects?sustainable_development%5B%5D=1356&search=d (last access 4 October 2021).
In a few words, we can say that more 'societal deliberations'5 on how sociotechnical systems are now impacting norms and the SDGs, and on how this process should be regulated, are still needed and are still only vaguely implemented. Collective responsibility for sustainability, especially in the digital era of Big Tech corporations, cannot underestimate the role that public-private partnerships (PPPs) and multi-stakeholder initiatives may have as mechanisms for fostering social responses to emerging technological changes and for redistributing power and resources in more equal ways, both nationally and globally. Moreover, when such PPPs and initiatives are placed in a proper and democratic regulatory-institutional environment, they can provide better infrastructures to citizens and improve the interrelated capacities of different groups, which should be considered integral parts of a whole. However, the mechanisms and conceptual frameworks for benchmarking such PPPs and multi-stakeholder engagement are mostly vacuous or altogether side-lined in these discussions. This chapter proposes the concept of a 'social license to operate' to better frame how multiple stakeholders come to trust and, consequently, accept an industry's legitimate position to operate in their community. The following section defines this social license to operate and explains why it is required in the digital age.

5 Such a term is also used by Daño (2019), 188.
4 The Need for a 'Social License to Operate' in the Digital Age

The notion of a 'social license to operate' (SLO) is not new: indeed, it has increasingly taken on a fundamental role in the business literature on sustainability over the years. It was coined in relation to the mining and extractive industry but is now used in a range of other industry sectors, and it is generally defined as the acceptance and trust that a business model or corporation gains from the community in which it is placed and operates (Moffat et al. 2015; Komnitsas 2020). Having a social license to operate means having legitimacy from internal and external stakeholders and from the greater community. Most importantly, it means identifying a business model as a proper social institution: beyond economic and market considerations, every business model is a social entity and thus subject to public accountability and public control (Sale 2019; Melé and Armengou 2015). A social license also means going beyond the laws and regulations of the legal system, since it is related to credibility and practices of social permission. As such, the concept of a social license is based on building and structuring the trust and consent of the people and communities affected by the actions of the business model at stake. Social license theorists do not align on how to understand and measure the value of the social license (Gehman et al. 2017). Nonetheless, the term's popularity is a sign
of a general trend towards stakeholder involvement and democratic procedures in the industry literature. One of the most widely used presentations of the social license is the so-called multi-level pyramid model elaborated by Boutilier and Thomson (2011). In this model, theorists distinguish between three levels: legitimacy, credibility and trust. The SLO includes these three normative components: legitimacy as conformity to norms, credibility as the power to elicit belief and trust as the willingness to be vulnerable to risk or loss on the part of other actors (Thomson and Joyce 2008). Legitimacy is a necessary component of acceptance by stakeholder networks,6 while credibility means that those networks also approve of a business model through formal negotiations or agreements on roles and responsibilities. Finally, trust implies a sense of co-ownership or identification between stakeholders, community and business models through collaborations or shared experiences (Gehman et al. 2017; Boutilier and Thomson 2011; Thompson and Boutilier 2011) (Fig. 3).

Even if it has been explored mainly in relation to well-established corporate frameworks, the discourse on the social license to operate can be extended beyond those sectors to test its adaptability and feasibility in the context of new forms of corporations. Thus, for example, introducing sociological considerations into the business literature on sustainability can constitute an asset for current approaches to AI4SG, since these considerations place an explanatory emphasis on possible trustworthy behaviours on the part of private Big Techs that have an extensive public impact and should account for it. Until now, few scholars have been concerned with the social license in relation to new digital business models and innovation.
Fig. 3 The pyramid model of SLO. (Reproduced from Boutilier and Thomson 2011: 2)
6 Boutilier and Thomson speak of 'stakeholder networks' to include the many actors that are affected by or affect business models beyond and above specific and local communities, such as international human rights activists and others (Boutilier and Thomson 2011, 2–3).
For example, some have identified in the social license a possible constraint on regulatory arbitrage, i.e. taking advantage of gaps in existing regulations, on the part of companies such as Facebook or Uber (Pollman 2019), while others have explored how the failure to account for the inherently public nature of the corporate actions of private business models such as Uber – regardless of whether an existing 'legal' license exists – can result in the loss of the 'social' license (Sale 2019). Finally, others have highlighted the need to earn a social license for big data initiatives during the pandemic (Shaw et al. 2020) or have specifically introduced the issue of the SLO in the governance and responsible risk management of digital corporations, but without providing straightforward suggestions on how to implement the SLO in concrete terms within Big Techs' proactive strategic business models (Verbin 2020, Chap. 8).7

Along those lines, this chapter argues that it is of pivotal importance to initiate a reflection on new global digital business models through the lens of what kind of social license they need. In particular, the sociological literature on the social license can provide a valuable and concrete contribution to the question of the sustainability of Big Tech corporations for several reasons. First, the SLO could be an integral part of a corporate strategy that may help sociotechnical systems involving AI-driven systems stay ahead of legal regulation and proactively endorse a collective responsibility for sustainability in the digital era. Indeed, as a form of long-term self-regulation that implies fair and legitimate procedures, it may contribute to the formation and ongoing evaluation of digital business models' socio-political rights and responsibilities. The SLO can assist such digital business models in earning social acceptability by programmatically incorporating novel accounts of transparency and accountability relationships into their policies and business strategies and by avoiding episodes of corruption or malpractice. Second, the economy of credibility sustained by the SLO can be an effective tool for digital business models to ensure sustainable business growth. Unlike traditional business models that rely on supply and demand mechanisms, Big Tech has its users and consumer groups at its core, as already noted. Therefore, internal forms of control that pay attention to the social license would be crucial, with the aim of creating bilateral processes of change through an ongoing dialogue with users' communities and relevant stakeholders; the understanding of users' and consumers' changing expectations; the deployment of regular reporting requirements, mitigation and monitoring programs; and so on. Indeed, the SLO means searching not only for acceptance but also for approval from the community: beyond the participation of shareholders, the SLO aims at investing in the community, with corporate social initiatives that support or raise awareness of specific social causes through employment policies, employee training, marketing, funds and volunteering (see on this Kotler and Lee 2005; Boutilier 2017). Much of this aspect of the SLO can be operationalised, viz. through the AI4SG norms, via full life-cycle monitoring of systems, allowing designers and stakeholders to continually monitor system inputs/outputs and to restrict use and redesign the system if necessary (cf. Umbrello and van de Poel 2021).
7 See also Joseph, L. 2018. Why the Tech Giants of Silicon Valley Must Rebuild Trust After Explosive Beginnings, available at https://www.weforum.org/agenda/2018/11/why-move-fast-and-break-things-doesn-t-cut-it-anymore/ (last access October 4, 2021).
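As a rough illustration of what such full life-cycle monitoring could look like in software, the sketch below wraps a deployed model so that inputs and outputs are logged and predictions are suspended when outputs drift beyond an agreed threshold. It is a minimal, hypothetical example: the drift measure, the threshold and the hard 'suspend' switch are assumptions made for illustration rather than features of any particular AI4SG or VSD toolset.

```python
# Minimal sketch of life-cycle monitoring for a deployed model (illustrative).
# The drift statistic and threshold are hypothetical design choices.
import logging
from statistics import mean
from typing import Callable, Sequence

logging.basicConfig(level=logging.INFO)

class MonitoredModel:
    def __init__(self, predict: Callable[[Sequence[float]], float],
                 reference_mean: float, drift_threshold: float):
        self.predict = predict
        self.reference_mean = reference_mean      # expected average output
        self.drift_threshold = drift_threshold    # maximum tolerated deviation
        self.outputs: list[float] = []
        self.suspended = False

    def __call__(self, features: Sequence[float]) -> float:
        if self.suspended:
            raise RuntimeError("Model suspended pending stakeholder review.")
        prediction = self.predict(features)
        self.outputs.append(prediction)
        logging.info("input=%s output=%s", list(features), prediction)
        # Simple drift check over the last 100 logged outputs.
        recent = self.outputs[-100:]
        if abs(mean(recent) - self.reference_mean) > self.drift_threshold:
            self.suspended = True
            logging.warning("Drift detected; use restricted until redesign/review.")
        return prediction

# Usage sketch: wrap any scoring function and let stakeholders review the logs.
scorer = MonitoredModel(predict=lambda x: sum(x) / len(x),
                        reference_mean=0.5, drift_threshold=0.2)
```

The substantive point, however, is institutional rather than technical: the logs and the suspension switch only matter if direct and indirect stakeholders are actually entitled to inspect the former and trigger the latter.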
Finally, the SLO may serve as a powerful practice in the public-private dialectic. The risky decisions of a Big Tech may extend well beyond it and reach the general public, and, as scholars have already pointed out, AI effects can be analysed not only at the micro and meso levels but also at the macro level (Sætra 2021a, b). Following the operationalisation of the SLO, the social legitimacy and credibility that should be granted to Big Tech for regulating and delivering essential services related to common goods such as health and security also need to be accompanied by a more enduring value: trust. Trust is a matter of value alignment and of establishing principles and norms on which to collectively rely. The social license is often connected to theories of the social contract (Demuijnck and Fasterling 2016). Translating this discourse into the digital realm sheds light on the fact that we are embedded in a network of mutual relationships between multiple parties. Those parties have different levels of power and different values but should be equitably enabled to flourish and to be responsible for their actions. The literature on the SLO critically engages with the issue of how to balance power relations, with the involvement of a multiplicity of cross-sectoral authorities and agencies, including business models, state or regional governments, international expert agencies, NGOs and many others (Meesters and Behagel 2017). Proposing co-evolution and co-regulation mechanisms and tools constitutes a first step in developing an enduring relationship of trust between those parties. Among such mechanisms and tools, for example, we can include reports on commitments produced by business models that can be monitored and overseen by NGOs or other third-party actors (Morrison 2014; Blair et al. 2008); collaboration between business models and external stakeholders, such as policymakers or civil society organisations, to address cultural and social issues or human rights violations; and cooperation with external stakeholders, such as experts or governments, to engage or communicate with the public more effectively and transparently or to manage environmental, social and governance risks, and so on. If 'institutionalised trust' – which in SLO theories implies that the interactional relationships between business models and stakeholders' institutions are based on an 'enduring regard' for each other's interests (Boutilier and Thomson 2011, 4) – is lacking, then psychological identification, understood as a status of well-established trust, is unlikely.

Losing the SLO is a socio-political risk. Big Tech corporations have already been investigated for violations of trust: from breaching competition and monopoly laws and abusing their dominance in the online market8 to breaching users' privacy rights, as demonstrated in the case of the Cambridge Analytica scandal (Isaak and Hanna 2018).
8 See, for example, Schulze 2019. If You Want to Know What a US Tech Crackdown May Look Like, Check Out What Europe Did, June 7, 2019, available at https://www.cnbc.com/2019/06/07/how-google-facebook-amazon-and-apple-faced-eu-tech-antitrust-rules.html (last access October 4, 2021).
Moreover, a kind of 'regulatory inertia' in recent years has placed Big Techs in a position to operate without the need to ensure compliance with international principles or considerations of sustainable development (Truby 2020). However, beyond possible legal and regulatory intervention, it would undoubtedly be significantly beneficial to ensure trustworthiness and public scrutiny of the decisions and actions of Big Techs' new digital business models, especially in modalities that make the latter understand their responsibility towards society. The 'social license to operate' can be adopted as a practice to foster global collaboration and coordination among different spheres: private business models, AI researchers, AI-based initiatives focusing on the SDGs, institutions, legislators, policymakers and civil society at large. If further implemented and developed, its theoretical framework can represent a more comprehensive approach to the sustainability of new digital business models, paving the way for its synthesis into a practical methodology that assists AI projects, initiatives and sociotechnical systems in their support of the SDGs.
5 Conclusion

The AI for Social Good norms are a growing set of design imperatives that aim at designing AI towards the social good. However, despite many projects exploring how these norms can be operationalised towards achieving higher-order values like those of the UN Sustainable Development Goals, they offer little guidance on how their uptake can be increased within the existing business models of Big Tech corporations. The tech giants are arguably the most impactful market players of the digital age. However, they operate seemingly autonomously despite the impacts they have on multiple stakeholders. This chapter looked at the types of business models that have a greater propensity to operationalise and forward the AI4SG norms towards supporting global goals like those of the UN SDGs. In doing so, we introduced the concept of the 'social license to operate' (SLO). This sociological notion has its origin in the literature on the extractive and mining industry but has now become increasingly used in the sustainability literature across several different industries. We argued that the SLO can better capture the criteria necessary for multiple and diverse stakeholders to collaborate with and, above all, to trust industry giants and therefore accept their operation in their communities. Indeed, we demonstrated that the SLO can be a practice that, relying on and further developing normative criteria such as legitimacy, credibility and trust, would be significantly beneficial in ensuring trustworthiness and public scrutiny of the decisions and actions of new digital business models. Overall, the SLO could be a powerful social tool to induce such digital business models to adopt responsible, sustainable and proactive business strategies.

Acknowledgements The content of this publication has not been approved by the United Nations and does not reflect the views of the United Nations or its officials or Member States.
References Amnesty International. 2014. Amnesty International Launches New App to Fight Attack, Kidnap and Torture. Amnesty International. https://www.amnesty.org/en/latest/news/2014/06/ amnesty-international-launches-new-app-fight-attack-kidnap-and-torture/ Ballew, Matthew T., Matthew H. Goldberg, Seth A. Rosenthal, Abel Gustafson, and Anthony Leiserowitz. 2019. Systems Thinking as a Pathway to Global Warming Beliefs and Attitudes Through an Ecological Worldview. Proceedings of the National Academy of Sciences 116 (17): 8214–8219. https://doi.org/10.1073/pnas.1819310116. Blair, Margaret M. 2008. The New Role for Assurance Services in Global Commerce. Journal of Corporation Law 325. https://scholarship.law.vanderbilt.edu/faculty-publications/30/. Boutilier, Robert G. 2017. A Measure of the Social License to Operate for Infrastructure and Extractive Projects. SSRN. https://doi.org/10.2139/ssrn.3204005. Boutilier, Robert G., and Ian Thomson. 2011. Modelling and Measuring the Social License to Operate: Fruits of a Dialogue Between Theory and Practice. Socialicense.Com. https://socialicense.com/publications/Modelling%20and%20Measuring%20the%20SLO.pdf Briscoe, Patricia. 2015. Global Systems Thinking in Education to End Poverty: Systems Leaders with a Concerted Push. International Studies In Educational Administration (Commonwealth Council For Educational Administration & Management (CCEAM)) 43 (3): 5–19. https:// www.academia.edu/34485128/Global_Systems_Thinking_in_Education_to_End_Poverty_ Systems_Leaders_with_a_Concerted_Push. Capasso, Marianna, and Steven Umbrello. 2021. Responsible Nudging for Social Good: New Healthcare Skills for AI-Driven Digital Personal Assistants. Medicine, Health Care And Philosophy 25: 11. https://doi.org/10.1007/s11019-021-10062-z. Cowls, Josh, Andreas Tsamados, Mariarosaria Taddeo, and Luciano Floridi. 2021. A Definition, Benchmark and Database of AI for Social Good Initiatives. Nature Machine Intelligence 3 (2): 111–115. https://doi.org/10.1038/s42256-021-00296-0. Culpepper, Pepper D., and Kathleen Thelen. 2019. Are We All Amazon Primed? Consumers and the Politics of Platform Power. Comparative Political Studies 53 (2): 288–318. https://doi. org/10.1177/0010414019852687. Daño, Neth. 2019. SDG 17. Can the Technology Facilitation Mechanism Help Deliver the Sdgs in the Era of Rapid Technological Change? Spotlight on Sustainable Development 2019. United Nations. https://www.2030spotlight.org/sites/default/files/spot2019/Spotlight_ Innenteil_2019_web_sdg17.pdf. Demuijnck, Geert, and Björn Fasterling. 2016. The Social License to Operate. Journal of Business Ethics 136 (4): 675–685. https://doi.org/10.1007/s10551-015-2976-7. Department of Health and Social Care. 2019. NHS Health Information Available Through Amazon’s Alexa. GOV.UK. https://www.gov.uk/government/news/ nhs-health-information-available-through-amazon-s-alexa. Di Vaio, Assunta, Rosa Palladino, Rohail Hassan, and Octavio Escobar. 2020. Artificial Intelligence and Business Models in the Sustainable Development Goals Perspective: A Systematic Literature Review. Journal of Business Research 121: 283–314. https://doi.org/10.1016/j. jbusres.2020.08.019. European Commission, Directorate-General for Communications Networks, Content and Technology. 2021. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts Com/2021/206 Final. European Commission. 
https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52021PC0206. Floridi, Luciano, Josh Cowls, Thomas C. King, and Mariarosaria Taddeo. 2020. How to Design AI for Social Good: Seven Essential Factors. Science and Engineering Ethics 26 (3): 1771–1796. https://doi.org/10.1007/s11948-020-00213-5.
Gazzaneo, Lucia, Antonio Padovano, and Steven Umbrello. 2020. Designing Smart Operator 4.0 For Human Values: A Value Sensitive Design Approach. Procedia Manufacturing 42: 219–226. https://doi.org/10.1016/j.promfg.2020.02.073. Gehman, Joel, Lianne M. Lefsrud, and Stewart Fast. 2017. Social License to Operate: Legitimacy by Another Name? Canadian Public Administration 60 (2): 293–317. https://doi.org/10.1111/ capa.12218. Greene, Daniel, Anna Lauren Hoffmann, and Luke Stark. 2019. Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. Hawaii International Conference On System Sciences. http://hdl.handle.net/10125/59651. Isaak, Jim, and Mina J. Hanna. 2018. User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer 51 (8): 56–59. https://doi.org/10.1109/mc.2018.3191268. Joseph, Lauren. 2018. Why the Tech Giants of Silicon Valley Must Rebuild Trust After Explosive Beginnings. World Economic Forum. https://www.weforum.org/agenda/2018/11/ why-move-fast-and-break-things-doesn-t-cut-it-anymore/. Kersting, Kristian, Zhao Xu, Mirwaes Wahabzada, Christian Bauckhage, Christian Thurau, Christoph Roemer, Agim Ballvora, Uwe Rascher, Jens Leon, and Lutz Pluemer. 2012. Pre- symptomatic Prediction of Plant Drought Stress Using Dirichlet-Aggregation Regression on Hyperspectral Images. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI’12), 302–308. AAAI Press. Komnitsas, Konstantinos. 2020. Social License to Operate in Mining: Present Views and Future Trends. Resources 9 (6): 79. https://doi.org/10.3390/resources9060079. Kotler, Philip, and Nancy Lee. 2005. Corporate Social Responsibility: Doing the Most Good for Your Company and Your Cause. Hoboken: Wiley. Longo, Francesco, Letizia Nicoletti, and Antonio Padovano. 2017. Smart Operators in Industry 4.0: A Human-Centered Approach to Enhance Operators’ Capabilities and Competencies Within the New Smart Factory Context. Computers & Industrial Engineering 113: 144–159. https://doi.org/10.1016/j.cie.2017.09.016. Ma, Wei, Kendall Nowocin, Niraj Marathe, and George H. Chen. 2019. An Interpretable Produce Price Forecasting System for Small and Marginal Farmers in India Using Collaborative Filtering and Adaptive Nearest Neighbors. Proceedings of the Tenth International Conference on Information and Communication Technologies and Development. https://doi. org/10.1145/3287098.3287100. Meesters, Marieke Evelien, and Jelle Hendrik Behagel. 2017. The Social Licence to Operate: Ambiguities and the Neutralization of Harm in Mongolia. Resources Policy 53: 274–282. https://doi.org/10.1016/j.resourpol.2017.07.006. Melé, Domènec, and Jaume Armengou. 2015. Moral Legitimacy in Controversial Projects and Its Relationship with Social License to Operate: A Case Study. Journal of Business Ethics 136 (4): 729–742. https://doi.org/10.1007/s10551-015-2866-z. Moffat, Kieren, Justine Lacey, Airong Zhang, and Sina Leipold. 2015. The Social Licence to Operate: A Critical Review. Forestry 89 (5): 477–488. https://doi.org/10.1093/forestry/cpv044. Morrison, John. 2014. Government Approval Not Enough, Businesses Need Social License. Ihrb.Org. https://www.ihrb.org/focus-areas/commodities/ government-approval-not-enough-businesses-need-social-license. Murphy, Colleen, Paolo Gardoni, Hassan Bashir, Charles E. Harris, and Jr., and Eyad Masad. 2015. Engineering Ethics For A Globalized World. Switzerland: Springer, Cham. Pasquale, Frank A. 2015. 
The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press. Pollman, Elizabeth. 2019. Tech, Regulatory Arbitrage, and Limits. European Business Organization Law Review 567. https://scholarship.law.upenn.edu/faculty_scholarship/2567: 567. Quinn, John, Kevin Leyton-Brown, and Ernest Mwebaze. 2011. Modeling and Monitoring Crop Disease in Developing Countries. Proceedings of the AAAI Conference on Artificial Intelligence 25 (1): 1390–1395. https://ojs.aaai.org/index.php/AAAI/article/view/7811.
Rahman, K. Sabeel. 2018. The New Utilities: Private Power, Social Infrastructure, and the Revival of the Public Utility Concept. Cardozo Law Review 39 (5): 101–171. http://cardozolawreview.com/the-new-utilities-private-power-social-infrastructure-and-the-revival-of-the-public- utility-concept/. Rahman, K. Sabeel, and Kathleen Thelen. 2019. The Rise of the Platform Business Model and the Transformation of Twenty-First-Century Capitalism. Politics and Society 47 (2): 177–204. https://doi.org/10.1177/0032329219838932. Reisman, Dillon, Meredith Whittaker, and Kate Crawford. 2018. Algorithms Are Making Government Decisions. The Public Needs to Have a Say. American Civil Liberties Union. https://www.aclu.org/issues/privacy-technology/surveillance-technologies/ algorithms-are-making-government-decisions. Sætra, Henrik Skaug. 2021a. A Framework for Evaluating and Disclosing the ESG Related Impacts of AI with the Sdgs. Sustainability 13 (15): 8503. https://doi.org/10.3390/su13158503. ———. 2021b. AI in Context and the Sustainable Development Goals: Factoring in the Unsustainability of the Sociotechnical System. Sustainability 13 (4): 1738. https://doi. org/10.3390/su13041738. Sale, Hillary A. 2019. Social License And Publicness. SSRN. https://papers.ssrn.com/sol3/papers. cfm?abstract_id=3403706. Schulze, Elizabeth. 2019. If You Want to Know What a US Tech Crackdown May Look Like, Check Out What Europe Did. CNBC. https://www.cnbc.com/2019/06/07/how-google-facebook- amazon-and-apple-faced-eu-tech-antitrust-rules.html. Schwan, Gesine. 2019. Sustainable Development Goals: A Call for Global Partnership and Cooperation. GAIA – Ecological Perspectives For Science And Society 28 (2): 73–73. https:// doi.org/10.14512/gaia.28.2.1. Sharon, Tamar. 2016. The Googlization of Health Research: From Disruptive Innovation to Disruptive Ethics. Personalized Medicine 13 (6): 563–574. https://doi.org/10.2217/ pme-2016-0057. ———. 2021. From Hostile Worlds to Multiple Spheres: Towards a Normative Pragmatics of Justice for the Googlization of Health. Medicine, Health Care and Philosophy 24 (3): 315–327. https://doi.org/10.1007/s11019-021-10006-7. Shaw, James A., Nayha Sethi, and Christine K. Cassel. 2020. Social License for the Use of Big Data in the COVID-19 Era. Npj Digital Medicine 3 (1): 128. https://doi.org/10.1038/ s41746-020-00342-y. Smith, Matthew, and Sujaya Neupane. 2018. Artificial Intelligence and Human Development: Toward a Research Agenda. Ottawa: International Development Research Centre. http://hdl. handle.net/10625/56949. Srnicek, Nick. 2016. Platform Capitalism. Cambridge, UK: Polity. Thompson, Ian, and Robert Boutilier. 2011. Social License to Operate. In SME Mining Engineering Handbook, 1779–1796. Littleton, CO: Society for Mining, Metallurgy and Exploration. Thompson, Ian, and Susan Joyce. 2008. The Social Licence to Operate: What It Is and Why Does It Seem so Difficult to Obtain? Presentation, Prospectors and Developers Association of Canada Convention, Toronto. Tomašev, Nenad, Julien Cornebise, Frank Hutter, Shakir Mohamed, Angela Picciariello, Bec Connelly, Danielle C.M. Belgrave, et al. 2020. AI for Social Good: Unlocking the Opportunity for Positive Impact. Nature Communications 11 (1): 71. https://doi.org/10.1038/ s41467-020-15871-z. Truby, Jon. 2020. Governing Artificial Intelligence to Benefit the UN Sustainable Development Goals. Sustainable Development 28 (4): 946–959. https://doi.org/10.1002/sd.2048. Umbrello, Steven, and Ibo van de Poel. 2021. 
Mapping Value Sensitive Design Onto AI for Social Good Principles. AI And Ethics 1 (3): 283–296. https://doi.org/10.1007/s43681-021-00038-3. United Nations. 2015. Transforming Our World: The 2030 Agenda for Sustainable Development. Sdgs.Un.Org. https://sdgs.un.org/2030agenda.
van de Poel, Ibo. 2020. Embedding Values in Artificial Intelligence (AI) Systems. Minds and Machines 30 (3): 385–409. https://doi.org/10.1007/s11023-020-09537-4. Vargo, Deedra, Lin Zhu, Briana Benwell, and Zheng Yan. 2020. Digital Technology Use During COVID – 19 Pandemic: A Rapid Review. Human Behavior And Emerging Technologies 3 (1): 13–24. https://doi.org/10.1002/hbe2.242. Verbin, Ivri. 2020. Corporate Responsibility in the Digital Age. London: Routledge. Vernim, Susanne, Harald Bauer, Erwin Rauch, Marianne Thejls Ziegler, and Steven Umbrello. 2022. A Value Sensitive Design Approach for Designing AI-Based Worker Assistance Systems in Manufacturing. Vol. 200, 505. Procedia Computer Science. Vinuesa, Ricardo, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature Communications 11 (1): 10. https://doi.org/10.1038/s41467-019-14108-y. von Lücken, Christian, and Ricardo Brunelli. 2008. Optimal Crops Selection Using Multiobjective Evolutionary Algorithms. Proceedings of the Twentieth Innovative Applications of Artificial Intelligence Conference: 1751–1756. https://doi.org/10.1609/aimag.v30i2.2212. You, Jiaxuan, Xiaocheng Li, Melvin Low, David Lobell, and Stefano Ermon. 2017. Deep Gaussian Process for Crop Yield Prediction Based on Remote Sensing Data. In Proceedings of the Thirty- First AAAI Conference on Artificial Intelligence (AAAI’17), 4559–4565. AAAI Press. Zysman, John, and Martin Kenney. 2018. The Next Phase in the Digital Revolution. Communications of the ACM 61 (2): 54–63. https://doi.org/10.1145/3173550.
Part II
AIxSDGs: Existing and Potential Use Cases
A Legal Identity for All Through Artificial Intelligence: Benefits and Drawbacks in Using AI Algorithms to Accomplish SDG 16.9 Mirko Forti
Abstract The unavailability of identification documents is a determining factor leading to social and economic exclusion for undocumented people. They cannot interact with public bodies and private subjects in an official way, so they cannot access services (healthcare, education, social welfare, etc.) or obtain formal employment. This sort of ‘identity gap’ between undocumented people and individuals with ID documents exacerbates socioeconomic discrepancies and inequalities and does not permit inclusive social development. Artificial intelligence represents a valid instrument in accomplishing the goal to provide legal identity for all. AI algorithms could take care of several related tasks, such as identity authentication/validation, data matching and storage. However, using AI tools to collect and manage identity-related data comes with risks and drawbacks worth mentioning. Social and cultural influences contribute to the development of personal identity that is not limited to official documents. Thus, AI-driven technologies deployed in identity management should also consider such elements in performing their tasks. This chapter argues for adequate human rights safeguards when deploying AI algorithms to manage identity-related data. More specifically, this contribution calls for human oversight mechanisms and the periodical recalibration of such algorithms to address mutating environmental variables in the development of personal identity. Keywords Identity · Artificial intelligence · Migrants · Non-discrimination · Person · Self
M. Forti (*) University of Tuscia, Viterbo, Italy e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_14
1 Introduction

The unavailability of legal identity is an urgent issue. Civil registration offices may not be able to provide everyone with identification documents for several reasons: the lack of appropriate infrastructures to manage identity-related data, the impossibility of reaching individuals in rural areas, natural disasters destroying public archives and persecution on a discriminatory basis are only a few factors that could cause the exclusion of specific individuals from official identification and birth registration, especially in developing countries. Holding a legal identity is a prerequisite to being a recognised member of civil society. The lack of identification documents is a determining factor of social and economic exclusion. More specifically, undocumented people cannot access the same rights and opportunities as any other individual (Gelb and Clark 2013). They cannot interact with public bodies or private entities, so they are unable to access services like healthcare, education, social welfare, formal employment and more. This sort of 'identity gap' between undocumented people and registered individuals exacerbates socioeconomic discrepancies and inequalities and does not permit inclusive social development.

According to the Identification for Development (ID4D) programme, an initiative of the World Bank to address digital identity issues, about 1 billion people still do not have official proof of their identity (Global ID4D Dataset 2021). One in two women in low-income countries does not have an ID and therefore cannot take part in public society. Vulnerable segments of the population, like women or disabled persons, may face severe difficulties in obtaining ID credentials (ID4D Annual Report 2020). The United Nations aims to provide every individual with a legal identity by the year 2030, according to Sustainable Development Goal (SDG) 16.9, overcoming such disparities and divergences.

Emerging technologies, namely artificial intelligence, could help national governments and public bodies to accomplish SDG 16.9. AI algorithms could take care of several related tasks, such as identity authentication/validation, data matching and storage. However, using AI tools to collect and manage identity-related data comes with risks and drawbacks worth mentioning. The working routines and inherent features of artificial intelligence could exacerbate already existing discriminatory patterns in identity matters. First of all, AI algorithms need to understand what identity is.

This chapter addresses the regulatory environment and political framework of AI devices used to provide legal identity for all. Its working hypothesis is that artificial intelligence could represent a valuable instrument in implementing SDG 16.9, but the deployment of AI instruments for identification purposes should be carried out in accordance with solid legal safeguards and in light of specific cultural and societal considerations. The first part of the chapter introduces the concept of identity. It highlights how this issue is not only a matter of identification documents but brings together several other aspects. More specifically, this section addresses how cultural, social and
environmental variables could affect the inherent meaning and ontological structure of identity itself. The second part illustrates the role of AI algorithms in reaching SDG 16.9. More specifically, it will consider practical examples regarding the implementation of AI-based software to collect and manage identity-related data in order to investigate their human rights implications. This chapter will focus on the migratory context because of the importance of identification in the management of migration flows. Furthermore, undocumented people are an urgent concern: according to the International Organization for Migration (IOM), 10–15% of migrants held an irregular status in 2020.1 The last part of the chapter deals with the positive sides and potential drawbacks of using AI algorithms for identification purposes and provides a few legal and political recommendations to create a safe and secure environment for any individual.

1 IOM World Migration Report, https://worldmigrationreport.iom.int/2020 (last access 23/12/2021).
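Because the chapter repeatedly refers to 'data matching' as one of the tasks AI could take over, a minimal sketch may help fix ideas. The Python fragment below scores how likely two civil-registry records are to describe the same person using simple string similarity. It is purely illustrative: the fields, weights, threshold and example names are hypothetical, and real identity-matching systems are far more elaborate (and far more consequential).

```python
# Illustrative sketch of probabilistic identity matching between two
# civil-registry records. Field names, weights and threshold are hypothetical.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough string similarity between 0 and 1."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_score(record_a: dict, record_b: dict) -> float:
    """Weighted evidence that two records describe the same person."""
    weights = {"name": 0.5, "birth_date": 0.3, "birth_place": 0.2}
    return sum(weights[field] * similarity(record_a.get(field, ""),
                                           record_b.get(field, ""))
               for field in weights)

candidate = {"name": "Amina Yusuf", "birth_date": "1994-03-02", "birth_place": "Kismayo"}
registry_entry = {"name": "Aamina Yusuf", "birth_date": "1994-03-02", "birth_place": "Kismaayo"}

if match_score(candidate, registry_entry) > 0.85:   # hypothetical threshold
    print("Probable match: route to a human registrar for confirmation.")
```

The sketch also makes the chapter's later worry concrete: everything hinges on which fields are treated as identity-constituting and on who sets the threshold, choices that embed exactly the cultural assumptions discussed in the next section.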
2 The Concept of Identity: External Variables and Internal Dimension

The concept of identity indicates the inherent features and elements that distinguish an individual or a social group from others (Al Tamimi 2018). Thus, it is a relational notion that addresses a specific frame of reference as a term of comparison. More specifically, identity explains relationships between members of society based on several variables like cultural background (cultural identity), nationality (national identity), ethnicity (ethnic identity) and religion (religious identity). Identity is the result of a human elaboration: how individuals think about themselves (Mutanen 2010). The notion of personal identity brings together the different environments and social groups in which individuals conduct their own lives. Thus, the construction process of the self occurs in a specific framework that is not uniquely determined and could change over time (Mutanen 2007). A man could play several roles throughout his life: son, friend, husband, worker, dad and many others. However, he remains the same person through all these experiences. Philip Riley explains in this regard how personal identity is composed of two parts: person and self (Riley 2003). The first term addresses the peculiar traits and elements that characterise an individual in a specific social group. In other words, person indicates social identity. On the other hand, self refers to the intimate, subjective and personal characteristics of an individual: the inner core of a human being. Along the same lines, Stuart Hall distinguishes two approaches to the notion of identity (Hall 1990). The first one encompasses the inherent nature of a person or a community. As far as a social group is concerned, this approach indicates a shared frame of reference that identifies its members (e.g. common national origins, shared history). The second one understands identity as the result of a continual construction
process. Factors and variables like history, power and economy play a determining role in shaping the identity of an individual or a social group. Its construction process finds its basis in the contraposition between the inner dimension and the external social space: person and self, in Riley's terms. Identities do not adhere to a shared and immutable essence. Elemental oppositions within the same framework of reference foster the development of a consciousness of self (Redman 2000). In other words, personal identity finds its significance in relations with different experiences and codes understood as 'others'. Thus, the concept of 'national citizens' has its meaning in contrast with 'immigrants', likewise 'religious people' with 'atheists'. This definition and construction process highlights the inherent precariousness of identity formation (Grossberg 1996). 'Others' can represent an element of instability for the inner meaning of a concept itself. There are several examples of this in different contexts: the idea of artificial intelligence is radically challenging the traditional definition of human intelligence, and, in the same way, migratory flows are transforming national identities. Identities find their meaning through a process of exclusion that places extraneous elements outside of a specific category. Nevertheless, these alien features play a fundamental role in defining what is inside the identity label. Du Gay explains this mechanism as the 'constitutive outside' (Du Gay 1996), while Al Tamimi talks about an 'excess of identity' (Al Tamimi 2018). The decision about what is inside and outside the identity categorisation is an act of power. It is not the recognition of an objective and immutable state of nature but the construction of a hierarchical relationship between the elements taken into consideration (Laclau 1990). The dominating subject chooses what exceeds a given framework of reference. Thus, the Ancient Romans considered foreigners 'barbarians' and Christians called Muslims 'infidels'. This brief explanation indicates how external influences can shape the identification process of an individual or a social group. The concept of identity brings together self-awareness (how individuals perceive themselves) with social reputation (how other members of a collectivity consider a specific individual) (Jenkins 1997). Accordingly, different social, cultural and environmental backgrounds can generate different approaches to the idea of identity.
2.1 Different Societies, Different Identities

Individualism is a prominent feature of the traditional paradigm of Western culture (Solomon 1994). Individuals maintain their distinct individualities with respect to their social group (Johnson 1985). The idea of privacy, understood as the 'right to be let alone', is a paradigmatic example in this regard (Warren and Brandeis 1890). In other words, the right to privacy is a safeguard for the individual against external influences (Westin 1968). Mauss argues that individual autonomy is a philosophical elaboration typical of Western thought (Leacock 1954). More specifically, the
concept of human beings implies a moral status inherently linked to humanness. Thus, according to this vision, all individuals are accountable for their actions.
However, an individualistic approach is not the only possible way to address the notion of identity. Non-Western societies adopt different philosophical elaborations. The perspective of the Tallensi people of Africa focuses on participation in social life to assess the role of a person in society (La Fontaine 1985). More specifically, they regard a life spent according to social conventions as a fundamental requirement for achieving a recognised personhood in the social group of reference. Thus, according to the Tallensi, personhood is the result of a sum of statuses (being a husband, having children, etc.). Kemetic philosophy, which finds its origins in Ancient Egypt, holds that the individual is the centre of the community. Individuals find personal fulfilment as members of a collective and through actions aimed at social well-being (Karenga 1999). Likewise, the African philosophy of Ubuntu addresses the individual as a functional part of the social group. The word Ubuntu has different meanings according to the specific cultural and social background, but it generally indicates values like solidarity and generosity (Kamwangamalu 1999). The Confucian tradition considers the human being as the centre of different social relationships and not as a separate unit (Ho 1995). According to this philosophy, humans should pursue the value of Ren, understood as kindness and benevolence, in order to be compassionate human beings (Cheng 1998).
This brief analysis shows how the concept of identity is constantly changing and how external variables can shape it according to changing circumstances. National governments should carry out identification procedures for their citizens that take account of these peculiarities. In other words, the effort to accomplish SDG 16.9 of providing identification documents for everyone should consider that identity is not a static concept but an evolving idea. How can something constantly changing be encapsulated in official documents? Emerging technologies could represent a valid answer in this regard. However, the deployment of technological solutions, mainly AI-based ones, for identification purposes must respect the international human rights framework and consider the ever-changing nature of identity.
3 Artificial Intelligence and Identity

AI-driven devices could carry out several identity-related procedures. More specifically, they can perform identity checks by managing interoperable databases. Furthermore, AI tools can conduct facial recognition operations through biometric data to identify undocumented people. These are only a few examples of how artificial intelligence could play a fundamental role in such a context and help national authorities to save time and money while carrying out identification procedures. Despite these possible advantages, the use of artificial intelligence technologies entails risks for the identity rights and personal autonomy of the people involved. The deployment of AI-based devices in the migration context for identification purposes is a helpful case study to understand the potential impact of artificial
intelligence on the elaboration of identity. More specifically, this analysis investigates how AI-driven tools could shape self-awareness and how individuals perceive themselves in a context that brings together people from different cultural, social and environmental backgrounds. Furthermore, it will point out the human rights implications related to the use of such technologies.
3.1 AI-Based Identification Procedures in the Migration Context

National authorities and international organisations commonly use AI tools to manage migratory flows and identify undocumented people. The Canadian government deploys AI technologies to assess immigrants' applications to enter the national territory (Molnar and Gill 2018). The aim is to identify any potential sign of fraudulent declarations and risks to national security and public order. Algorithms collect and process data from multiple sources (e.g. applications, interviews during the migration management process, etc.) to provide authorities (e.g. border guards, tribunals, administrative courts) with outcomes that could help their decision-making process (e.g. granting a visa). The United States border authorities deploy AI-based surveillance tools to patrol the national border with Mexico (Solon 2018). Drones, cameras and sensors help guards and agents to detect illegal cross-border movements.
National governments and law enforcement agencies often justify AI technologies in managing migration flows as a means of preventing terrorist attacks. The US Extreme Vetting programme had this goal (Glaser 2017). Automated decision-making systems assess third-country nationals' applications to enter US territory. AI algorithms can perform this evaluation task by collecting and processing data from different sources (public archives, national agencies' databases, social media accounts) to identify any possible threat to national security. Thus, automated programmes conduct an evaluation of applicants through the analysis of their features, peculiar traits and typical behaviours. The US government subsequently stopped this migration management initiative due to its human rights implications (Root 2018).
AI algorithms can also perform law enforcement-related tasks. The AI-driven facial recognition software SARI, used by the Italian police, conducts automated searches through the AFIS (Automated Fingerprint Identification System) database to identify possible crime suspects. Automated decision-making systems can help national governments to manage asylum seekers and refugees on their territory. The Swiss government uses artificial intelligence algorithms to collect and process data describing the main features (population, job opportunities, access to services) of different locations to find the best place to relocate these individuals (Bansak et al. 2018). German border
authorities deploy AI-based technologies for language recognition and real-time translation (Tangermann 2017). The aim is to foster collaboration and dialogue between third-country nationals and border guards and agents. Algorithms can also forecast future migratory movements and help states and international organisations to manage flows as effectively as possible (Carammia and Dumont 2018).
This brief analysis highlights how artificial intelligence plays a crucial role in managing migratory flows and, more specifically, in identifying undocumented people in such a context. However, the use of AI algorithms could come at a cost. Lack of transparency, an unclear governance and accountability framework and potentially biased datasets are only a few of the AI-related risks that could have a tremendous impact on human rights safeguards and on the process of identity elaboration and development of the individuals involved. A focus on how the European Union is currently using AI-based technologies in migration management and border control for identification purposes can help to better understand the potential consequences of artificial intelligence for identity and to address the regulatory gaps and political voids in this regard.
3.2 Artificial Intelligence for Identification Purposes: The EU Experience

The European Union is carefully considering the deployment of technological devices to manage migratory flows and identify all the individuals entering its external frontiers. Several EU-funded research projects consider the use of AI tools to perform border control procedures, including identity checks. The Trespass project implements an AI-based surveillance system with a risk-based approach. In other words, AI algorithms assess all entries into the national territory to detect any potential sign of risk. The Roborder programme provides the EU Member States with an automated surveillance system composed of robots, drones and driverless vehicles capable of operating on every surface. The Foldout project implements a platform of real-time surveillance based on a system of sensors and cameras to detect any illegal cross-border movement.
The EU also deploys AI-based software as lie-detection systems to investigate potential malpractice in the management of migration flows. More specifically, the iBorderCtrl project aims to understand whether artificial intelligence could detect false or contradictory statements made by migrants when speaking with EU border guards. An avatar of a police officer asks migrants questions about their journey while cameras record the interview. In the end, the software produces a QR code that indicates whether the interviewee is telling the truth. In case of false declarations, human border guards can carry out additional checks. Thus, algorithms consider body language to detect any suspicious signs of lying. This working routine has several consequences worth mentioning. More specifically, the unregulated use of AI algorithms could produce discriminatory results that may harm the identity of individuals.
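The projects cited above do not disclose their internal decision logic, so the following minimal Python sketch is purely hypothetical: it only illustrates the general risk-scoring pattern described in this subsection, in which features drawn from several sources are combined into a single score that is compared against a policy threshold. All field names, weights and the threshold value are invented for the example and do not describe Trespass, iBorderCtrl or any other real system.

from dataclasses import dataclass

@dataclass
class Application:
    # Hypothetical stand-ins for data gathered from forms, interviews and
    # external databases during the screening process.
    statement_consistency: float        # 0.0 (contradictory) to 1.0 (consistent)
    document_check_score: float         # 0.0 (failed checks) to 1.0 (passed)
    watchlist_match_probability: float  # 0.0 to 1.0

# Hypothetical weights; the chapter's point is precisely that such choices
# encode value judgements that usually remain invisible to those affected.
WEIGHTS = {
    "statement_consistency": -0.4,
    "document_check_score": -0.4,
    "watchlist_match_probability": 1.0,
}
RISK_THRESHOLD = 0.2  # hypothetical policy cut-off

def risk_score(app: Application) -> float:
    # Weighted sum of the input features; higher means "riskier".
    return sum(weight * getattr(app, name) for name, weight in WEIGHTS.items())

def flag_for_further_checks(app: Application) -> bool:
    # In deployed systems the human officer typically sees only the flag,
    # not the reasoning that produced it.
    return risk_score(app) > RISK_THRESHOLD

Even in this toy version, the outcome depends entirely on the chosen weights and threshold, which is where the transparency and accountability concerns discussed in the next section arise.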
However, the EU approach to the identification of undocumented people is not limited to these programmes but involves the implementation of an interoperability regime between digital archives containing personal data, including biometrics. The term biometrics indicates the identification of individuals through the automated assessment of their peculiar and inherent physical or behavioural features, such as voice, fingerprints and face (Kloppenburg and Van der Ploeg 2018). The biometric identification procedure involves different phases. Firstly, enrolment in the database consists of the creation of a reference image. More specifically, automated processes scan and collect morphological features of an individual and generate raw biometric data representing the captured biometric sample (Kloppenburg and Van der Ploeg 2018). This template contains only limited information, which the pattern recognition algorithm analyses according to specified comparison points to recognise the enrolled person. The verification phase occurs when sensors capture another biometric record to compare with the stored biometric template. This comparison assesses the similarity between the two biometric samples. If the result of this examination is above a certain threshold, the biometric system recognises the given person (Jain et al. 2011).
The recently released New Pact on Migration and Asylum2 includes the recasting of the Eurodac Regulation.3 New norms will allow law enforcement agencies and police forces to access EU-centralised biometric databases to conduct criminal investigations. The creation of the Central Identity Repository will make the biometric records of millions of third-country nationals from outside Europe available to such authorities. The use of biometric data for identification raises many questions and opens the door to possible abuse. In particular, there is a risk of over-policing of individuals belonging to minority groups. In general terms, the deployment of biometric processing activities may exacerbate existing vulnerabilities and exploit the hierarchical relationships that already exist between individuals.
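As a rough illustration of the enrolment and verification steps described above, the following minimal Python sketch compares a freshly captured sample with a stored template and accepts the match only if a similarity threshold is exceeded. The feature representation, the cosine similarity metric and the threshold value are simplifying assumptions made for this sketch, not a description of Eurodac or of any system actually deployed at EU borders.

import numpy as np

SIMILARITY_THRESHOLD = 0.80  # hypothetical operating point

def enrol(raw_sample: np.ndarray) -> np.ndarray:
    # In a real system a pattern-recognition model reduces the raw capture to a
    # compact template; here the feature vector is simply normalised.
    return raw_sample / np.linalg.norm(raw_sample)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(stored_template: np.ndarray, new_sample: np.ndarray) -> bool:
    # The decision is binary: small shifts in the threshold change who is
    # recognised and who is rejected, which is where policy enters the design.
    return cosine_similarity(stored_template, enrol(new_sample)) >= SIMILARITY_THRESHOLD

The threshold turns a continuous similarity score into a yes/no identification, so its calibration directly determines the rates of false matches and false rejections that the following sections discuss in terms of discrimination.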
4 AI-Based Devices for Identification Purposes: Ethical Concerns and Legal Issues

This first part of the analysis illustrates some possible practical uses of AI-driven tools in the management of migratory flows. AI algorithms process data to provide authorities with the identification of an individual. More specifically, AI-based technologies could recognise individuals through the analysis of morphological features and externally visible patterns (Espin-Leòn et al. 2020).
2 A brief summary of the New Pact on Migration and Asylum is available at https://ec.europa.eu/info/strategy/priorities-2019-2024/promoting-our-european-way-life/new-pact-migration-and-asylum_en (last access 19/12/2021).
3 Regulation (EU) No 603/2013 of the European Parliament and of the Council of 26 June 2013 on the establishment of 'Eurodac' for the comparison of fingerprints.
Floridi (2012) explains that there is an ongoing and symbiotic relationship between the ontological self (what individuals truly are) and the epistemological self (what individuals think about themselves). Artificial intelligence cannot grasp such a diachronic relationship in its entirety because these technologies only capture a static moment. In addition, AI algorithms are not able to investigate the inner core of human identity, the self in the terms of the above-cited Riley. Thus, these identification operations address features related to external appearances (fingerprints, facial features, voice, etc.) and do not consider inner elements like religious beliefs or political ideas. In other words, artificial intelligence may not be capable of formulating a complete and exhaustive picture of identity.
However, AI-driven operations may have a tremendous impact on identity development. The deployment of specific technologies can have a shaping role in their surrounding working context (Kloppenburg and Van der Ploeg 2018). AI-driven devices produce their outcomes from the elaboration of already collected data. As far as identification procedures are concerned, such algorithms put individuals into specific categories according to the similarity of their data (Krupiy 2021a, b). Thus, AI-driven identification procedures produce new identities mediated by information relating to numerous individuals. In other words, data referring to people already identified by the AI device shape subsequent identification operations. This working routine could lead to discriminatory outcomes and exploit existing power relationships. The next part of the chapter addresses these issues to understand the risks for identity development, with a focus on human rights implications.
4.1 Artificial Intelligence and the Principle of Non-discrimination in the Context of Identification Operations

The principle of non-discrimination is a cardinal norm of international human rights law. Several international instruments (e.g. art. 2 Universal Declaration of Human Rights, UDHR; art. 21 EU Charter of Fundamental Rights, ECFR) prohibit the discrimination of individuals based on factors like gender, race or nationality. However, it is necessary to reflect on whether the current legal safeguards for non-discrimination are adequate to address the issues of the artificial intelligence era.
The technical functioning of AI algorithms raises several issues that could impact people's lives. The term black box barrier indicates the inherent opacity and lack of transparency of AI-driven software (Bathaee 2018). Human observers are unable to understand why a given stimulus (input) leads to a specific algorithmic response (output). In other words, the internal behaviour of the code remains unknown and incomprehensible to humans. As far as identification operations deploying artificial intelligence systems are concerned, the black box barrier
prevents individuals involved in such procedures from understanding the mechanisms that shape their identities.
The black box issue can also be a problem for the allocation of responsibilities. Algorithmic results may be a crucial element in guiding decision-making processes. The undetectability of the logical rationale behind an algorithmic output makes the decision based on that output equally incomprehensible. Thus, it would be impossible to understand the reasons behind administrative decisions (e.g. granting a visa). As far as the migratory context is concerned, the black box barrier would prevent outside observers from identifying the reasons for categorising a migrant as a possible threat.
The black box barrier also prevents examination of the nature and correctness of the datasets processed by the algorithms. It is therefore difficult to detect any bias in this information. AI-driven software will produce results that amplify and propagate any biases embedded in the datasets. Introna and Nissenbaum tried to open the black box of biometric recognition software and concluded that its functioning depends crucially on which images the programmers use to train the software (Introna and Nissenbaum 2010). Prejudices and beliefs may guide the collection of such data and thus influence the functioning of AI-driven identification procedures (Magnet 2011).
Algorithmic outcomes addressing identity issues could influence how people perceive themselves (Krupiy 2021a, b). Artificial intelligence technologies operate according to embedded values and norms (Akrich 1992), so if a person does not fit within these principles, the AI mathematical model labels him or her as 'wrong'. Thus, individuals will presumably be pushed to think that the signalled features are not right and will act according to these perceptions (Johns and Fourcade 2020). The black box barrier may prevent external observers from detecting such embedded values. AI-driven systems deployed in identification and recognition procedures operate through the categorisation of people according to the similarity of their data. Thus, AI-based mathematical models immediately recognise and flag individuals who do not fit the statistical majority. This operation risks exacerbating already existing inequalities, as the further a given identity departs from the benchmarks, the more 'wrong' AI-driven systems label it (Krupiy 2021a, b).
It is, therefore, necessary to understand who decides the parameters and guidelines that determine the functioning of artificial intelligence algorithms, and according to which criteria. The design and implementation of AI-driven identification procedures follow the dominant values and principles of the surrounding society. In this regard, Philips explains that algorithms may face difficulties in detecting differences between people of ethnicities other than 'their own' (Philips et al. 2011). In other words, the geographical location where the algorithm is developed may affect its ability to recognise faces from different ethnic backgrounds. Empirical analyses (Philips et al. 2011) show that algorithms developed in a Western context are more likely to identify people from such geographical locations. Likewise, software from the Far East can more easily recognise individuals coming from there. Pugliese explains how the technological instruments deployed to collect data for algorithms may encompass biases and prejudicial attitudes. He gives the example of
biometric scanning, where, according to him, the cameras used to record training images may perform better on white-skinned people (Pugliese 2010).
The first part of this chapter explains how identity development addresses a specific frame of reference. In other words, identity develops in a way that conforms to values and categories that may change according to circumstances. Artificial intelligence algorithms play a crucial role in identifying what falls within this framework. The operation of AI algorithms risks normalising the differences between subjects and flattening individuals to fit them into pre-elaborated statistical categories (Krupiy 2021a, b). Artificial intelligence can face severe difficulties in recognising identity features and differences between persons. More specifically, AI-driven software may not be able to address the many nuances and circumstances of reality. AI algorithms produce their outcomes through mathematical models and by putting data into statistical categories. As far as the gender dimension is concerned, this working approach may penalise individuals who do not perceive themselves as belonging to traditional gender groups. Likewise, people of mixed ethnic origins may face similar challenges (Krupiy 2021a, b). Thus, the framework of reference (factors like geographical origin or the socio-cultural context) has an impact on the algorithmic working routine (Klare et al. 2012).
Artificial intelligence technologies are not inherently neutral; their impact depends on their design and implementation in the surrounding environment. Algorithms may therefore show a tendency towards certain characteristics and favour specific values (Maguire 2012). More specifically, this attitude can foster the exclusion of individuals belonging to minorities from the social context. Biased algorithmic results confirm the social order encompassed by the collected and processed data (Kloppenburg and Van der Ploeg 2018), paving the way for self-reinforcing discriminatory prejudices (Bechmann 2019). As explained before, algorithmic outputs inevitably influence the process of personal identity formation, how individuals perceive themselves and their possibilities for self-fulfilment. People who do not conform to the statistical majority may not have access to further social and economic opportunities because the algorithm will have identified them as 'wrong'. In this regard, Niemann's studies show that people identified and judged by algorithms as deviating from statistical models may experience low self-esteem and a lack of recognition of their human dignity (Niemann 2012). Thus, the collection of personal data is not a neutral exercise but an activity with social and political repercussions. It is, therefore, necessary to analyse how the work of artificial intelligence in the field of personal identification may affect the privacy of the persons involved.
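To make the categorisation-by-similarity mechanism discussed in this subsection concrete, the following minimal Python sketch scores individuals by their distance from the centre of an already-collected dataset, so that whoever is under-represented in that data is the most likely to be flagged as anomalous. The two-dimensional features, the group sizes and the cut-off are invented purely for illustration and do not correspond to any deployed identification system.

import numpy as np

rng = np.random.default_rng(0)

# A reference dataset dominated by one group (90 samples) with a small minority (10).
majority = rng.normal(loc=0.0, scale=1.0, size=(90, 2))
minority = rng.normal(loc=3.0, scale=1.0, size=(10, 2))
enrolled = np.vstack([majority, minority])

# The "benchmark" is shaped almost entirely by the majority group.
centroid = enrolled.mean(axis=0)
distances = np.linalg.norm(enrolled - centroid, axis=1)
cutoff = np.quantile(distances, 0.90)  # flag the 10% furthest from the benchmark

def flagged_as_deviant(person: np.ndarray) -> bool:
    return float(np.linalg.norm(person - centroid)) > cutoff

# A typical member of the minority group is far more likely to exceed the cut-off
# than a typical member of the majority, although nothing about the person is
# "wrong": only the composition of the reference data differs.
print(flagged_as_deviant(np.array([3.0, 3.0])))  # usually True
print(flagged_as_deviant(np.array([0.0, 0.0])))  # usually False

Nothing in this sketch is discriminatory in intent; the disparity arises entirely from who is, and who is not, well represented in the data the system was built on, which is the dynamic the human rights analysis above is concerned with.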
4.2 Artificial Intelligence, Privacy Rights and Identity Issues

The right to privacy is an essential component of personal freedom and autonomy (Westin 1968). It protects the development of the human being from external interference. International human rights law affirms the fundamental importance of the right to privacy and its influence in several contexts of everyday life (art. 12 UDHR, art. 8 ECFR, art. 8 European Convention on Human Rights – ECHR). More specifically, data protection is an issue at stake in the digital era, when technologies make it possible to collect and rapidly process enormous amounts of data about individuals. The General Data Protection Regulation4 (hereinafter GDPR) lists a series of data protection principles (art. 5 GDPR). However, the implementation of some of these norms in the artificial intelligence context may be problematic. The black box barrier may prevent the application of the transparency principle. In other words, it may not be technically possible to understand how engineers design algorithms and the datasets they rely on. The implementation of the principle of transparency requires that inspections of the algorithmic working routine be feasible (Goodman and Flaxman 2017). The principle of fairness requires that data processing procedures operate in a way that the persons involved can reasonably expect (Blasi Casagran 2021). The presence of biases in algorithmic reasoning can lead to unexpected and unpredictable outcomes. Algorithms require an increasing amount of new data to improve their analysis capabilities and to perform better in their diagnostic tasks. However, the principle of data minimisation states that only as much information may be processed as is necessary to achieve specific results. The purpose limitation principle provides that data collected for a specific aim should not be reused for any other goal. Nevertheless, the data contained in training datasets become part of the algorithm's knowledge and may thus serve purposes beyond those for which they were collected.
5 Concluding Remarks and Recommendations

This chapter aims to analyse the ethical implications and legal issues related to the deployment of AI tools in achieving SDG 16.9, i.e. providing every individual with a legal identity. Identification through digital means is an issue at stake in the COVID-19 era (The Economist 2020). Trusted digital ID platforms could help direct resources towards those in need. In addition, collecting data about people with COVID-19 helps scientists and physicians understand possible developments of the virus.
4 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC.
Thus, an ethical and legal analysis of the impacts of using emerging technologies for identification purposes could be helpful. This contribution explains how identity is not only a matter of official identification documents. Identity is a multiform concept that encompasses both the inherent and inner elements of individuals and how they perceive themselves within their social groups of reference. Identity development is an ongoing process that evolves and mutates according to the surrounding environment. However, AI-driven identification procedures can capture only a static moment of this ongoing process. In addition, issues like algorithmic bias could produce discriminatory outcomes and construct inaccurate identities.
This chapter argues that artificial intelligence could represent a valid instrument for accomplishing SDG 16.9, but lawmakers and regulators should provide appropriate human rights safeguards. The analysis of the deployment of AI algorithms in the migration context for identification purposes highlights the potential risks involved in the use of such technologies. The recently released proposal for an Artificial Intelligence Act5 classifies AI systems used for identification tasks as high-risk. Thus, it provides for additional legal requirements to protect individuals from the adverse effects of these technologies. This chapter argues for the importance of human oversight mechanisms to supervise the functioning of AI-driven identification procedures. More specifically, it suggests that independent authorities should periodically recalibrate the algorithms to take the best possible account of the external variables mentioned above. In addition, this chapter calls for increased transparency about the design and implementation of algorithmic datasets. It suggests that experts (e.g. sociologists, anthropologists) from the environment where the algorithms will operate should supervise the collection of the information that will make up the datasets.
References

Akrich, M. 1992. The Description of Technical Objects. In Shaping Technology/Building Society: Studies in Sociotechnical Change, ed. W. Bijker and J. Law, 205–224. Cambridge: The MIT Press.
Al Tamimi, Y. 2018. Human Rights and the Excess of Identity: A Legal and Theoretical Inquiry into the Notion of Identity in Strasbourg Case Law. Social & Legal Studies 27 (3): 283–298.
Bansak, K., et al. 2018. Improving Refugee Integration Through Data-Driven Algorithmic Assignment. Science 359 (6373): 325.
Bathaee, Y. 2018. The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law and Technology 31: 889.
Bechmann, A. 2019. Data as Humans. In Human Rights in the Age of Platforms, ed. R.F. Jorgensen, 88. Cambridge.
Blasi Casagran, C. 2021. Fundamental Rights Implications of Interconnecting Migration and Policing Databases in the EU. Human Rights Law Review 21: 433.
5 Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS, COM/2021/206 final.
Carammia, M., and J.C. Dumont. 2018. Can We Anticipate Future Migration Flows? 16 May 2018. https://www.oecd.org/els/mig/migration-policy-debate-16.pdf. Last accessed on 10 Dec 2021.
Cheng, C.Y. 1998. Transforming Confucian Virtues into Human Rights. In Confucianism and Human Rights, ed. W.T. De Bary and W. Tu, 154. New York: Columbia University Press.
Du Gay, P. 1996. Who Needs Identity. In Questions of Cultural Identity, ed. P. Du Gay and S. Hall, 1–17. London: SAGE.
Espin-Leòn, A., et al. 2020. Quantification of Cultural Identity Through Artificial Intelligence: A Case Study on the Waorani Amazonian Ethnicity. Soft Computing 24: 11045–11057.
Floridi, L. 2012. Technologies of the Self. Philosophy & Technology 25: 271–273.
Gelb, A., and J. Clark. 2013. Identification for Development: The Biometrics Revolution. Center for Global Development Working Paper 315, January 2013. https://www.cgdev.org/publication/identification-development-biometrics-revolution-working-paper-315. Last accessed on 15 Nov 2021.
Glaser, A. 2017. ICE Wants to Use Predictive Policing for Its "Extreme Vetting" Program, 8 August 2017, Slate. https://slate.com/technology/2017/08/ice-wants-to-use-predictive-policing-tech-for-extreme-vetting.html. Last accessed on 10 Dec 2021.
Global ID4D Dataset. 2021. https://id4d.worldbank.org/global-dataset. Last accessed on 16 Nov 2021.
Goodman, B., and S. Flaxman. 2017. European Union Regulations on Algorithmic Decision-Making and a Right to Explanation. AI Magazine 38 (3): 50–57. https://arxiv.org/abs/1606.08813. Last accessed on 23 Dec 2021.
Grossberg, L. 1996. Identity and Cultural Studies: Is That All There Is? In Questions of Cultural Identity, ed. P. Du Gay and S. Hall, 90. London: SAGE.
Hall, S. 1990. Cultural Identity and Diaspora. In Identity: Community, Culture and Difference, ed. J. Rutherford, 222–237. London: Lawrence & Wishart.
Ho, D.Y.F. 1995. Selfhood and Identity in Confucianism, Taoism, Buddhism and Hinduism: Contrasts with the West. Journal for the Theory of Social Behaviour 25 (2): 115.
ID4D Annual Report. 2020. https://documents1.worldbank.org/curated/en/625371611951876490/pdf/Identification-for-Development-ID4D-2020-Annual-Report.pdf. Last accessed on 16 Nov 2021.
Introna, L., and H. Nissenbaum. 2010. Facial Recognition Technology: A Survey of Policy and Implementation Issues, 4. New York University.
Jain, A.K., et al. 2011. Introduction to Biometrics. New York: Springer.
Jenkins, R. 1997. Rethinking Ethnicity: Arguments and Explorations. London: SAGE.
Johns, F., and M. Fourcade. 2020. Loops, Ladders and Links: The Recursivity of Social and Machine Learning. Theory and Society 49: 803–832.
Johnson, F. 1985. The Western Concept of Self. In Culture and Self: Asian and Western Perspectives, ed. A.J. Marsella, G. Devos, and F.L.K. Hsu, 91. New York: Tavistock.
Kamwangamalu, M.N. 1999. Ubuntu in South Africa: A Sociolinguistic Perspective to a Pan-African Concept. Critical Arts: South-North Cultural and Media Studies 2: 24.
Karenga, M. 1999. Sources of Self in Ancient Egyptian Autobiographies: A Kawaida Articulation. In Black American Intellectualism and Culture: A Social Study of African American Social and Political Thought, ed. J.L. Conyers, 37. Stanford.
Klare, B., et al. 2012. Face Recognition Performance: Role of Demographic Information. IEEE Transactions on Information Forensics and Security 7 (6): 1789–1801.
Kloppenburg, S., and I. Van der Ploeg. 2018. Securing Identities: Biometrics Technologies and the Enactment of Human Bodily Differences. Science as Culture 29 (1): 57–76.
Krupiy, T. 2021a. Understanding Digital Discrimination: Analysing Marshall McLuhan's Work Through a Human Rights Lens, 1 April 2021. https://newexplorations.net/understanding-digital-discrimination-analysing-marshall-mcluhans-work-through-a-human-rights-lens-2/. Last accessed on 17 Dec 2021.
———. 2021b. Why the Proposed Artificial Intelligence Regulation Does Not Deliver on the Promise to Protect Individuals from Harm, 23 July 2021. https://europeanlawblog.eu/2021/07/23/why-the-proposed-artificial-intelligence-regulation-does-not-deliver-on-the-promise-to-protect-individuals-from-harm/. Last accessed 17 Dec 2021.
La Fontaine, J.S. 1985. Person and Individual: Some Anthropological Reflections. In Culture and Self: Asian and Western Perspectives, ed. A.J. Marsella, G. De Vos, and F.L.K. Hsu, 189. New York: Tavistock.
Laclau, E. 1990. New Reflections on the Revolution of Our Time, 90. London: Verso.
Leacock, S. 1954. The Ethnological Theory of Marcel Mauss. American Anthropologist New Series 56 (1): 58–73.
Magnet, S. 2011. When Biometrics Fail: Gender, Race and the Technology of Identity, 50. Durham: Duke University Press.
Maguire, M. 2012. Biopower, Racialization and New Security Technology. Social Identities: Journal for the Study of Race, Nation and Culture 18 (5): 593–607.
Molnar, P., and L. Gill. 2018. Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada's Immigration and Refugee System. https://tspace.library.utoronto.ca/bitstream/1807/94802/1/IHRP-Automated-Systems-Report-Web-V2.pdf. Last accessed on 9 Dec 2021.
Mutanen, A. 2007. Deliberation – Action – Responsibility: Philosophical Aspects of Professions and Soldierships. In Ethical Education in the Military: What, How and Why in the 21st Century?, ed. J. Toiskallio, 124–150. Helsinki: ACEI.
———. 2010. About the Notion of Identity. LIMES 3 (1): 28–38.
Niemann, Y.F. 2012. The Making of a Token: A Case Study of Stereotype Threat, Stigma, Racism and Tokenism in Academia. In Presumed Incompetent: The Intersections of Race and Class for Women in Academia, ed. G.G. Muhs et al., 336–355. Utah State University Press.
Philips, P.J., et al. 2011. An Other-Race Effect for Face Recognition Algorithms. ACM Transactions on Applied Perception 8 (2): 14.
Pugliese, J. 2010. Biometrics: Bodies, Technologies, Biopolitics, 57. New York: Routledge.
Redman, P. 2000. Introduction. In Identity: A Reader, ed. P. Du Gay, J. Evans, and P. Redman, 9–14. London: SAGE.
Riley, P. 2003. Self Access as Access to Self: Cultural Variations in the Notions of Self and Personhood. In Learner Autonomy Across Cultures, ed. D. Palfreyman and R.C. Smith, 92–109. London: Palgrave Macmillan.
Root, B. 2018. US Immigration Officials Pull Plug on High-Tech Extreme Vetting, 28 May 2018. https://www.hrw.org/news/2018/05/18/us-immigration-officials-pull-plug-high-tech-extreme-vetting. Last accessed on 10 Dec 2021.
Solomon, R.C. 1994. Recapturing Personal Identity. In Self as Body in Asian Theory and Practice, ed. T.P. Kasulis, R.T. Aimes, and W. Dissanayake, 7. Albany: State University of New York Press.
Solon, O. 2018. Surveillance Society: Has Technology at the US-Mexico Border Gone Too Far?, 13 June 2018, The Guardian. https://www.theguardian.com/technology/2018/jun/13/mexico-us-border-wall-surveillance-artificial-intelligence-technology. Last accessed on 10 Dec 2021.
Tangermann, J. 2017. Documenting and Establishing Identity in the Migration Process: Challenges and Practices in the German Context, 27 September 2017. https://www.bamf.de/SharedDocs/Anlagen/EN/EMN/Studien/wp76-emn-identitaetssicherung-feststellung.html?nn=282388. Last accessed on 23 Dec 2021.
The Economist. 2020. Covid-19 Spurs National Plans to Give Citizens Digital Identities, 7 December 2020. https://www.economist.com/international/2020/12/07/covid-19-spurs-national-plans-to-give-citizens-digital-identities. Last accessed on 23 Dec 2021.
Warren, S.D., and L.D. Brandeis. 1890. The Right to Privacy. Harvard Law Review 4: 193.
Westin, A.F. 1968. Privacy and Freedom. Washington and Lee Law Review 25 (1): 166.
Socially Good AI Contributions for the Implementation of Sustainable Development in Mountain Communities Through an Inclusive Student-Engaged Learning Model

Tyler Lance Jaynes, Baktybek Abdrisaev, and Linda MacDonald Glenn

Abstract AI is increasingly becoming based upon Internet-dependent systems to handle the massive amounts of data it requires to function effectively, regardless of the availability of stable Internet connectivity in every affected community. As such, sustainable development (SD) for rural and mountain communities will require more than just equitable access to a broadband Internet connection. It must also include a thorough means whereby to ensure that affected communities gain the education and tools necessary to engage inclusively with new technological advances, whether they be focused on machine learning algorithms or community infrastructure, as they will be increasingly dependent upon the automational capabilities of AI. In this essay, an exploration will be conducted into the means whereby student-engaged learning (SEL) can effectively be utilized to provide targeted, inclusive education to rural and mountain communities regarding the implications of AI for SD.
T. L. Jaynes (*) Alden March Bioethics Institute at Albany Medical College, Albany, NY, USA Department of Philosophy & Humanities, College of Humanities and Social Sciences, Utah Valley University, Orem, UT, USA e-mail: [email protected] B. Abdrisaev Department of History and Political Science, College of Humanities and Social Sciences, Utah Valley University, Orem, UT, USA e-mail: [email protected] L. M. Glenn Alden March Bioethics Institute at Albany Medical College, Albany, NY, USA Center for Applied Values and Ethics in Advanced Technologies (CAVEAT), Crown College, University of California Santa Cruz, Santa Cruz, CA, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_15
Keywords Artificial intelligence · Bioethics · Inclusive student-engaged learning · Mountain communities · Non-traditional students · Sustainable development goals
1 Introduction

"Earth and sky, woods and fields, lakes and rivers, the mountain and the sea, are excellent schoolmasters, and teach some of us more than we can ever learn from books" (Lubbock 1894, 70). "For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled" (Feynman 1988, 237). These two quotes encapsulate the spirit of this collaborative book, in the opinion of these authors, and emphasize the importance of a holistic perspective which recognizes that humanity is part of a larger interconnected system that creates and sustains our civic obligations to one another. Cognizance of this integration requires an incorporation of the natural landscape into our considerations for the development and use of new technologies, even if its recognition may seem trivial in the grander scheme of things. Socially good values should, therefore, include the environments wherein communities reside and the history that is attached to those immortal and evolving vistas that define their landscape. So too, then, are considerations for our natural world vital to a broader conversation on the means whereby artificial intelligence (AI) can play a role in the global attainment of the United Nations (UN) 2030 Agenda for Sustainable Development (hereafter "the 2030 Agenda"), as reflected in the 17 UN Sustainable Development Goals (SDGs) described in their 2015 resolution (UN General Assembly [UN GA] 2015).
In this chapter, we contend that community-based education on and with AI positively impacts the ability of mountain communities, as a population uniquely adapted to harsh natural conditions (as defined by high elevations and microclimate generation), to attain the 2030 Agenda's Goals. We will defend this stance with the use case of inclusive educational programs involving representatives of mountain communities—a subset of rural communities, as is generally understood—and how their success has led to a more robust response to the 2030 Agenda at home and abroad. Programs, when implemented under frameworks similar to those discussed herein, create learning conditions that, in the eyes of these authors, satisfy the ethical requirements lauded by researchers internationally for socially good AI (Reidl 2019; Shneiderman 2020; Li 2021). They furthermore ensure that the outcomes of engaged and inclusive student learning, specific to the practical implementation of IoT and AI usage and development, are based on human-centered and socially good principles.
As a note, the implementation of similar programs will require more than the assurance of equitable and stable access to a broadband Internet connection. Any replicated effort must also ensure that affected communities will inclusively engage with new technological advances through effective and affordable education and resources. These stipulations are necessary to reiterate because of the volatile nature of AI development, which will inevitably result in increased communal dependence
on advanced systems’ autonomous capabilities and a greater range of “smart” and interoperable devices (Jaynes 2021a, b, d) by communities that have greater familiarity and access to the tools driving these innovations.
2 History and Background

As other contributors to this volume argue, AI is increasingly becoming an area of focus for effective goal attainment in the 2030 Agenda because of the efficiency that results from its usage. Since AI is dependent upon the Internet of Things (IoT) to handle the massive amounts of data it requires to function effectively, one of the major ethical issues that arise for rural and mountain communities is reliable Internet access. Urban communities with fairly stable connectivity to IoT are thriving, but those who inhabit regions which are predominantly rural worldwide are being left behind (Durish 2020). In addition to a lack of access and connectivity instability (Su 2020, 58–59), the populations in many of these areas have yet to gain a basic understanding of how IoT is so drastically changing workflows and information distribution, among other topics, due to a lack of connections and training (Durish 2020; France-Presse 2021). If better-connected communities continue to neglect these populations, and the reported 37% of the human population that has never used the Internet (France-Presse 2021)—either deliberately or unintentionally—existing inequities and inequalities will only continue to expand exponentially as AI and IoT-based technologies gain in ability and sophistication.
A lack of awareness is significant because of the divides that arise from economic underdevelopment, unstable or unreliable access to IoT, and a dearth of proper education between areas with access and those with limited or no access. One can hardly be expected to attain an understanding of how a tool works if one's access to that tool is restricted or wholly out of reach because of the natural features that make up one's place of residence. The digital divide is not growing simply because of Internet connectivity issues—many communities (mountainous and rural) and their residents do not, or cannot, have access to the information technology (IT) and information system (IS) architectures that maintain AI's effectiveness (Bissell 2004; Brescia and Daily 2007; Pick et al. 2015; Bürgin and Mayer 2020; Iqbal et al. 2021). IT and IS have been able to contribute to economic development in mountainous areas through telemedicine, distance education, tourism promotion, and targeted marketing of local products when the architectures and infrastructures are provided (Brescia and Daily 2007; Price 2013). Yet the advance of modern communication technologies—including AI and the IT and IS supporting it—into the most remote parts of the mountainous world deepens the alienation of the local communities present there from the national polity, in part because assumptions are made regarding the "ease" whereby IoT operating knowledge is acquired (Starr 2004; Bürgin and Mayer 2020; Iqbal et al. 2021).
Other factors play into the struggle to grant IoT access to every member of the human species, such as naturally occurring dead zones in mountain ranges and
deserts that persist even with targeted cell tower installation, but none are as complex as the fundamental understanding of safe IoT usage and the rights held by individuals utilizing IoT-based and AI services. The reality remains that individuals are often at the mercy of corporations that frequently self-determine what these rights may be (as can be publicly seen in the lawsuits being levied against Google and its parent company Alphabet, Apple, and the corporation formerly known as Facebook). Therefore, IoT and AI use based on widely accepted principles of "social good" for communities struggling to attain stable access presents an important priority for the implementation of the 2030 Agenda. Marr (2021) outlines the main requirements for an ethical application of AI within any institution as raising awareness through education, transparency, inclusiveness, and following established rules (to name a few), though similar statements have been iterated elsewhere (Reidl 2019; Shneiderman 2020; Li 2021).
3 Why Focus on Sustainable Development in Mountains?

Where mountainous communities are struggling to meet the Goals set out by the 2030 Agenda because of the unique circumstances generated by the land and natural conditions they live within (UN GA 2019), a targeted focus on these populations is absolutely necessary. As stated in the UN Secretary-General's report to the General Assembly of July 22, 2019, approximately 27% of the world's landmass is made up of mountainous regions, and 14% of the human population resides in these areas. Furthermore, the report states that:
…mountains are key ecosystems that provide humanity with essential goods and services such as water, food, biodiversity and energy. However, mountain ecosystems are vulnerable to natural disasters, climate-related events and unsustainable resource use…Identifying new and sustainable livelihood opportunities and adopting practices that build the resilience of people and environments in mountain areas is an urgent requirement for achieving the Sustainable Development Goals. (UN GA 2019, 1)
A warming climate has dramatic impacts on regional water ecosystems, even if they are not actively perceived, due to changes in regional atmospheric moisture capture and gradual adjustments to regional and international air currents. These gradual—albeit accelerated—changes result from the raised ambient temperature of natural features and systems (e.g., canyons, forests, lakes, seas, valley basins), or the impact of wind-channeling structures in flatland areas (e.g., dams, roadside windbreaks, sea walls, wind turbines, skyscrapers). These factors are leading to decreases in terminal water body size, drastic changes in water body nutrient density that have a chain effect on local biospheres, and fluctuations in the soil's ability to retain water—which has the compound effect of increasing the damage caused by landslides, impairing the ability of biomass to resist burning via growth in dead biomass and loss of natural defense mechanisms, and preventing rainfall from being fed into local water tables to supplement local vegetation (Wagner 2007; Suzuki 2011; Baxter and Butler 2020; Chen et al. 2020; Jara et al. 2021).
Beyond biosphere concerns, which include the reality that higher elevations experience warming at different rates than lower-elevation areas (Wilkins et al. 2021), there are related concerns that a loss of terminal lake volume will contribute to declines in the health of populations at the receiving end of dust storms that pass over dry lake beds and riverbeds (Baxter and Butler 2020; Romero 2021). This, of course, includes the impacts local ecosystems will face from the lack of moisture provided by these terminal water bodies, impacts that may be highly region-specific, as is the case for the Great Salt Lake and the Aral Sea—among others—and that directly affect, in a myriad of ways, all communities that source their water from the tributaries feeding these terminal water bodies. Advances in climate monitoring via AI would greatly aid local communities relying on the streams, tributaries, and rivers feeding these terminal lake bodies in their efforts to allocate water rights and conserve water usage while balancing the needs of tribal populations, "immigrant" populations, and the agriculture that sustains their economies, but many projects emphasize the needs of metropolitan areas or non-mountainous rural locales which are variably impoverished (Chien et al. 2012; Thapa and Sæbø 2014; Pick et al. 2015; Kumagai 2020). As a result, mountain communities worldwide experience inordinate challenges with implementations of the SDGs. A recent study published by the Food and Agriculture Organization of the United Nations (FAO) adds further evidence to this claim. In mountainous regions of developing countries, issues of food insecurity, social isolation, environmental degradation, exposure to the risk of disasters and to the impacts of climate change, and limited access to basic services—especially in rural areas—are still prevalent and, under some circumstances, are increasing (Romeo et al. 2020).
3.1 Mountain-Focused IT and IS Specialization Initiatives

Globally, many mountain communities have been successfully bringing wealth into their locales through targeted specialization in IT and IS sectors—Silicon Valley being the primary example of this phenomenon in the USA. Other major technology centers worldwide can be found in the mountainous communities of Auckland, Bangalore (Bengaluru), Bogotá, Cape Town (Kaapstad, iKapa), Dublin, Kigali, Kuala Lumpur, Madrid, Mexico City, Munich (München), Nairobi, Salt Lake City, Santiago, São Paulo, Sydney, Taipei, Tokyo (Tōkyō-to, including Tama-chihō, Izu-shotō, and Ogasawara-shichō), and Vancouver (Giuliani and Ajadi 2019; Leskin 2019; López 2020).1 Notwithstanding that cities in the Caribbean only start appearing in lower ranks on account of data gaps,2 the trend remains that "tech-friendly" environments are primarily found in those areas with greater economic investment either towards direct start-up development, foreign-worker relocation, or literacy training for employees residing beyond a corporation's national borders (Chien et al. 2012; Thapa and Sæbø 2014; Pick et al. 2015; Kumagai 2020). The concern here is that many of these hubs remain in the 136 nations (per the UN's list of recognized nations) that have yet to adopt governance frameworks or principles to handle AI. As of May 11, 2021, 32 countries and the EU have established initiatives to govern, legislate, and research means to responsibly handle the development, implementation, use, and termination of AI systems per the Organisation for Economic Co-operation and Development (OECD) AI initiative (2021).3 This then implies that those nations which are still developing plans, or organizing funds to sponsor development initiatives, will inevitably have to ensure that their policies fall in line with those that are already established by nations that, by some accounts, are "preemptive." Realistically, these early-adopting nations are wealthy enough to invest in AI research that continues to push the "state-of-the-art" forward, and therefore force a baseline to be set ahead of international collaboration efforts that can be discussed or pursued. Assuming that this comes to pass, as it has with other related initiatives to govern the use of new technologies, there is a nonzero chance that neo-colonialist mentalities will vie for supremacy with de-colonialist frameworks that have been adopted by various nations through governmental reforms over the past century—thereby generating a hostile environment that makes international standardization efforts nigh impossible to pursue and transnationally compliant, socially good AI an unattainable service.
1 Data also collected from https://www.startupblink.com/ (accessed November 29, 2021) with results selected from the top 300 cities to provide a more international framing of "technology start-up-friendly" environments in mountainous regions. Respective rankings were as follows on the access date: Bangalore (10), Tokyo (15), São Paulo (20), Sydney (36), Munich (38), Taipei (41), Vancouver (42), Madrid (45), Mexico City (50), Dublin (51), Salt Lake City (55), Santiago (70), Bogotá (77), Kuala Lumpur (80), Auckland (105), Nairobi (136), Cape Town (145), and Kigali (265).
2 San Juan, Puerto Rico (347), is the first example, followed by Kingston, Jamaica (685), and Montego Bay, Jamaica (958), per the above site rankings.
3 EU member states were not counted twice, though many have chartered independent actions to regulate AI before the EU Parliament's actions to develop a unified framework in April 2021. See https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206
The recent motion by the UN Educational, Scientific and Cultural Organization (UNESCO) to adopt the "first global ethical framework for the use of [AI]" (Gaubert 2021) is a positive step forward to gain global consensus on how this group of technologies should be managed (UNESCO 2021). Yet similar issues exist in that UNESCO is not a body with universal legal authority. That is not to say that their recommendations will go unheard by the international community, but that a universal adoption of the draft recommendations will be difficult to implement for those nations that struggle to keep up with the myriad of ways AI is evolving. These uses include a great deal more than traditional data mining—which is easier to adapt towards for those in the IT and IS industries currently—and will likely include the use of AI in extended reality technologies that support the Metaverse (Jaynes 2021a, d), assistive bionic prosthetics that may challenge our current notions of legal
personhood and citizenship (Glenn 2018; Jaynes 2021b, c), and other "advanced" applications related to high technology. These will not only challenge our interpretation of what "ablement" entails for labor purposes, but also what fundamental rights should be extended to already able-bodied individuals and the limits of equality and inclusiveness (Glenn 2012; Jaynes 2021b, c). While not presently a great focus of the international community on account of the supporting infrastructural needs of these other applications, it cannot be denied that considerations of this nature are part of a socially good application of AI and should therefore be examined as these infrastructural needs are developed and deployed.
Furthermore, it should be stated that many mountainous communities are often ill-equipped to train AI engineers in environments similar to those that they will be exposed to in the workplace. High-tech start-ups are free to structure their work environments as their budgets and office space allow because of how new their institution is relative to the community they may inhabit. Universities, on the other hand, commonly have to retrofit buildings that they have occupied for decades on budgets that are much more limited, or are otherwise constrained by local building code restrictions that did not account for accelerated advances in communications technologies. Part of this is the direct result of the difficulties in drilling for fiber-optic connections in mountainous and island regions (Canevaro 2018; Engel-Smith 2021) and naturally occurring cell phone and Wi-Fi dead zones, but it is exacerbated by the fact that communities living on tectonic fault lines are under threat from seismic and volcanic activity (alongside other potential issues like hurricanes and tsunamis). As such, these regions require particular consideration when discussing the development of AI regulation because they may not have technologically savvy populations that can articulate the needs of their communities.
3.2 Contributions of Education to the Implementation of the Agenda Based on Socially Good Principles
The State of Utah, along with 17 states in the USA, recently adopted legislation considering benefits and challenges of AI. What distinguished Utah S.B. 96 from those adopted in other states is that it "creates a deep technology talent initiative within higher education" (Utah State Legislature 2020). Although the University of Utah was able to serve as one of the earliest nodes for public Internet services in the USA (Tanner 2021), the recent push to promote the Silicon Slopes initiative (Pagano 2017; Campbell 2018; Clark 2020) has rapidly exposed the inability of local universities to keep up with the demand for jobs that handle AI and socially good AI analysis (O'Toole 2021). In truth, many Utah campuses have been expanding in the past decade like many others across the nation. Yet the historical trend of Utah being a "labor export" state has resulted in an educational environment where expansions have been restricted to professions popular in other parts of the country, sports (to maintain PAC-12 status), or medicine (specifically expansions of Intermountain Healthcare-related facilities), while industries that have less of an impact on Utah's economy went unaddressed even as they saw rapid expansion (Campbell 2018; Tanner 2021). The goal of reinvigorating AI-related education is not limited to the University of Utah or Utah Valley University (UVU). It extends to other schools serving mountainous communities, such as the University of California - Santa Cruz, with its efforts to establish the Center for Applied Values and Ethics in Advanced Technologies (CAVEAT), and the Kyrgyz School of Data (among hundreds of similar initiatives). The challenge often remains, however, in being able to employ these newly trained workers in local communities when non-mountainous cities or nations develop favorable policies or work environments that cannot be adequately matched (Meisenzahl 2019; Rose 2020; Rosalsky 2021). Hence, the rationale for developing Utah S.B. 96 was to create a new pathway for local businesses and universities to secure emerging talent through direct-hire programs via educational training and other related projects (Utah State Legislature 2020).
The cooperation between UVU, located in Orem, Utah, and the International University of Kyrgyzstan (IUK) in Bishkek, the Kyrgyz Republic, presents an example of a joint, human-centered educational program to implement the 2030 Agenda with a focus on sustainable mountain development (SMD) based on socially good principles (Reidl 2019; Shneiderman 2020; Li 2021), which is made apparent annually through a joint implementation of the UN GA resolution "International Year of Mountains, 2002" (UN GA 2003; Price and Kohler 2013). Historically, the program arose from a 1999 partnership between developed and developing mountain communities from the State of Utah and the Kyrgyz Republic, respectively (Abdrisaev et al. 2020a, b). This partnership allowed Utahns to share with their Kyrgyz partners unique local experiences in building one of the most successful economic models in the USA. Special emphasis was placed on the role and contribution of educational institutions like UVU to that model, including through IoT use (Abdrisaev et al. 2005; Abdrisaev et al. 2011). As a next step in this direction and in implementation of the 2002 UN GA resolution recommendation, UVU joined the FAO Mountain Partnership (MP) in 2006 as the first academic institution in North America to do so (UN GA 2003; FAO MP n.d.-a). In turn, the Kyrgyz side provided their Utahn partners with their own knowledge and networking opportunities to pursue SMD at the UN—in particular by being one of the main initiators of the IYM celebration under the UN GA resolution (Price 2004, 3) and on a bilateral basis through the UVU faculty and students' involvement in the initiatives and programs of the Embassy of the Kyrgyz Republic to the USA.
For its part, UVU was established in 1941 as a trade school to serve the needs of local communities along the Wasatch Mountain range in the Rocky Mountain region. Through its dual-mission education, UVU today serves as an integrated community college and regional teaching university ("Vision 2030" 2020). Some 88% of UVU students are Utah residents (UVU Institutional Research Department 2019), and 80% of them are employed as they pursue their education, whether locally or through tele-work that keeps them in-state (Whittney 2020). In line with the trend
of the student population in the USA and Europe (Hauschildt 2015), 30% of the UVU student body is represented by non-traditional or adult students (Ho-Wisniewski 2020). This category of students is usually in the range of 25 and 75 years of age while enhancing or changing careers. The majority of them also work full or part time and may support families or relatives (Pelletier 2010; Tuminez 2020; Whittney 2020). Adult students are designated as learners who experience social or educational disadvantages and may have interests and values which differ from their traditional peers (Wyatt 2011). The joint partnership between UVU and the IUK within the FAO MP has created a means for both institutions to strengthen the socially good nature of their activities by involving faculty and students in several different ways across their respective campuses. For UVU’s part, their involvement matches the institutional mission of the school (UVU 2020) while addressing many livelihood-related aspects of the local population. By engaging with students and faculty from the IUK, the UVU community has been able to share local experiences in SMD and related policy through UN-sanctioned activities that help to distinguish the unique cultural differences that exist between the Kyrgyz and Utahn populations.
4 Goals and Targets Related to SMD
Of particular note, the 2030 Agenda designated Goal Targets 6.6 ([to]…protect and restore water-related ecosystems), 15.1 (ensure the conservation, restoration and sustainable use of terrestrial and inland freshwater ecosystems), and 15.5 (reduce the degradation of natural habitats, halt the loss of biodiversity and…protect and prevent the extinction of threatened species) for SMD within its total framework of 17 SDGs and 169 Targets (UN GA 2015). The implementation of SMD globally is coordinated by the FAO MP, which has been in operation since 2003 as a subunit of the organization (FAO MP n.d.-a), for the express purpose of ensuring that the significance mountainous regions hold for global ecosystems and sustainable living is neither neglected nor forgotten. The UN GA resolution proclaiming 2002 the International Year of Mountains (IYM) further recommends that all stakeholders worldwide interested in the promotion of SMD join the FAO MP (UN GA 2003). As a result of these targeted, coordinated efforts, the FAO MP now has more than 400 members, including intergovernmental organizations, mountain states, academic institutions, non-governmental entities, and others that do not necessarily exist in rural or mountainous regions (FAO MP n.d.-a; Abdrisaev et al. 2020a, b). Beyond the Targets designated for SMD, we cannot ignore the importance SMD holds for the attainment of other Goals and Targets within the 2030 Agenda. These include those Targets found in Goals 1 (No Poverty), 4 (Quality Education), 5 (Gender Equality), 6 (Clean Water and Sanitation), 8 (Decent Work and Economic Growth), 9 (Industry, Innovation and Infrastructure), 10 (Reduced Inequalities), 11 (Sustainable Cities and Communities), and 15 (Life on Land) and Targets 2.3, 2.4, 3.9, 7.1, 7.b, 12.2, 12.4, 12.7, 12.8, 12.b, 13.1, 13.3, 13.b, 14.1, 14.2, 14.3, and 17.6 (UN GA 2015). By ensuring that mountainous communities can participate on a more level playing field with flatlands-based urbanized and metropolitan areas through particular education with and on AI, we can enable comprehensive discourses on how best to manage regional resource collection and distribution while preventing substantial "brain drain" to areas of higher population density (Bausch et al. 2014; Khan and Somuncu 2019; Bürgin and Mayer 2020). Not only does this enable mountainous communities to gain income from jobs created within their unique livelihoods that would otherwise be sourced into other communities, but it also prevents the loss of workers skilled in agriculture, forestry, mining, and other mountain-specific industries that cannot always be found in non-mountainous regions, as well as the loss of local native population practices that can be better for sustainable living in the long term (Mukhopadhyay et al. 2020; Silversmith 2021; Spoon et al. 2021).
4.1 Inclusive Student-Engaged Learning as a Foundation for a Socially Good Implementation of SMD
Since 2011, UVU has further enhanced its involvement with the IUK, the FAO MP, and other global mountain communities by developing the student-engaged learning (SEL) model, in which students can play a major role in promoting SMD in the State of Utah and elsewhere. The SEL model is based on four principles as described by Burch (2000) under a different acronym:
1. Students are asked to study real-world problems.
2. Students investigate the presented problem as a group, in a collaborative way.
3. Teachers facilitate the students' self-learning.
4. Students are made responsible for their self-learning and implementation of the studied problem.
To ensure student involvement in SMD activities, the model has been developed as a co-curricular pedagogy. The extracurricular part was implemented through the Utah International Mountain Forum (UIMF), a coalition of student clubs, to encourage student interest and contributions to UN activities, which quite often extend over several semesters and are therefore difficult to implement through academic programs. Through the curricular part, faculty are able to contribute to the model by raising interest in SMD among students and encouraging them to become engaged with extracurricular activities on campus and in their home communities (Abdrisaev et al. 2020a, b). Clubs are important for student learning outside of the classroom, providing them with opportunities to work interdependently, in groups, through mentoring experiences led by faculty (Eccles and Barber 1999; Foubert and Urbanski 2006; Logan 2008). However, adult students are usually reluctant to be involved in any extracurricular activity, including clubs, due to their busy schedules (Dill and Henley 1998).
The UIMF, as per Wyatt (2011), allowed adult students to join any of the coalition-partnered clubs at times convenient to them. Combined with faculty advice, as per Timpson et al. (2014), interested students were then able to tie their individual experiences or interests to ongoing SMD activities locally and nationally. The adapted SEL model also encourages adult students, as mature and responsible individuals, to contribute towards projects based on their own experiences or interests, implement them as group leaders, and then enjoy the recognition of the FAO MP (Timpson et al. 2014). As a result, the majority of SMD projects implemented by the UIMF are initiated by students—many of whom represent local mountain communities. Due to the requirement for clubs to self-fund activities (UVU 2020), the model also encouraged students, including adult learners, to raise and contribute funds for initiated SMD projects through the UIMF or other related forums. Academic programs, and in particular general courses, until recently contributed to the developed model by allowing faculty, during classes, to build ties with students—especially adult learners—and then incentivize them to join the UIMF (Abdrisaev et al. 2020a, b). Students at UVU, for example, can enroll in a three-credit course, "Globalization and SMD," which is currently the only course related to the SMD agenda at the university and is taught during the spring semester. They learn theories and practices of SMD in Utah and globally, as well as skills that match their professional experiences and allow them to become club leaders who advocate for Utah practices in SMD at the UN and other institutions. Courses like this also have the benefit of allowing faculty concerned with varied aspects of SMD to contribute to the model by developing and teaching courses, which provide students with professional training on a wider range of 2030 Agenda Goal pursuits. The impact of these courses could be better focused or made more efficient by integrating them into certificates, minors, or majors on Sustainable Development (SD), alone or in tandem with other curricula internationally, but this has not been seriously considered to date. Ultimately, the adapted SEL model ensures the inclusivity of student involvement within SMD activities—which is a key principle for ethically aligned AI design and socially good AI more generally, as based on considerations for international human rights (Reidl 2019; Shneiderman 2020; Li 2021; Marr 2021; Jaynes 2021b, c). It also concurrently implements Target 4.7, which aims to "…ensure that all learners acquire the knowledge and skills needed to promote SD, including, among others, through education for SD and sustainable lifestyles, human rights, gender equality…" (UN GA 2015). It is for this reason that we contend that an emphasis on SMD issues is important to the fulfillment of the entirety of the 2030 Agenda, as the attainment of Targets 6.6, 15.1, and 15.5 alone is not enough to display the importance of sustainable infrastructure and technology development in mountainous communities.
4.2 Examples of Socially Good SMD Advocacy and IoT Use Within the Adapted SEL Model The first initiative from which UIMF started to advocate for SMD upon its founding was an observation of December 11th as the UN International Mountain Day (IMD). Since their first observation in 2010, the UIMF has observed the IMD every year. This event implements one more recommendation of the UN IYM resolution (UN GA 2003) and provides recognition from the FAO MP for its observation as a result. It has become an essential activity for the adapted SEL model as an on-campus, semester-based, UN-related activity that provides a variety of benefits based on socially good principles—especially for students and adult learners who cannot go to the UN due to time or financial constraints. Students gain via the UIMF being a part of UVU’s club network; members are also able to gain a number of other experiences with IMD observations. These include the accumulation of advocacy experiences that require extended time frames to implement (specific to the UN), developing internal and external alliances for joint activities at home and abroad, raising awareness for other IMD observations, providing a venue for FAO MP recognition to SMD contributors, and opportunities to recruit new UIMF members (Abdrisaev et al. 2020a, b). Since 2007, UVU and IUK have regularly co-hosted the “Women of the Mountains” conference (WOMC). WOMC is an international conference which serves to implement the third recommendation of the UN IYM resolution (UN GA 2003), which asked that all interested institutions support (financially or otherwise) the programs resulting from the IYM resolution. It was, and continues to be, held as a forum to follow up on the efforts resulting from the “Celebrating Mountain Women” conference hosted under the IYM umbrella in 2002 in Bhutan (Tshering 2002). The fourth WOMC was hosted independently by UIMF members educated through the SEL model under the FAO MP umbrella at the Orem UVU campus on October 7–10, 2015. More than 70 students, including those classified as non- traditional students, were involved in the preparation, invitation, and hosting of more than 120 participants for this event—including conference fundraising. These guests included diplomats, UN officials, scholars, and experts from both the USA Rocky Mountain and over 20 mountain states internationally beyond the Kyrgyz Republic. The UN highlighted the UIMF’s role in hosting this WOMC as allowing participants “…to address the critical issues faced by women and children living in mountainous regions across the globe and provide a forum to discuss gender equality” (UN GA 2019, 10). Based on experiences accumulated from IMD observations and hosting WOMC, UIMF members advocated (through the augmented SEL educational model) during various UN Economic and Social Council (ECOSOC) forums since 2016—in particular during sessions of the Commission on the Status of Women (CSW). It was an opportunity for them both to raise voices in support of women and girls from mountain communities worldwide and to report on Utah-specific experiences in building sustainable communities. Engaged UIMF members learned how building
partnerships with non-governmental organizations registered under the UN ECOSOC—such as the Russian Academy of Natural Sciences (RANS) and the Utah-China Friendship Improvement Sharing Hands Development and Cooperation—play an important role in effective advocacy at the UN. Each year, students co-host a parallel event with RANS at the UN. These have included the CSW62–65 and the High-Level Political Forum of ECOSOC for Sustainable Development in 2018. This collaboration has resulted in the augmented SEL model being recognized in various written statements from RANS (UN ECOSOC February 2018, UN ECOSOC November 2018, UN ECOSOC 2020). The UIMF advocacy campaign has always relied on the use of a number of simple, affordable, and effective IoT tools and applications which have been contributed and developed by students. Since the launch of the student-designed and maintained website of the UIMF,4 it has played a key role in displaying the effectiveness of the augmented SEL model. The website serves as a database to consolidate all relevant information of initiatives which members of the coalition have contributed to the advocacy campaign of SMD under the umbrella of the FAO MP. This includes information on roughly 350 student activities, which include student reflective essays, copies of activity agendas, task lists, posters, brochures, media links, and other such materials. Those files are often used as templates for future activities, provide institutional memory of past UIMF activities, and ensure both continuity and smooth transition of activities between semesters and the leadership of the UIMF. This contributes to the overall goal of the augmented SEL model to provide both maximum responsibility and credit to students for the implementation of SMD activities with minimum faculty involvement. In addition, posted reflective essays serve as links to FAO MP informational media outlets and other national and international websites. Since 2011, UVU and the UIMF have been recognized 82 times (or about 10 times per year) on the FAO MP and other FAO news websites and 57 times (or about 7 times per year) in the monthly FAO MP newsletter “Peak to Peak” (FAO MP n.d.-c; Abdrisaev et al. 2020a, b). Posted student essays also provide links for official and personal social media outlets highlighting contributions of particular students to UIMF activities. As a result, the UIMF site has become a type of “e-referral” several students have been able to utilize in lieu of traditional letters of recommendation for certain jobs and internship positions (Abdrisaev et al. 2020a, b). IoT is important for facilitating regular dialogue and networking between representatives of the State of Utah with counterparts in the Kyrgyz Republic (Abdrisaev et al. 2011) and elsewhere. Twelve years of IMD observations have allowed UIMF members to combine face-to-face and online joint observations with different partners. Two UIMF members with local hosts observed IMD on December 11, 2012, in Bishkek, Kyrgyzstan, during the international conference “Climate Change and Mountains” (Abdrisaev et al. 2020a, b). They also had an online conversation with the rest of the UIMF team, which observed the IMD at UVU campus in Orem, Utah
4 Found at https://www.uvu.edu/utahimf/blog/index.html
(Abdrisaev et al. 2020a, b). UIMF leaders have been invited to and have contributed to the IMD 2018 and IMD 2019 observations hosted by a group of mountain states led by the Permanent Mission of the Kyrgyz Republic to the UN. The IMD 2021 observation was hosted as a virtual event with a joint contribution from UIMF members and students from Osh Technological University in Osh, Kyrgyzstan. It served as a preparatory step for a joint visit and presentation of the Utah-Kyrgyz student delegation at the 66th session of the CSW in March 2022. Furthermore, UIMF members successfully used IoT during the campaign organized by the FAO MP in the fall of 2015 to gather 5000 signatures among the FAO MP members to include mountain-related issues in the agenda of the UN Climate Change Conference (COP21) in Paris. Students, by using IoT, collected more than 1600 signatures both at the UVU campus and from their partners at Osh Technological University, the Kyrgyz-Turkish University in Bishkek, and RANS in Moscow, Russia (Hackney 2015). Given the success of UIMF activities as influenced by the augmented SEL model, such efforts should therefore consciously incorporate IoT-based tools like AI as part of the broader academic program—including in any certificate, minor, or major that focuses on SD. This will allow for new and emerging tools developed by students or industry to further SMD advocacy and retain a socially good emphasis. Again, this recommendation is being made with respect to the pace at which AI is evolving, finding new applications, and generating new socioeconomic and sociopolitical issues that require rapid attention (Jaynes 2021a, b, c, d). Given that non-traditional and employed students can provide unique perspectives into the ways AI ought to be implemented, audited, and governed by virtue of their varied life experience, we further assert that their input would be equally invaluable in guiding AI in a socially good manner that is beneficial for SMD and the 2030 Agenda more broadly.
5 Recommendations and Conclusion
Though there are programs coming into being around the world that focus on AI ethics,5 there are a number of other issues pertinent to socially good AI beyond auditing for system bias and stakeholder interest determination which require ethical scrutiny. As we have argued throughout this chapter, they include the instruction of populations that may not even have IoT access at present—due to the natural features that make up their home landscape—or literacy in the languages used to program AI systems. Furthermore, there is the reality that SDG attainment and maintenance is not solely an environmental concern—it is every bit as much a human concern as the protection of those rights granted to us by local and national governmental institutions, and therefore pertinent for socially good considerations.
5 Such as those degrees and certificates offered by Cambridge's Leverhulme Centre for the Future of Intelligence (http://lcfi.ac.uk/master-ai-ethics/) and San Francisco State University's Lam Family College of Business (https://cob.sfsu.edu/management/certificate/ai-ethics)
As such, considerations for the sustainable implementation of AI must be prioritized as an item of curricular importance because no explicit means to incorporate AI into the SDGs is otherwise made apparent by the UN during its initial drafting of the Agenda (UN GA 2015). To this end, we offer here the suggestion that curricula internationally adapt to include majors, minors, and certificates dedicated not only to AI ethics but also to SD in the lens of high technology. While some may argue that a specialization in AI is too severe for undergraduate education, it should be remembered that not all who engage in this level of postsecondary education are traditional students. Also, there is the reality that traditional IT and IS education is increasingly incorporating AI as a result of its dependence upon the infrastructure provided by these two disciplines. And since AI systems are already being used in SD projects internationally to aid in the optimization of industries such as agriculture, finance, fishing, forestry, and mining, there is little argument that other high-technology applications may also be utilized for SD realization on a global scale. We further argue that the SEL model (whether adapted towards SD considerations or not) is an effective tool that will not lose its usefulness regardless of how higher education evolves and that it serves as a convenient system whereby socially good education can be engaged. Not only does the SEL model effectively enlist the classroom cooperation of traditional and non-traditional students, but it provides a venue for “young” and “old” alike (in body, spirit, or experience) to attach their worldviews and experiences to the material they are being taught. Furthermore, those engaged in SD advocacy can similarly utilize this productive environment to find avenues whereby they can effectively engage with SD advocates on a local (if not national or international) scale. Depending on the way in which institutions of higher learning implement the SEL model, it can also be an avenue wherein industry partners can also engage with locally educated students to secure talent and develop new generations of corporate leadership via tuition-supplemented mentorship programs or apprenticeships. Of course, our focus here has been to show the SEL model’s effectiveness for mountainous and rural populations—but that does not entail that this model is only effective for those populations. Our focus is merely the result of our concerns for how mountainous and other rural communities have unique concerns and challenges that often prevent them from being as engaged in technological adoption and development (beyond how natural landscape features are uniquely impacted by environmental challenges). Balancing the concerns of these communities is not a simple issue to address in the face of metropolitan economic disparities and mentalities that divide “developed” and “rural” areas in politics and economics. However, the same can be said for the mentalities that divide the Global North from the Global South or neo-colonialists from de-colonialists. Ultimately, socially good values impact every population of our species regardless of how we segregate ourselves—even if those values carry a different weight from one community to the next—because they are built on social mores and ethical frameworks that are in a constant state of evolution.
Nevertheless, the use of inclusive learning models like SEL will be important to ensure that AI can maintain a socially good status for SDG attainment and maintenance. Beyond aiding in the achievement of Goal 4, it will aid in the achievement of related Goals (specifically 5, 8, 10, and 17). Advocating for SMD in this context has a similar effect because considerations for mountainous and island communities are sparse throughout the SDGs and mostly limited to specific Targets within the Agenda. As such, the needs of these unique landscapes are often lost in major UN forums in favor of population areas that have greater densities or “development.” After all, socially good values cannot neglect the needs of communities that depend on more central areas of commerce and social engagement. Indeed, it is this consideration for all peoples that justifies the development of notions that are globally good for society and not just the efforts of organizations like the Association of Southeast Asian Nations (ASEAN), EU, OECD, UNESCO, or independent governmental institutions. Hence, SDG attainment and maintenance will require input not only on the way the Goals should be achieved but also on the ways in which high technology (like AI) can be effectively implemented. To that end, inclusive education that engages local communities and encourages their unique input is similarly vital as high technology evolves.
Bibliography Abdrisaev, Baktybek D., Zamira S. Djusupova, and Alexey I. Semyonov. 2005. Case Study: Kyrgyzstan’s Experience in Promoting Open Source for National ICT Development. Journal of Systemics, Cybernetics and Informatics 3 (6): 19–22. http://www.iiisci.org/journal/sci/ FullText.asp?var=&id=P848491. Abdrisaev, Baktybek, R.E. Rusty Butler, Zamira Dzhusupova, and Asylbek Aidaraliev. 2011. Contributing to Sustainable Mountain Development by Facilitating Networking and Knowledge Sharing Through ICT – Collaboration Between Rocky Mountain States and Central Asia. Journal of Systemics, Cybernetics and Informatics 9 (7): 30–39. http://www.iiisci.org/journal/ sci/FullText.asp?var=&id=SP228WP. Abdrisaev, Baktybek, R.E. Rusty Butler, and Yanko Dzhukev. 2020a. Sustainable Mountain Development Advocacy Through Student Engaged Learning by Observing International Mountain Day: The Case of Utah Valley University. Mountain Research and Development 40 (4): D31–D38. https://doi.org/10.1659/MRD-JOURNAL-D-19-00070.1. Abdrisaev, Baktybek, Rusty Butler, Lida V. Ivanitskaya, and R. Ildar. 2020b. Utyamyshev. Sustainable Mountain Development Promotion Through Education. Bulletin of the Russian Academy of Natural Sciences 2020 (2): 100–105. https://raen.info/upload/redactorfiles/maket_ vestnik_2020_02.pdf. [Абдрисаев, Бактыбек, Расти Батлер, Иваницкая Л. Владимировна, и Утямышев И. Рустамович. Вестник Российской академии естественных наук 2020, № 2 (2020): 100–105.]. Bausch, Thomas, Madeleine Koch, and Alexander Veser, eds. 2014. Coping with Demographic Change in the Alpine Regions: Actions and Strategies for Spatial and Regional Development. Heidelberg, DE: Springer. https://doi.org/10.1007/978-3-642-54681-5. Baxter, Bonnie K., and Jaimi K. Butler, eds. 2020. Great Salt Lake Biology: A Terminal Lake in a Time of Change. Cham, CH: Springer Nature Switzerland. https://doi. org/10.1007/978-3-030-40352-2.
Bissell, Therese. 2004. The Digital Divide Dilemma: Preserving Native American Culture While Increasing Access to Information Technology on Reservations. Journal of Law, Technology & Policy 2004 (1): 129–150. http://illinoisjltp.com/journal/wp-content/uploads/2013/10/bissell.pdf. Brescia, William, and Tony Daily. 2007. Economic Development and Technology-Skill Needs on American Indian Reservations. American Indian Quarterly 31 (1): 23–43. https://www.jstor. org/stable/4138893. Burch, Kurt. 2000. A Primer on Problem-Based Learning for International Relations Courses. International Studies Perspectives 1 (1): 31–44. https://www.jstor.org/stable/44218105. Bürgin, Reto, and Heike Mayer. 2020. Digital Periphery? A Community Case Study of Digitalization Efforts in Swiss Mountain Regions. In Smart Village Technology: Concepts and Developments, ed. Srikanta Patnaik, Siddhartha Sen, and Magdi S. Mahmoud, 67–98. Cham, CH: Springer Nature Switzerland. https://doi.org/10.1007/978-3-030-37794-6_4. Campbell, Wendy. 2018. The Impact of the Internet of Things (IoT) on the IT Security Infrastructure of Traditional Colleges and Universities in the State of Utah. In The Internet of People, Things and Services: Workplace Transformations, ed. Claire A. Simmers and Murugan Anandarajan, 132–153. New York: Routledge. Canevaro, Mary A. 2018. 3 Challenges of Fiber Deployment and How to Improve the Process. Alden, September 18, 2018, https://info.aldensys.com/ joint-use/3-challenges-of-fiber-deployment-and-how-to-improve-the-process Chen, Haiyan, Yaning Chen, Dalong Li, and Weihong Li. 2020. Effect of Sub-Cloud Evaporation on Precipitation in the Tianshan Mountains (Central Asia) Under the Influence of Global Warming. Hydrological Processes 34 (26): 5557–5566. https://doi.org/10.1002/hyp.13969. Chien, Nguyen D., Zhang K. Zhong, and Tran T. Giang. 2012. FDI and Economic Growth: Does WTO Accession and Law Matter Play Important Role in Attracting FDI? The Case of Viet Nam. International Business Research 5 (8): 214–227. https://doi.org/10.5539/ibr.v5n8p214. Clark, Alyssa. 2020. Creating the Virtuous Organization. Marriott Student Review 3 (3): 12. https:// scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=1204&context=marriottstudentreview. Dill, Patricia L., and Tracy B. Henley. 1998. Stressors of College: A Comparison of Traditional and Nontraditional Students. The Journal of Psychology: Interdisciplinary and Applied 132 (1): 25–32. https://doi.org/10.1080/00223989809599261. Durish, Nicolas. 2020. A Case Study in the Design and Development of a Community-Based Internet Assessment Initiative in Rigolet, Nunatsiavut, Canada. MSc thesis, University of Guelph. https://hdl.handle.net/10214/21287. Eccles, Jacquelynne S., and Bonnie L. Barber. 1999. Student Council, Volunteering, Basketball, or Marching Band: What Kind of Extracurricular Involvement Matters? Journal of Adolescent Research 14 (1): 10–43. https://doi.org/10.1177/0743558499141003. Engel-Smith, Liora. 2021. In North Carolina’s Mountains, Broadband Isn’t a Given. North Carolina Health News, July 7, 2021, https://www.northcarolinahealthnews.org/2021/07/07/ in-north-carolinas-mountains-broadband-isnt-a-given/ FAO Mountain Partnership. (n.d.-a). Members. FAO. https://www.fao.org/mountain-partnership/ members/en/ ———. (n.d.-b). Mountain Partnership: About. FAO. https://www.fao.org/mountain-partnership/ about/en/ ———. (n.d.-c). Mountain Partnership: Peak to Peak. FAO. https://www.fao.org/ mountain-partnership/peak-to-peak/current-issue/en/ Feynman, Richard P. 1988. 
In What Do You Care What Other People Think?: Further Adventures of a Curious Character, ed. Ralph Leighton. New York: W.W. Norton & Co. Foubert, John D., and Lauren A. Urbanski. 2006. Effects of Involvement in Clubs and Organizations on the Psychosocial Development of First-Year and Senior College Students. Journal of Student Affairs Research and Practice (NASPA Journal) 43 (1): 166–182. https:// doi.org/10.2202/1949-6605.1576.
France-Presse, Agence. 2021. More Than a Third of World’s Population Have Never Used Internet, Says UN. The Guardian, November 30, 2021. https://www.theguardian.com/technology/2021/nov/30/more-than-a-third-of-worlds-population-has-never-used-the-internet-says- un?CMP=oth_b-aplnews_d-1 Gaubert, Julie. 2021. UNESCO Member Countries Adopt First Global Agreement on the Ethics of Artificial Intelligence. EuroNews, updated November 26, 2021. https://www.euronews.com/ next/2021/11/26/unesco-member-countries-adopt-first-global-agreement-on-the-ethics-of- artificial-intellige Giuliani, Dario, and Sam Ajadi. 2019. 618 Active Tech Hubs: The Backbone of Africa’s Tech Ecosystem. GSMA, July 10, 2019, https://www.gsma.com/mobilefordevelopment/ blog/618-active-tech-hubs-the-backbone-of-africas-tech-ecosystem/ Glenn, Linda MacDonald. 2012. Case Study: Ethical and Legal Issues in Human Machine Mergers (or the Cyborgs Cometh). Annals of Health Law 21 (1): 175–180. https://lawecommons.luc. edu/cgi/viewcontent.cgi?article=1024&context=annals. ———. 2018. What Is a Person? In Posthumanism: The Future of Homo sapiens, ed. Michael Bess and Diana Walsh Pasulka, 1st ed., 229–246. Farmington Hills, MI: Macmillan Reference USA. Hackney, Darian. 2015. Mitigating Climate Change Impact on Mountain Livelihoods Through Students Efforts. Utah International Mountain Forum, December 23, 2015. https://www.uvu. edu/utahimf/blog/1512climate_petition.html Hauschildt, Kristina, Christoph Gwosć, Nicolai Netz, and Shweta Mishra. 2015. Social and Economic Conditions of Student Life in Europe: EUROSTUDENT V 2012–2015 Synopsis of Indicators. Bielefeld, DE: W. Bertelsmann Verlag. Ho-Wisniewski, Evelyn. 2020. Non-Traditional Students Fall 2019. Orem, UT: Utah Valley University Institutional Research. https://www.uvu.edu/ir/docs/executive_briefings/student_ demographics/fall_2019_non-traditional_students.pdf Iqbal, Muhammad, Haji K. Khan, and Zaheer Abbas. 2021. Inequities and Challenges Faced by the Girl Students in Accessing the Information Communication Technology in the Mountainous Region of Pakistan. Research Journal of Social Sciences and Economics Review 2 (1): 488–496. https://doi.org/10.36902/rjsser-vol2-iss1-2021(488-496). Jara, Francisco, Miguel Lagos-Zúñiga, Rodrigo Fuster, Cristian Mattar, and James McPhee. 2021. Snow Processes and Climate Sensitivity in an Arid Mountain Region, Northern Chile. Atmosphere 12 (4): 520. https://doi.org/10.3390/atmos12040520. Jaynes, Tyler L. 2021a. The Question of Algorithmic Personhood and Being (Or: On the Tenuous Nature of Human Status and Humanity Tests in Virtual Spaces—Why All Souls Are ‘Necessarily’ Equal When Considered as Energy). J-Multidisciplinary Scientific Journal 4 (3): 452–475. https://doi.org/10.3390/j4030035. ———. 2021b. On Human Genome Manipulation and Homo technicus: The Legal Treatment of Non-Natural Human Subjects. AI and Ethics 1 (3): 331–345. https://doi.org/10.1007/ s43681-021-00044-5. ———. 2021c. The Legal Ambiguity of Advanced Assistive Bionic Prosthetics: Where to Define the Limits of ‘Enhanced Persons’ in Medical Treatment. Clinical Ethics 16 (3): 171–182. https://doi.org/10.1177/1477750921994277. ———. 2021d. ‘I Am Not Your Robot:’ The Metaphysical Challenge of Humanity’s AIS Ownership. AI & Society. Online First 37: 1689. https://doi.org/10.1007/s00146-021-01266-1. Khan, Ashfak Ahmad, and Mehmet Somuncu. 2019. Is Rural Out Migration Constructive for Sustainable Development? A Case of Mountain Community of Kıbrıscık, Turkey. 
Journal of the Human and Social Science Researches 8 (4): 3337–3352. https://doi.org/10.15869/ itobiad.617221. [“Kırsal Alanlaından Dışarı Göç Sürdürülebilir Kalkınma İçin Değerli mı? Kıbrıscık Dağ Topluluğu Örneği, Türkiye.” İnsan ve Toplum Bilimleri Araştırmaları Dergisi 8, no. 4 (2019): 3337–3352.]. Kumagai, Fumie. 2020. Municipal Power and Population Decline in Japan: Goki-Shichido and Regional Variations. Singapore, SG: Springer Nature Singapore. https://doi. org/10.1007/978-981-15-4234-3.
Leskin, Paige. 2019. The 50 Most High-Tech Cities in the World. Business Insider, updated April 2, 2019. https://www.businessinsider.com/most-innovative-cities-in-the-world-in-2018-2018-11 Li, Fei-Fei. 2021. America’s Global Leadership in Human-Centered AI Can’t Come from Industry Alone. The Hill, June 6, 2021. https://thehill.com/opinion/technology/561638-americas- global-leadership-in-human-centered-ai-cant-come-from-industry Logan, Wendy L., and Janna L. Scarborough. 2008. Connections Through Clubs: Collaboration and Coordination of a Schoolwide Program. Professional School Counseling 12 (2): 157–161. https://doi.org/10.1177/2156759X0801200212. López, Mariana. 2020. Top 10 Tech Hubs of Latin America in 2020. Contxto. https://contxto.com/ en/news/top-tech-hubs-latin-america/ Lubbock, John. 1894. The Use of Life. London: MacMillan and Co. Marr, Bernard. 2021. How Do We Use Artificial Intelligence Ethically? Forbes, September 10, 2021. https://www.forbes.com/sites/bernardmarr/2021/09/10/ how-do-we-use-artificial-intelligence-ethically/?sh=6d87460079fd Meisenzahl, Mary. 2019. The Most Incredible Perks Silicon Valley Workers Can Take Advantage of, from Free Rental Cars to Travel Stipends. Business Insider, September 15, 2019, https:// www.businessinsider.com/perks-that-silicon-valley-workers-can-take-advantage-of-2019-9 Mukhopadhyay, Kausiki, Pallab Paul, and Indeesh Mukhopadhyay. 2020. The Politics of Knowledge Economy and Sustainability of Tribal Knowledge and Health in India. International Journal of Business and Society 21 (2): 955–976. https://doi.org/10.33736/ijbs.3305.2020. O’Toole, Tom. 2021. Legacy Companies’ Biggest AI Challenge Often Isn’t What You Might Think. Forbes, September 15, 2021, https://www.forbes.com/sites/tomotoole/2021/09/15/ legacy-companies-biggest-ai-challenge-often-isnt-what-you-might-think/?sh=590300436184 Organisation for Economic Co-operation and Development [OECD].AI. 2021. OECD.AI Policy Observatory: National AI Policies & Strategies. EC/OECD. https://oecd.ai/en/dashboards Pagano, Wyatt. 2017. Where are the Women of Silicon Slopes? Marriott Student Review 1 (2): 11. https://scholarsarchive.byu.edu/marriottstudentreview/vol1/iss2/11. Pelletier, Stephen G. 2010. Success for Adult Students. Public Purpose 12: 2–6. https://www.aascu. org/uploadedFiles/AASCU/Content/Root/MediaAndPublications/PublicPurposeMagazines/ Issue/10fall_adultstudents.pdf. Price, Martin F. 2004. Introduction: Sustainable Mountain Development from Rio to Bishkek and Beyond. In Key Issues for Mountain Areas, ed. Martin F. Price, Libor F. Jansky, and Andrei A. Iastenia, 1–19. New York: United Nations University Press. Pick, James B., Avijit Sarkar, and Jeremy Johnson. 2015. United States Digital Divide: State Level Analysis of Spatial Clustering and Multivariate Determinants of ICT Utilization. Socio- Economic Planning Sciences 49: 16–32. https://doi.org/10.1016/j.seps.2014.09.001 Price, Martin F., and Thomas Kohler. 2013. Sustainable Mountain Development. In Mountain Geography: Physical and Human Dimensions, ed. Alton C. Byers, Donald A. Friend, Thomas Kohler, and Larry W. Price, 333–365. Los Angeles: University of California Press. Reidl, Mark O. 2019. Human-Centered Artificial Intelligence and Machine Learning. Human Behavior and Emerging Technologies 1 (1): 33–36. https://doi.org/10.1002/hbe2.117. Romeo, Rosalaura, Fabio Grita, Fabio Parisi, and Laura Russo, eds. 2020. Vulnerability of Mountain Peoples to Food Insecurity: Updated Data and Analysis of Drivers. 
Rome: FAO and UN Convention to Combat Desertification (UNCCD). https://www.fao.org/3/cb2409en/ cb2409en.pdf. Romero, Simon. 2021. Booming Utah’s Weak Link: Surging Air Pollution. The New York Times, September 7, 2021. https://www.nytimes.com/2021/09/07/us/great-salt-lake-utah-air- quality.html Rosalsky, Greg. 2021. Why Remote Work Might Not Revolutionize Where We Work. NPR: Planet Money, July 13, 2021. https://www.npr.org/sections/money/2021/07/13/1015286147/ why-remote-work-might-not-revolutionize-where-we-work
Rose, Joel. 2020. Canada Wins, U.S. Loses In Global Fight For High-Tech Workers. NPR, January 27, 2020. https://www.npr.org/2020/01/27/799402801/ canada-wins-u-s-loses-in-global-fight-for-high-tech-workers Shneiderman, Ben. 2020. Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-centered AI Systems. ACM Transactions on Interactive Intelligent Systems 10 (4): 26. https://doi.org/10.1145/3419764. Silversmith, Shondiin. 2021. Indigenous Traditional Knowledge to be Included in US Efforts Against Climate Change for the First Time. Arizona Mirror, November 16, 2021. https://www. azmirror.com/2021/11/16/indigenous-traditional-knowledge-to-be-included-in-us-efforts- against-climate-change-for-first-time/ Spoon, Jeremy, Brittany Kruger, Richard Arnold, and M. Kate. 2021. Barcalow, and the Tribal Revegetation Committee (TRC). In Tribal Revegetation Project Final Project Report: 92-Acre Area, Area 5 Radioactive Waste Management Complex, Nevada National Security Site, Nevada. Portland, OR: Portland State University. https://doi.org/10.2172/1773633. Starr, S. Frederick. 2004. Conflict and Peace in Mountain Societies. In Key Issues for Mountain Areas, ed. Martin F. Price, Libor F. Jansky, and Andrei A. Iastenia, 169–180. New York: United Nations University Press. Su, Norman Makoto. 2020. Threats of the Rural: Writing and Designing with Affect. In HCI Outdoors: Theory, Design, Methods and Applications, ed. D. Scott McCrickard, Michael Jones, and Timothy L. Stelter, 51–79. Cham, CH: Springer Nature Switzerland. https://doi. org/10.1007/978-3-030-45289-6_3. Suzuki, Keisuke. 2011. Effects of Global Warming on Climate Conditions in the Japanese Alps Region. In Planet Earth 2011: Global Warming Challenges and Opportunities for Policy and Practice, ed. Elias G. Carayannis, 73–88. Rijeka, HR: InTech. Tanner, Courtney. 2021. $15 Million to the University of Utah and $25 Million to Utah Valley University Will Expand Computer Science Programs. The Salt Lake Tribune, updated November 1, 2021. https://www.sltrib.com/news/education/2021/10/31/utah-universities-arent/ Thapa, Devinder, and Øystein Sæbø. 2014. Exploring the Link between ICT and Development in the Context of Developing Countries: A Literature Review. The Electronic Journal of Information Systems in Developing Countries 64 (1): 1–15. https://doi.org/10.1002/j.16814835.2014.tb00454.x. Timpson, William M., Jeffrey M. Foley, Nathalie Kees, and Alina M. Waite, eds. 2014. 147 Practical Tips for Using Experiential Learning. Madison, WI: Atwood Publishing. Tshering, Phuntshok C. 2002. Celebrating Mountain Women: A Report on a Global Gathering in Bhutan, October 2002. Lalitpur, NP: International Centre for Integrated Mountain Development. Tuminez, Astrid S. 2020. Guest Opinion: Stringent Completion Measures Don’t Tell the Whole Story at Utah Universities. Deseret News, April 19 2020, https://www.deseret.com/opinion/2020/4/19/21224282/uvu-education-inclusivity-equality-growth-college-applications- graduation UN ECOSOC. 2018. Statement Submitted by Russian Academy of Natural Sciences, The Mountain Institute, Utah China Friendship Improvement Sharing Hands Development and Commerce, Non-Governmental Organizations in Consultative Status with the Economic and Social Council, Statement by Non-Governmental Organization E/CN.6/2019/NGO/64 (19 November 2018). https://undocs.org/E/CN.6/2019/NGO/64 ———. 2020. 
Statement Submitted by Russian Academy of Natural Sciences, The Mountain Institute, Utah China Friendship Improvement Sharing Hands Development and Commerce, Non-Governmental Organizations in Consultative Status with the Economic and Social Council, Statement by Non-Governmental Organization E/CN.6/2020/NGO/91 (30 November 2020). https://undocs.org/E/CN.6/2020/NGO/91 UN Educational, Scientific and Cultural Organization [UNESCO]. 2021. Report of the Social and Human Sciences Commission (SHS) to the 41st UNESCO General Conference, Annex, 41 C/37 (22 November 2021). https://unesdoc.unesco.org/ark:/48223/pf0000379920.page=14
UN GA. 2015. Transforming Our world: The 2030 Agenda for Sustainable Development, Resolution 70/1 (21 October 2015). https://www.un.org/en/development/desa/population/ migration/generalassembly/docs/globalcompact/A_RES_70_1_E.pdf ———. 2019. Sustainable Mountain Development: Report of the Secretary-General, 74/209 (22 July 2019). https://digitallibrary.un.org/record/3825219?ln=en UN General Assembly [UN GA]. 2003. International Year of Mountains, 2002, Resolution 57/245 (30 January 2003). http://www.fao.org/fileadmin/user_upload/mountain_partnership/docs/A_ RES_57_245.pdf United Nations [UN] Economic and Social Council [ECOSOC]. 2018. Statement Submitted by Russian Academy of Natural Sciences, The Mountain Institute, Utah China Friendship Improvement Sharing Hands Development and Commerce, Non-Governmental Organizations in Consultative Status with the Economic and Social Council, Statement by Non-Governmental Organization E/CN.6/2018/NGO/37/Rev.1 (20 Feb 2018). https://undocs.org/E/CN.6/2018/ NGO/37/Rev.1 Utah State Legislature. 2020. Senate. Emerging Technology Talent Initiative. S.B. 96, 63rd Legislature, General Sess. https://le.utah.gov/~2020/bills/static/SB0096.html Utah Valley University [UVU] Institutional Research. 2020. 2020 Fact Book, Utah Valley University. Utah Valley University 2020. https://www.uvu.edu/ir/docs/info_about_uvu/fact_ books/2020_factbook.pdf Vision 2030. 2020. Utah Valley University. https://www.uvu.edu/vision2030/ Wagner, Frederic H. 2007. Global Warming Effects on Climactically-Imposed Ecological Gradients in the West. Journal of Land, Resources, & Environmental Law 27 (1): 109–121. Whitney, John. 2020. Socio-Economic Status for Students at UVU. Orem, UT: Utah Valley University Institutional Research. https://www.uvu.edu/ir/docs/executive_briefings/student_ demographics/2020_socio_economic_status.pdf. Wilkins, Emily J., Hadia Akbar, Tara C. Saley, Rachel Hager, Colten M. Elkin, Patrick Belmont, Courtney G. Flint, and Jordan W. Smith. 2021. Climate Change and Utah Ski Resorts: Impacts, Perceptions, and Adaptation Strategies. Mountain Research and Development 41 (3): R12– R23. https://doi.org/10.1659/MRD-JOURNAL-D-20-00065.1. Wyatt, Linda G. 2011. Nontraditional Student Engagement: Increasing Adult Student Success and Retention. The Journal of Continuing Higher Education 59 (1): 10–20. https://doi.org/10.108 0/07377363.2011.544977.
Gender, Health, and AI: How Using AI to Empower Women Could Positively Impact the Sustainable Development Goals
Tomás Gabriel García-Micó and Migle Laukyte
Abstract There appears to be something wrong if a person's health is related to gender. Indeed, we might have continued to link this dependency (health-gender) to other factors—such as education or income—had it not been for the use of artificial intelligence-based systems in medicine and healthcare, which made us more aware of a broader picture of how medical research and practice has not taken male and female bodies into account equally. Nonetheless, AI has to be trustworthy, and for that purpose, it shall be lawful, ethical, and robust. But how lawful and ethical can it be if it leaves half of humanity out of the picture? Hence the focus of this chapter is to address how medical AI could positively impact the achievement of gender equality as a Sustainable Development Goal (SDG). In particular, we use several use cases to highlight how medical AI applications have made it evident that there is an enormous data gap between male and female sex involvement in clinical trials, disease treatment, and other medical therapies and that this data gap is the reason why so many AI applications are biased, limited, and inefficient. Filling this gap would mean improving and increasing data generation that would reflect the particularities and specificities of female bodies and enable female representation in training algorithms.
Keywords Gender · AI · Health · Empowerment · Sustainable Development Goals · Discrimination
T. G. García-Micó (*) Private Law Department, Universitat de Barcelona, Barcelona, Spain e-mail: [email protected] M. Laukyte Pompeu Fabra University, Barcelona, Spain e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_16
1 Introduction
Mahatma Gandhi once said that health is the real wealth, yet this health-wealth still very much depends on whether a person is a man or a woman. This is a fact in developing countries but also a reality in rich and developed Western societies where healthcare services represent the national pride of social welfare systems (WHO 2016). Therefore, there appears to be something wrong if a person's health is related to gender. Indeed, we might have continued to link this dependency (health-gender) to other factors—such as education, income, or social policies—had it not been for the technological advancements and the use of artificial intelligence-based (hereinafter, AI) systems in medicine and healthcare, which made us more aware of a broader picture of how medical research and practice has not taken male and female bodies into account equally. Research has proven that AI systems, although in certain aspects better than humans, are unlikely to completely substitute for physicians (among many, Ahuja 2019). Nonetheless, AI has to be trustworthy, and for that purpose, it shall be lawful, ethical, and robust (High-Level Expert Group on Artificial Intelligence 2019a). But how lawful and ethical can it be if it leaves half of humanity out of the picture? In fact, AI needs to receive specific input in order to learn from it before engaging in its analysis and predictions. What happens if the team in charge of providing such input is—deliberately or not—biased and provides only information about male patients? Probably, the AI system will not be as trustworthy with female patients as it will be with male patients. Hence the focus of this chapter is to address how medical AI could positively impact the achievement of gender equality as a Sustainable Development Goal (SDG). In particular, we use several use cases to highlight how medical AI applications have made it evident that there is an enormous data gap between male and female sex involvement in clinical trials, disease treatment, and other medical therapies (Liu and Dipietro Mager 2016; Dusenbery 2018; Criado Perez 2019, among many) and that this data gap is the reason why so many AI applications are biased, limited, and inefficient. Filling this gap would mean improving and increasing data generation that would reflect the particularities and specificities of female bodies and enable female representation in training algorithms.
The above has led us to organise the chapter as follows: In the first part, and after a short description of the state of the art of AI in medicine, we focus on the use cases that evidence the lack of female health data in developing AI-based medical solutions. Then, in the second part, we explain the link between gender-balanced AI tools in medicine and SDGs. In particular, we show not only how more gender-balanced and inclusive AI-based medical tools could allow us to improve female health but also how this improvement would positively reverberate throughout other SDGs, such as those related to good health, economic growth, innovation, and reduced inequalities. We finish with concluding remarks.
Finally, one more important note before starting: When we refer to gender, we refer to male-female genders. Although we are also aware that this approach is limited and does not reflect other, non-binary identities, in this chapter we will focus on the binary perspective.1
2 AI, Medicine, and Gender 2.1 State of the Art: AI in Medicine AI in healthcare is not a futuristic issue but rather a current reality. As the Academy of Medical Royal Colleges (2019, 6) stated in its 2019 report, “Artificial Intelligence has already arrived in healthcare. Few doubt though that we are only at the beginning of seeing how it will impact patient care.” In fact, it is currently used in a variety of settings, for instance, the AI-supported IDx-DR system diagnoses diabetic retinopathy (Meiliana et al. 2019), or it can also be used to diagnose stroke and autism (Petrone 2018). In any event, for the time being, AI is not taking decisions on its own but supporting physicians in decision-making processes: Complete automation of healthcare is still a very distant reality (Abbott 2020). This supportive role of AI still means a lot: AI helps physicians to diagnose patients’ diseases with high accuracy. In the EU, one among many is, for instance, the REVOLVER (Repeated Evolution of Cancer) project, developed by the Institute of Cancer Research of London and the University of Edinburgh.2 Crossing the Atlantic, we discover that regarding AI use in healthcare, the US Food and Drug Administration (FDA) provides the list of 29 AI-based medical devices3 that have been approved in the US (Benjamens et al. 2020)4 and all of them are intended to be used to support the physician in diagnosing or assessing how to treat a specific patient according to the data collected from medical tests practised upon the patient. In any case, healthcare-related applications of AI fall under the legal definition of medical device and, therefore, require a regulatory control by competent national authorities, which either approve or prohibit the commercialisation of the product as
1 For more about the research involving these identities in medical and other domains, see Marshall et al. (2019).
2 According to Dr Andrea Sottoriva, the team leader in evolutionary genomics and modelling at ICR and the REVOLVER study leader, the machine learning technique has the ability to "identify patterns in DNA mutation within cancers and forecast future genetic changes," and it is expected to "transform the way cancer is diagnosed, managed and treated" (Health Europa 2018).
3 Out of these 29 medical devices, 21 are used in the medical specialty of radiology (2 in cardiology, 6 in oncology, 3 in neurology, and 4 in emergency medicine, while in the others, there is no secondary medical specialty clearly stated), 1 in neurology, 1 in ophthalmology, 2 in endocrinology, 3 in cardiology, and 1 in internal medicine.
4 It is hard to envisage how AI is used in the EU as there is no public database to consult for this information. According to the MDR, there should soon be a database, Eudamed, for this purpose.
294
T. G. García-Micó and M. Laukyte
is the case of the US FDA,5 or will supervise the approval process while most of the job is performed by the public (as is the case with the Spanish National Centre for the Certification of Medical Devices) or private (TÜV Rheinland in Germany) entities called notified bodies.6 This is to say that there are quite a few AI-based uses in healthcare right now approved by authorities or work in progress by researchers that promise to make a positive change in healthcare. But promise does not mean delivery, so much more so if the data these AI are built on are not inclusive enough. In the next section, we briefly explain how it happens that AI are so gender ignorant.
2.2 AI in Practice: How Do AIs Work?

So as to understand why AI applications in healthcare might not be as representative of women as they should be, it is first necessary to understand how an AI-based tool works. A sad reality: Software, computers, or AI do not discriminate, but humans do. At the end of the day, AI feeds on data coming from a variety of sources, such as electronic medical records (EMR), clinical trials, tests, and other kinds of data, and it is necessary—fundamental and critical—that someone make sure that complete, correct, accurate, and as representative as possible datasets are available to train the AI on (Osoba and Welser 2017; Littman et al. 2021). We have seen what happens when these datasets include more data on white men than on black women (Borgesius 2018, 29): The AI produces erroneous and imprecise results. And it is not just healthcare: A study on machine learning-based facial analysis algorithms and datasets, particularly commercial gender classifiers, shows that the system based on such algorithms was clearly biased and that the margin of error was higher when the AI had to detect a woman. Indeed, in the case of woman detection, the error rate is more than 12% higher than for men (Boulamwini and Gebru 2018). If we look at the way AI functions, we can distinguish a few phases. In the first phase, the developer provides the software with code that allows it to process information (Bathaee 2018). Then, in the second phase, once the AI is installed, it obtains and processes data through sensors installed in the hardware (cameras, microphones, keyboards, websites, and thermometers, among others).7 In the third phase, the AI groups this data for the physician by discovering links and patterns, by (re)organising data, and by performing other tasks, so that he or she can have a more individual, patient-focused perspective on the data. In the future, this latter function could evolve to determine the best course of action (choice of treatment, drugs, therapies, etc.) that the physician should follow so as to help patients. If the AI is developed enough, it will be able to execute this course of action through its actuators (High-Level Expert Group on Artificial Intelligence 2019b). Nonetheless, the key issue, after providing the AI with data, is to understand how the system transforms the data it receives from the environment (temperature, images, sounds or ultrasounds, written text, etc.) into uniformly coded data that the AI can process and in which it can discover patterns. The answer is not easy, as it depends on the specific technology the AI is based on, which also defines its complexity: For instance, it could be machine learning techniques that include deep learning and reinforcement learning, machine reasoning, or robotics (Independent High-Level Expert Group on Artificial Intelligence 2019b). Therefore, the whole process of data processing by AI can be an enigma even for its own developers. This phenomenon is known as the black box (Bathaee 2018; Watson et al. 2019, among many): Black box algorithms are those whose functioning—that is, the way in which the algorithm moves from input (data) to output (result)—is not known. Indeed, the workings of artificial neural networks are impenetrable and remain unreadable to humans (Bathaee 2018). The black box in this sense represents a lack of transparency, and, in particular as concerns healthcare, patients expect a very high level of safety and security in the healthcare system and in the (AI-based or not) medical devices it relies on. That is why, in recent years, there has been a proliferation of scientific literature discussing how to make AI algorithms explainable and interpretable: Among many, researchers have elaborated the concept of Explainable AI (hereinafter, XAI) (Gunning et al. 2019; Barredo Arrieta et al. 2020, among many). If this aim is reached—if we manage to build AI that, however complex, humans can still understand in terms of how it reaches its decisions and predictions—people will trust such systems,8 which will be open to show how they avoid biases, meet regulatory standards and normative and policy requirements, and contribute to the development of better design of AI in healthcare (The Royal Society 2019, 9–10). Knowing how the AI processes data—how its "brain" works—is also crucial to understanding which dataset has been fed into the machine, and consequently whether the outcomes are biased or not. This knowledge clearly impacts the scope of this research: In the following part, we look at real-life examples of how AI is not meeting expectations of gender equality.

5 According to the FDA, there are three classes of medical devices: class I, class II, and class III. Depending on the risk associated with the use of the device, the intended uses, the duration of the use, etc., a medical device is classified in one class or another. Class I devices are subject to general controls (such as good manufacturing practices, labelling requirements, etc.); class II devices to special controls as determined by the FDA on a case-by-case basis, which require that the manufacturer file a premarket notification with the FDA; and class III devices need to go through the most stringent regulatory process: premarket approval.
6 According to the provisions of Regulation (EU) 2017/745 of the European Parliament and of the Council, of 5 April 2017, on medical devices (hereinafter, the MDR).
7 It is worth highlighting that the sensors are installed in the hardware in the case of embedded software, but if we are dealing with stand-alone software, the process of obtaining information is done through non-physical sensors. An example could be an Internet browser or website which uses cookies to obtain information about the user's search preferences to provide him or her with a more personalised experience. In this regard, women generally have less access to any kind of technology, including but not limited to the Internet (Cirillo et al. 2020).
8 Humans should not be seen through the lens of objectiveness. We are not born to act in terms of all-or-nothing dynamics. In terms of economic rationality, this scenario is not desired, as economic theories applied to human behaviour (behavioural economics) consider that humans can be nudged towards reaching a specific objective by changing the incentives at stake. A person might prefer a more fallible treatment that grants him or her a 50% chance of being cured (the other 50% being an innocuous result) over a treatment that is promised to be more effective but with an unknown rate of error.
2.3 Use Cases: Where Are All the Women Gone?

The practice of ignoring women in medicine is nothing new. Nor is it limited to humans, because female animals have also been left out of neuroscience and biomedical research (Beery and Zucker 2011; McGregor et al. 2016). Nor is it just the case of medicine and healthcare: From seatbelts to emojis, from movies to historical figures on banknotes, from statues in our public spaces to school textbooks, from sports to comics, everywhere women are underrepresented, forgotten, or simply absent (Criado Perez 2019). It is true that many decisions on what data to use and what datasets to employ are based on data availability rather than on its suitability (Schwartz et al. 2021): Lack of female clinical data, caused by insufficient female involvement in medical research, is one of the main causes, and this lack of representation is the most common bias that we find introduced into AI (Cirillo et al. 2020).9 Digital biomarkers are another powerful tool enabled by digital smart technologies, which make it possible to collect a variety of psychological, physiological, and behavioural indicators through human-computer interfaces or wearables, portables, implantables, or other devices (Cirillo et al. 2020). However, the data collected by these biomarkers is useless—and the algorithm using this data is biased—if these markers feed a dataset overrepresented by men (ibid.). Furthermore, the lack of data in certain cases is completely unjustified because it is women and not men who mainly suffer from certain ailments, and yet the treatments are designed, drugs tested, and procedures elaborated on men. For instance, women suffer from depression more than men because of female hormonal fluctuations of oestrogen, yet researchers rely on male bodies to test drugs or therapies because men do not suffer from behavioural alterations related to these hormonal fluctuations (Albert 2015). There are many more examples of female-male differences in many other organs, their functioning, frequency of diseases, reactions to vaccines and drugs, sensitivity to pain, and so on and so forth (Criado Perez 2019; Cirillo et al. 2020). For instance, everyone can remember the main symptoms of a heart attack: In men these symptoms are mainly extreme chest pressure, difficulty speaking, and pain in the right arm. But in women, they are different and include indigestion, discomfort or pain in the upper part of the body, or shortness of breath (Shannon 2018). Let us focus on cardiology: There are five AI-based cardiology medical devices on the market in the USA, namely, the Arterys Cardio DL, the EchoMD Automated Ejection Fraction Software, the AI-ECG Platform, the EchoGo Core, and the Eko Analysis Software (Benjamens et al. 2020). The issue with the abovementioned cardiological AI-based medical devices lies with the data provided, or in particular with the lack of it, as women and minority groups have been traditionally underrepresented in the field of cardiology (Tat et al. 2020). Furthermore, Tahhan et al. (2020) also showed that, in a review of 460 acute coronary syndrome clinical trials enrolling 1,067,520 patients, women represented 26.8% of participants and men 73.2%. Other studies (Daly et al. 2006; Liaudat et al. 2018) also show that men are two to three times more likely than women to be sent to a cardiologist when they describe feeling chest pain. Another example is AI-based computer-aided diagnosis (hereinafter CAD) systems for various thoracic diseases. Researchers used the National Institutes of Health's ChestX-ray14 dataset, including more than a hundred thousand chest X-ray images belonging to more than thirty thousand patients who were diagnosed with a myriad of different thoracic diseases. In terms of gender, the population was 56.5% male and 43.5% female. In order to perform the study, different scenarios were created for the AI-based CAD system: 100% male–0% female images, 75% male–25% female images, 50% male–50% female images, 25% male–75% female images, and 0% male–100% female images (Larrazabal et al. 2020). The finding was that when the dataset is perfectly balanced—that is, when it contains 50% male and 50% female images—the AI-based CAD system performs better for both genders, without any relevant gender imbalance except for some specific diseases. It would not be fair to paint all AI in healthcare and medicine as gender-biased to the detriment of women: AI has also been used to address typically female medical problems, such as detection of endometriosis (Guerriero et al. 2021), polycystic ovary syndrome (Sumathi et al. 2021), and ovarian cancer (Akazawa and Hashimoto 2020), besides many others. However, we also see that these initiatives are very recent, and although AI has been around for decades, until recently it did not address or analyse the female body, taking for granted that the human body is male.10 At this point, it is important to remember something we stated earlier in this chapter: AI is not imbalanced, nor biased, nor does it blatantly discriminate. Everything is in the hands of those who design the software's code, of those who train the AI, and of those who compile the datasets that will feed the AI. Without the aim of oversimplifying the complex field we are in, research has shown that perfectly balanced datasets are the panacea to avoid gender imbalances and biases in medical AI (Larrazabal et al. 2020). Here is where ethics plays an important role: Instead of focussing on providing indiscriminately large datasets without a detailed analysis of the sample and population represented in them, physicians and AI developers should focus on studying the gender and racial implications of the data with which they will feed the AI.

9 Other biases are historical bias, measurement bias, aggregation bias, evaluation bias, and algorithmic bias: All these biases are explained in Cirillo et al. (2020).
10 There are many initiatives that contribute to making gender equality a reality in research settings: For a list of initiatives in AI, see UNESCO (2020); for recommendations on incorporating gender and sex in research, see McGregor et al. (2016).
3 Gender-Balanced AI for SDG

In the previous section, we briefly looked at the possibilities of AI in medicine and healthcare: To be sure, there is much more than we have space to describe. However, our goal is not to list all the applications that exist, but rather to question their trustworthiness and reliability, bearing in mind the lack of data on women to develop, train, and improve these applications. In this part, we turn to the Sustainable Development Goals (SDGs) and elaborate on how gender-inclusive and balanced AI in healthcare could contribute to achieving them: First of all, we focus on the gender equality SDG (SDG 5), and then we argue that there are also other SDGs—such as good health and well-being (SDG 3), economic growth (SDG 8), innovation (SDG 9), and reduced inequalities (SDG 10)—that could benefit from such AI.
3.1 Gender-Balanced AI and Gender Equality as a Part of Sustainable Development: Focus on SDG 5

Making AI more representative of the female part of the population in healthcare is obviously a beneficial trend for the SDG of gender equality: Indeed, taking women into account is the essence of gender equality. We stress again that AI in healthcare and medicine is just the tip of the iceberg if we consider that women generally have less access to the Internet, mobile phones, and other technologies (Cirillo et al. 2020). Therefore, gender equality is built not only by making technology companies take women into account while developing AI tools, teaching algorithms about the human body, and building datasets on female health (top-down approach) but also by making women able to participate in data collection by closing this gender-based digital divide (bottom-up approach). This is to say that AI alone cannot close the gender gap, because this gap exists not so much because of technical reasons or the digital divide but because of structural and endemic forms of female downgrading that every society suffers from to a greater or lesser extent. But let us assume that we will be able to close this gap at least in terms of healthcare AI-based applications: How would it affect women? This is a hypothetical scenario, but we need to visualise it just to understand what is at stake.
The AI-based applications that we have seen above—AI applications for cardiology and the CAD system for thoracic diseases—reveal the already known truth that women are not as well represented as men in medical research and clinical trials. If the lack of female data is a severe issue in itself, it can become even worse in an AI-based scenario: If we want a medical AI to perform—that is, to analyse, predict, reveal new patterns, or in other ways make us understand and discover more about the human body—the datasets it is trained on need to be as inclusive as possible, not only of female data but also of data on ethnic minorities (see Vinuesa et al. 2020). It is of utmost importance that further research in the medical field is focussed on producing data on women. If women are incentivised to take part in clinical trials and in applied research, we will have inclusive datasets which will be more representative and, therefore, will improve the trustworthiness of medical AI results when applied to any kind of medical condition. Doing so will set the path to reach the effective fulfilment of target 5.1 of SDG 5, which is to end all forms of discrimination against women and girls everywhere: To have and to use only male-data-based datasets is a direct discrimination against women because we are not granting them access to health in the same conditions as men (Vinuesa et al. 2020), which in turn goes against, among other aspects of health, target 5.6 related to access to sexual and reproductive health and reproductive rights. In fact, many authors are proposing that AIs should be programmed following value-sensitive design (VSD), understood as a "theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process" (Friedman et al. 2013, 2): Values represented by the SDGs could be a basis to articulate the needed design changes, in particular in medical AI applications (Umbrello et al. 2021). This might sound like a future action plan, but there are specific measures that could be undertaken by national authorities to make this plan a reality. For instance, when a medical AI is undergoing the conformity assessment by a notified body, it should be mandatory to prove that the datasets used to train the AI are inclusive. Only developers who prove this inclusivity should be allowed to commercialise their AI-based products. Furthermore, the inclusivity of datasets also poses another issue: The female reluctance to participate in clinical trials is a well-known problem (among many, see Liu and Dipietro Mager 2016), yet it should not be solved by getting this data from those in developing countries who might not be aware of their personal and medical data-related rights, rather than dealing with the reasons why data is missing in the first place in developed countries. In this regard, the adequate term is "data colonialism" (Couldry and Mejias 2019, 336), which must be avoided because it would normalise "the exploitation of human beings through data, just as historical colonialism appropriated territory and resources and ruled subjects to profit." Data colonialism would go a step further and exploit women from developing countries, pushing them to participate in trials and tests and thus produce data to train AI.
3.2 Impact on Other SDGs: Health, Economics, Innovation, and Inequalities—SDGs 3, 8, 9, and 10

It goes without saying that gender-balanced medical AI could also positively impact SDG 3, dedicated to health improvements: In particular, as concerns exclusively female problems, such as maternal mortality, reproductive rights, and reproductive health, we still do not have a wide spectrum of specifically female healthcare-oriented AI. Although things are slowly moving forward (for instance, Inne has developed a home fertility monitoring system that permits women to monitor their fertility on the basis of their saliva11), we need more investments and more applications to make sure that at least some of them will contribute—directly or indirectly—to addressing SDG 3 targets related to birth, reproduction, and newborn mortality. When it comes to other SDGs, in particular SDG 8 dedicated to guaranteeing decent work and economic growth, it is quite obvious that any technological innovation—including but not limited to AI—boosts economic growth (among many, Panth 1997). However, the challenge for the twenty-first century is not to promote economic growth at any price but to promote it in compliance with sustainability requirements. According to research published in Harvard Business Review, sustainability has always been "an integral part of development" (Nidumolu et al. 2009), and it has to remain so in the data-driven, AI-enhanced environment that we are already living in. We are still grappling with the idea of sustainable AI, and there is little to no academic literature on it (van Wynsberghe 2021): This author links sustainable AI to greater ecological integrity and social justice, and no social justice is possible if AI is unbalanced in terms of gender representation. Therefore, we can argue that gender-respectful AI in medicine and healthcare could—at least partially—fall under the concept of sustainable AI and contribute to SDG 8, which promotes "sustained, inclusive and sustainable economic growth," as specified in particular by targets 8.1–8.4.12 But the AI that we envision in this work goes further: Gender-oriented AI would also positively impact employment, which is another SDG 8 objective (in particular targets 8.5 and 8.6). In fact, building gender into AI means making AI developer teams gender-balanced in the first place. This could lead to a higher percentage of female employees in high-tech companies and to higher investment, attention, and support for gender balance in educational institutions, where computer science, software engineering, and similar subjects are taught and where female students, even in the most advanced countries, still represent a minority in enrolment statistics (Te-Ping 2020). The debate on economic growth and sustainability is inseparable from SDG 9, which focusses on inclusive and sustainable industrialisation and the fostering of innovation. For instance, target 9.5 refers to the objective of increasing the number of research and development workers, and this increase should be brought into being by taking into account gender balance, because without a gender-balanced workforce, we will not be able to develop gender-balanced technologies, including AI-based applications in medicine and healthcare. Realisation of SDGs 8 and 9 could positively echo in SDG 10, which aims to reduce inequalities and in particular highlights the importance of reaching income growth (target 10.1); social, economic, and political inclusion (target 10.2); and equal opportunities (target 10.3). It goes without saying that building gender-balanced technologies in any domain could positively impact these objectives: However, in the case of healthcare, this impact would be even greater because, turning back to the words of Mahatma Gandhi that health is wealth, gender-balanced AI in medicine would provide women all over the world with the biggest wealth there is.

11 More on this tool, see https://www.inne.io/en/home/
12 However, we do not address here the sustainability of AI development and in particular its impact on the environment that has been described in Strubell et al. (2019): The authors show the cost of training neural network models for Natural Language Processing—besides others—in terms of its impact on energy consumption and invite academic and industry stakeholders to choose environmentally friendly hardware and software.
4 Conclusions and Future Research

In this chapter, we have briefly looked at the promising uses that AI has been put to in the field of medicine and healthcare and have referred to real cases to support our thesis that there is a danger of perpetuating the trend of ignoring female data in developing—at least some of—the AI-based applications meant to improve our health, treat our diseases, and, in general, understand our bodies. Some readers might contest that the gender question is nothing new: Indeed, gender bias has for so long been—and continues to be!—an intrinsic part of our societies that getting rid of it takes time, and this continuous reminding about it does not help but rather irritates and provokes rejection. We see the point of this critique but do not agree: We have to continue talking about how women are continuously forgotten, not taken into account, or simply ignored, all the more so if taking them into account could lead not only to a more just and humane society but also to a better future for our planet. And this is where the major contribution of this work comes into play: We have argued that taking women into account not only saves women's lives and is beneficial in a variety of social, economic, cultural, and other ways but could also contribute to making our planet a better—safer and more sustainable—place for us and for future generations.
Bibliography Abbott, R. 2020. The Reasonable Robot: Artificial Intelligence and the Law. Cambridge: Cambridge University Press. Academy of Medical Royal Colleges. 2019. Artificial Intelligence in Healthcare. Available at https://www.aomrc.org.uk/wp-content/uploads/2019/01/Artificial_intelligence_in_healthcare_0119.pdf. Ahuja, A.S. 2019. The Impact of Artificial Intelligence in Medicine on the Future Role of the Physician. PeerJ: Life & Environment 7. Available at https://www.ncbi.nlm.nih.gov/pmc/ articles/PMC6779111/. Akazawa, M., and K. Hashimoto. 2020. Artificial Intelligence in Ovarian Cancer Diagnosis. Anticancer Research 40 (8): 4795–4800. Albert, P.R. 2015. Why Is Depression More Prevalent in Women? Journal of Psychiatry and Neuroscience 40 (4): 219–221. Barredo Arrieta, A., N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-López, D. Molina, R. Benjamins, R. Chatila, and F. Herrera. 2020. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI. Information Fusion 58: 82–115. Bathaee, Y. 2018. The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology 31 (2): 889–938. Beery, A.K., and I. Zucker. 2011. Sex Bias in Neuroscience and Biomedical Research. Neuroscience: Faculty Publications, Smith College, Northampton, MA. Available at https:// core.ac.uk/download/pdf/28735 5536.pdf. Benjamens, S., P. Dhunnoo, and B. Meskó. 2020. The State of Artificial Intelligence-Based FDA- Approved Medical Devices and Algorithms: An Online Database. NPJ Digital Medicine 118: 1–8. Borgesius, F.Z. 2018. Discrimination, Artificial Intelligence, and Algorithmic Decision-Making. Strasbourg: Council of Europe. Boulamwini, J., and T. Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 81: 1–15. Cirillo, et al. 2020. Sex and Gender Differences and Biases in Artificial Intelligence for Biomedicine and Healthcare. NPJ Digital Medicine 3(81). Available at https://www.nature.com/articles/ s41746-020-0288-5#citeas. Couldry, N., and U.A. Mejias. 2019. Data Colonialism: Rethinking the Big Data’s Relation to the Contemporary Subject. Television and New Media 20 (4): 336–349. Criado Perez, C. 2019. Invisible Women: Data Bias in World Designed for Men. New York: Abram Press. Daly, C., F. Clemens, J.L. Lopez Sendon, L. Tavazzi, E. Boersma, N. Danchin, F. Delahaye, A. Gitt, D. Julian, D. Mulcahy, W. Ruzyllo, K. Thygesen, F. Verheugt, and K.M. Fox. 2006. Gender Differences in the Management and Clinical Outcome of Stable Angina. Circulation 113: 490–498. Dusenbery, M. 2018. Doing Harm: The Truth About How Bad Medicine and Lazy Science Leave Women Dismissed, Misdiagnosed and Sick. New York: HarperOne. Friedman, B., P.H. Kahn Jr., A. Borning, and A. Huldtgren. 2013. Value Sensitive Design and Information Systems. In Early Engagement and New Technologies: Opening Up the Laboratory, ed. N. Doorn, D. Schuurbiers, I. van de Poel, and M.E. Gorman, 55–95. Dordrecht: Springer. Guerriero, S., et al. 2021. Artificial Intelligence (AI) in the Detection of Rectosigmoid Deep Endometriosis. European Journal of Obstetrics & Gynecology and Reproductive Biology 261: 29–33. Gunning, D., M. Stefik, J. Choi, T. Miller, S. Stumpf, and G. Yang. 2019. XAI-Explainable Artificial Intelligence. Science Robotics 4: 1–2. Health Europa. 2018. Towards Personalised Medicine: Artificial Intelligence in Cancer. 
Interview accessible at https://www.healtheuropa.eu/artificial-intelligence-in-cancer/88685/.
High-Level Expert Group on Artificial Intelligence. 2019a. Ethics Guidelines for Trustworthy AI. Report accessible at https://digital-strategy.ec.europa.eu/en/library/ ethics-guidelines-trustworthy-ai. ———. 2019b. A Definition of AI: Main Capabilities and Disciplines. Report accessible at https:// ec.europa.eu/futurium/en/system/files/ged/ai_hleg_definition_of_ai_18_december_1.pdf. Larrazabal, A.J., N. Nieto, V. Peterson, D.H. Milone, and E. Ferrante. 2020. Gender Imbalance in Medical Imaging Datasets Produces Biased Classifiers for Computer-Aided Diagnosis. Proceedings of the National Academy of Sciences 117 (23): 12592–12594. Liaudat, C.C., P. Vaucher, T. De Francesco, N. Jaunin-Stadler, L. Herzig, F. Verdon, B. Favrat, I. Locatelli, and C. Clair. 2018. Sex/Gender Bias in the Management of Chest Pain in Ambulatory Care. Women’s Health 14: 1–9. Littman, M.L., I. Ajunwa, G. Berger, C. Boutilier, M. Currie, F. Doshi-Velez, G. Hadfield, M.C. Horowitz, C. Isbell, H. Kitano, K. Levy, T. Lyons, M. Mitchell, J. Shah, S. Sloman, S. Vallor, and T. Walsh. 2021. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence. Available at https://ai100.stanford.edu/2021-report/ gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence. Liu, K.A., and N.A. Dipietro Mager. 2016. Women’s Involvement in Clinical Trials: Historical Perspective and Future Implications. Pharmacy Practice 14(1). Available at https://www.ncbi. nlm.nih.gov/pmc/articles/ PMC4800017/. Marshall, Z., et al. 2019. Documenting Research with Transgender, Nonbinary, and Other Gender Diverse (Trans) Individuals and Communities: Introducing the Global Trans Research Evidence Map. Transgender Health 4(1). Available at https://www.liebertpub.com/doi/full/10.1089/ trgh.2018.0020. McGregor, A.J., M Hasnain, K Sandberg, M.F Morrison, M Berlin and J Trott. 2016. How to Study the Impact of Sex and Gender in Medical Research: A Review of Resources. Biology of Sex Differences 7 (Suppl 1): 61–72. Meiliana, A., et al. 2019. Artificial Intelligence in Healthcare. The Indonesian Biomedical Journal 11 (2): 125–135. Nidumolu, R., et al. 2009. Why Sustainability Is Now the Key Driver of Innovation. Harvard Business Review, September 2009. Available at https://hbr.org/2009/09/ why-sustainability-is-now-the-key-driver-of-innovation. Osoba, O., and W. Welser IV. 2017. An Intelligence in Our Image. The Risks of Bias and Errors in Artificial Intelligence. RAND Corporation. Available at https://www.rand.org/content/dam/ rand/pubs/research_reports/RR1700/RR1744/RAND_RR1744.pdf. Panth, S. 1997. Technological Innovation, Industrial Evolution, and Economic Growth. London/ New York: Garland Publishing. Petrone, J. 2018. FDA Approves Stroke-Detecting AI Software. Nature Biotechnology 36: 290. Schwartz, R., et al. 2021. A Proposal for Identifying and Managing Bias in Artificial Intelligence, Draft NIST Special Publication 1270. National Institute of Standards and Technology. Available at https://nvlpubs.nist.gov/nistpubs/Special Publications/NIST.SP.1270-draft.pdf. Shannon, J. 2018. Heart Attack – It’s Different for Women. Irish Heart Foundation. Available at https://irishheart.ie/news/heart-attack-its-different-for-women/. Strubell, et al. 2019. Energy and Policy Considerations for Deep Learning in NLP. Available at https://arxiv.org/pdf/1906.02243.pdf. Sumathi, M., et al. 2021. Study and Detection of PCOS Related Diseases Using CNN. IOP Conference Series: Materials Science and Engineering 1070. 
Available at https://iopscience. iop.org/article/10.1088/1757-899X/1070/1/012062/meta. Tahhan, A.S., M. Vaduganathan, S.J. Greene, A. Alrohaibani, M. Raad, M. Gafeer, G.C. Fonarow, P.S. Douglas, D.L. Bhatt, and J. Butler. 2020. Enrollment of Older Patients, Women, and Racial/Ethnic Minority Groups in Contemporary Acute Coronary Syndrome Clinical Trials. A Systematic Review. JAMA Cardiology 5(6): E1–E9. Tat, E., D.L. Bhatt, and M.G. Rabbat. 2020. Addressing Bias: Artificial Intelligence in Cardiovascular Medicine. The Lancet 2: e635–e636.
Te-Ping, C. 2020. Women Founders of AI Startups Take Aim at Gender Bias. Wall Street Journal, 29 September 2021. The Royal Society. 2019. Explainable AI: The Basics. Policy Briefing. Available at https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf. Umbrello, S., M. Capasso, M. Balistreri, A. Pirni, and F. Merenda. 2021. Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots. Minds and Machines 31: 395–419. UNESCO. 2020. Artificial Intelligence and Gender Equality. Report available at https://en.unesco. org/AI-and-GE-2020. Van Wynsberghe, A. 2021. Sustainable AI: AI for Sustainability and the Sustainability of AI. AI and Ethics 1: 213–218. Vinuesa, R., H. Azizpour, I. Leite, M. Balaam, V. Dignum, S. Domisch, A. Felländer, S.D. Langhans, M. Tegmark, and F.F. Nerini. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature 11: 233–242. Watson, D., J. Krutzinna, I. Bruce, C. Griffiths, I. McInnes, M. Barnes, and L. Floridi. 2019. Clinical Applications of Machine Learning Algorithms: Beyond the Black Box. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3352454. WHO. 2016. Women’s Health and Well-Being in Europe: Beyond the Mortality Advantage. Report accessible at https://www.euro.who.int/ en/health-topics/health-determinants/gender/publications/2016/ womens-health-and-well-being-in-europe-beyond-the-mortality-advantage-2016.
Smart Control of Drinking Water Grids Using IoT Jalal Dziri and Tahar Ezzedine
Abstract Drinking water distribution systems facilitate carrying potable water from water resources such as lakes, rivers, and water tanks to industrial, commercial, and residential consumers through complex pipe networks. Such systems may be affected by pollution or leaks. Drinking water quality monitoring is essential these days, as the available water is polluted and can cause several diseases. Hence, it is necessary to prevent any intrusion into water distribution systems and to detect pollution immediately. In addition, it is fundamental to detect and locate leaks, which constitute a loss of water, can cause damage to the infrastructure, and can be a source of contamination. In this chapter, we present a detailed solution to provide water management with capabilities such as measuring, sensing, optimizing, and detecting the status of water and supporting infrastructure. First, we start with the detailed architecture of our smart system. Then, we present an adopted monitoring system for water quality analysis based on machine learning. Finally, we develop a distributed algorithm that immediately detects and locates leaks in the water distribution system.

Keywords Water · Quality · Monitoring · Leak detection · WSN
1 Introduction

Liquid water (H2O) seems, at first glance, to be a very simple molecule, consisting of just two hydrogen atoms bonded to an oxygen atom. However, it is an essential chemical component for life. The importance of water in human life continues to grow under the considerable needs of modern civilization. In addition, in much of
the world, the quality of distributed drinking water has become a key factor in public health and economic development. At the same time, water can also be a source of disease. According to a report by the World Health Organization, five million infants and children die each year from diarrheal diseases due to contamination of food or drinking water. In developing countries, about 80% of diseases are linked to poor water supply and sanitation conditions (Agensi et al. 2019; Akinde et al. 2019). Therefore, the consumption of drinking water must be given special attention. Conventional monitoring of water quality involves the manual collection of samples from different points of the water distribution network, which are then sent to strategic laboratories for contaminant tests (Sartory and Watkins 1998; Plummer and Long 2007). In Tunisia, the water treatment station "Ghadir El Golla" uses independent portable detection probes that must be immersed in water sources to detect the various water quality parameters. Physicochemical and microbiological tests are conducted weekly in small villages and at least twice weekly in large cities. However, this traditional approach to water quality control is very inefficient because it is expensive, requires a lot of work, and does not provide real-time results. Continuous monitoring of drinking water quality can leverage wireless sensor technology. A wireless sensor network (WSN) is a self-configuring network of small sensor nodes communicating among themselves using radio signals and deployed to monitor physical or environmental conditions, such as temperature, sound, vibration, or pollutants, and to cooperatively pass their data through the network to the main location or sink where the data can be observed and analyzed (Hou et al. 2018; Du et al. 2018; Li et al. 2014). These wireless systems are populated by resource-constrained nodes, have unreliable communication links, and have low data rates (Alioua et al. 2016). To address this problem, new protocols and algorithms have been specifically designed for the WSN environment. This work involves creating an intelligent system for controlling the quality of distributed drinking water. The system is based on a WSN to detect in real time any contamination in the water distribution network. This work also requires the control of leaks in water pipes, since leaks waste money and pose a danger to public health (Friedman et al. 2005). Contaminants can seep into pipes where water escapes when the pressure drops in the system. The rest of this chapter is organized as follows: In Sect. 2, we present some related works. Section 3 presents our system architecture. The fourth section is devoted to unveiling the proposed system for water quality monitoring; the system presents a new model for water quality analysis based on machine learning. Section 5 describes our leak detection algorithms in the water distribution network. In Sect. 6, we present some results evaluating our contribution. Finally, Sect. 7 gives the conclusion and several perspectives.
2 Related Works

This section reviews the relevant works on monitoring water distribution networks using WSN platforms. Some works have focused on monitoring drinking water quality. Others have been interested in controlling leaks in water pipes.
2.1 Water Quality Monitoring Systems

The use of wireless sensor networks for water quality control is particularly attractive due to the low cost of the sensors, the ability to acquire and process data at multiple distributed sampling points, and the possibility of communicating the data using low-power wireless communication, which allows decision-makers to receive data from multiple remote sensors in real time. In recent years, assistance and research programs have been developed to improve the safety and security of drinking water systems (Pappu et al. 2017; Egri et al. 2011). In (Koditala and Pandey 2018), Koditala et al. highlighted a practical and economical solution to monitor water quality, especially in rural areas. This solution focuses on measuring the quality of water using pH, turbidity, and temperature sensors. An IoT-based solution to monitor the water quality in real time is presented in (Shafi et al. 2018). The proposed system provides remote monitoring of water quality assessment along with water flow control via a mobile application. Four machine learning algorithms, including support vector machine (SVM), k-nearest neighbor (KNN), single-layer neural network, and deep neural network, have been applied for the classification of water quality. Similarly, another case study is presented in (Chen et al. 2018) to monitor water contamination via the implementation of SVM based on the color layout descriptor (CLD) and fast Fourier transform (FFT). In (Usachev et al. 2019), a system that simulates the state of water quality in the Moscow waters was proposed. This system is based on tools for analyzing big data and machine learning. A neural network was trained to classify the state of the reservoir as good or deviant. Another water quality monitoring system based on a wireless sensor network and using solar power is presented in (Yue and Ying 2011). The system consists of a base station and several sensor nodes. The sensor nodes are powered by a solar power module, while the data connection between the nodes and the base station is realized using WSN technology. On the node side, water quality data is collected by different sensors such as pH, oxygen density, and turbidity. Until now, despite the numerous strategies developed for water quality management, there has been a lack of a specific system that can be used to assess the quality of water in real time using all physicochemical and microbiological water parameters. In addition, machine learning classification techniques are generally applied without any data transformation in the database. In the context of Tunisia, we propose a real-time system that monitors the water quality according to physicochemical and
microbiological parameters. The system uses a data aggregation algorithm to improve the performance of the classification algorithms.
2.2 Leak Detection Solutions in the Water Distribution Network

The loss and damage caused by leaks require new techniques and approaches to minimize their negative impact as quickly as possible. As a result, many researchers have devoted their efforts to the development of a wide variety of techniques for detecting and locating leaks. Indeed, a review of the literature and the work applied to leak detection makes it possible to identify two main categories of leak detection systems: static detection and dynamic detection. Although each category can identify and locate leaks, it is not uncommon to use a combination of the two (Romano et al. 2017). These two classes can be defined as follows: Static detection systems are systems that rely on sensors and data collectors which are placed in the water distribution network and which can transmit data periodically to the network management center; this data can be used to identify and locate leaks. Dynamic detection systems are systems that rely on the mobility of leak detection devices to an area where a leak is suspected in order to conduct an investigation. The main distinction between the two classes is that static detection systems can notify the water network management center of the existence of a leak almost immediately, while dynamic detection systems require prior information on the possibility of a leak to be able to conduct an investigation. On the other hand, dynamic detection systems can locate a leak almost immediately under ideal operating conditions, while static detection systems can only locate a leak within a certain area and are also more prone to false alarms. Both classes encompass a wide variety of technologies to provide an accurate leak detection system, but the technologies are not limited to a single class. For example, acoustic technologies can be dynamic and moved from place to place periodically to detect leaks (Hunaidi and Wang 2006), or they can be embedded in the network (El-Zahab et al. 2016). Most of the existing acoustic leak detection techniques rely on external measurements of the sound emitted by the turbulent jet of water escaping the pipe. In (Khulief et al. 2011), Khulief et al. present an experimental investigation that addresses the feasibility and potential of in-pipe acoustic measurements for leak detection. In (Cataldo et al. 2014), three different techniques, namely, time domain reflectometry (TDR), ground-penetrating radar (GPR), and electrical resistivity tomography (ERT), were experimentally tested for water leak detection in underground pipes. A noninvasive method of pressure monitoring has been designed and developed based on force-sensitive resistor (FSR) technology (Sadeghioon et al. 2014). Novel techniques utilizing machine learning and advanced statistical methods have
been recently developed for the detection and approximate location of leaks (Mounce et al. 2011; Ye and Fenner 2011; Romano et al. 2012, 2017). As the works cited do not distinguish between small and large leaks, which are two phenomena with different characteristics, we propose a new contribution which consists in creating a control system for small and large leaks simultaneously.
3 The Proposed System Architecture

Our system represents an information system based on a computer platform and a wireless sensor network that covers the water distribution network. The overall system architecture is shown in Fig. 1. The water distribution system (water storage tanks, pumping station, and treatment centers) is covered by a WSN which is composed of sensor nodes placed in a hierarchical topology and base stations (sink nodes). Each base station communicates with the control center, which comprises a computing platform. The IT platform has the following functionality:
Fig. 1 The overall system architecture (WSN covering the water distribution network, base stations, and a computer platform at the control center handling anomaly control, leak detection, the quality model, supervision, and storage)
• A data collection module: This is an interface that collects data from the sinks of the WSN.
• A visualization module allowing the operator to have a cartographic view of any available data in the system: network modeling, real sensors, virtual sensors, anomalies, and leak detection in water pipes.
• A data management module: This is a mechanism for validation, persistence, subscription management, and data publishing.
• A long-term storage module: The acquired or calculated data are stored in a database for the analysis and calculation of various indicators.

The development of this platform must consider several requirements for its industrialization:

• Scalability, to connect an increasing amount of data from different sources.
• Flexibility, for integration with other applications, especially existing information systems.
• Real-time process management: The platform must be able to execute the different modules in real time.

To control the physicochemical quality of the drinking water, we adopted the WSN architecture presented in Fig. 2. Referring to the technical paper (Waspmote technical guide 2017), we adopted the Libelium smart water sensor shown in Fig. 3. These nodes collect physicochemical drinking water parameters such as pH, temperature, ammonium, nitrate, potassium, turbidity, and conductivity. The collected data are then routed to sink nodes over Zigbee links.
Fig. 2 WSN architecture for water quality monitoring (sensor nodes → Zigbee links → base stations at water treatment centers, tanks, and pumping stations → 4G radio links → control center)
Fig. 3 Libelium smart water sensor
Fig. 4 Flow cytometer for online monitoring of microbial cell number in water
Each sink transmits all the data to the control center using 4G radio links. We adopted a Libelium sink which includes a Zigbee coordinator for communication with the sensor nodes and a 4G modem for communication with the control center. To control the microbiological water parameters, we propose to install in each water tank and pumping station a BactoSense flow cytometer (Wu 2020), as shown in Fig. 4, to detect microbial cell numbers in water. These nodes detect the microbiological parameters of the drinking water such as live dead count (LDC), total cell count (TCC), and intact cell percentage (ICP). The collected data will be transmitted to the control center. In addition, we are interested in detecting leaks in the water pipes of the distribution system. Water pipes are generally installed underground at a depth which is based on the calculation of the depth of frost penetration (e.g., between 2.5 and 3 m) (Water Pipeline Design Guidelines 2004). Thus, we propose to set up a pair of sensors at each junction point, as shown in Fig. 5. The sensor shown in black is designed to detect the water pressure in the pipe, while the second, in gray, is designed to detect soil moisture in the vicinity of the junction point. Humidity sensors are designed to detect small leaks: a small leak does not produce a noticeable variation of the water pressure in the pipe. Water pressure sensors are designed to detect large leaks, which do cause a noticeable pressure variation.
Fig. 5 The overall architecture of the leak detection and localization system based on WSN (humidity and water pressure sensor pairs at pipe junctions, base stations at pumping stations, and 4G links to the control center)
Communication between network nodes is via Bluetooth links. The data will pass from one node to another until it reaches a base station installed in a pumping station or in a water treatment center. Base stations use 4G radio links to transmit data to the control center.
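As a rough illustration of this data path, the sketch below models the kind of reading a junction-point node might produce and its hop-by-hop relay towards a base station. The field names, node identifiers, and route are hypothetical placeholders rather than the chapter's actual payload format, and the Bluetooth and 4G transmissions are only indicated in comments.

```python
from dataclasses import dataclass, asdict
import json
import time

# Hypothetical reading produced by a junction-point sensor pair
# (field names are illustrative, not taken from the chapter).
@dataclass
class JunctionReading:
    node_id: str          # identifier of the junction-point node
    pressure_bar: float   # water pressure measured inside the pipe
    humidity_pct: float   # soil moisture measured near the junction
    timestamp: float      # epoch seconds of the measurement

def forward_to_base_station(reading: JunctionReading, route: list) -> str:
    """Simulate hop-by-hop forwarding along a precomputed route.

    In the real deployment each hop would be a Bluetooth transmission
    and the last hop a 4G uplink; here we only serialise the payload
    and log the path it would take."""
    payload = json.dumps(asdict(reading))
    for hop in route:
        # e.g. bluetooth_send(hop, payload) on a real node
        print(f"relaying {len(payload)} bytes via {hop}")
    # e.g. modem_4g_send(control_center_url, payload) at the base station
    return payload

if __name__ == "__main__":
    r = JunctionReading("J-17", pressure_bar=3.1, humidity_pct=22.5,
                        timestamp=time.time())
    forward_to_base_station(r, route=["J-16", "J-15", "base-station"])
```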
4 A New Model for Water Quality Analysis Based on Machine Learning

The proposed model is based on three phases: data gathering from different sources, data aggregation, and classification using machine learning techniques. Figure 6 shows the structure of our model.
4.1 Data Gathering

Data gathering for experimentation is an important task because system performance depends on data accuracy. In our work, we used the database of the water treatment station "Ghadir El Golla" of Tunis, Tunisia.
Fig. 6 Proposed model for water quality analyses (data gathering → data aggregation → classification with machine learning)
Table 1 Average with standard error values of some physicochemical and microbiological water parameters

Water quality parameter | Safe range (Tunisian standard) | Measured value
pH | 6.5–8.5 | 8.3 ± 0.56
Temperature (°C) | Not defined | 32 ± 2.57
Free residual chlorine (mg/l) | 0.2–0.6 | 0.42 ± 0.35
Arsenic (μg/l) | 10 | 7 ± 2.8
Nickel (μg/l) | 70 | 28 ± 3.98
Turbidity (NTU) | 3 | 2 ± 0.47
Calcium (mg/l) | 200 | 144.38 ± 13.5
Magnesium (mg/l) | 100 | 20.45 ± 11.56
Nitrate (mg/l) | 45 | 31.8 ± 1.25
Escherichia coli (CFU/100 ml) | 0 | 0 ± 0.13
Intestinal enterococci (CFU/100 ml) | 0 | 0 ± 0.17

Colony-forming unit (CFU) is a measure of viable bacterial cells
The database includes real measurements of the physicochemical and microbiological quality of the distributed water. It consists of 38 physicochemical and microbiological water quality parameters and 103 records for the year 2018. Table 1 illustrates the average with standard error values of some physicochemical and microbiological water parameters.
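To show how such records could feed the later classification step, the following sketch labels a few illustrative records as compliant or deviant against some of the Tunisian-standard ranges of Table 1. The column names, the example values, and the labelling rule are assumptions made for this example only; the real database holds 38 parameters and 103 records.

```python
import pandas as pd

# Tiny illustrative excerpt; names and values are placeholders, not the
# Ghadir El Golla station's actual schema.
records = pd.DataFrame({
    "ph":                     [8.3, 7.9, 9.1],
    "free_residual_chlorine": [0.42, 0.55, 0.10],
    "turbidity":              [2.0, 1.4, 3.6],
    "nitrate":                [31.8, 28.0, 47.2],
})

# Safe ranges (min, max) for a few parameters, following Table 1.
SAFE = {
    "ph": (6.5, 8.5),
    "free_residual_chlorine": (0.2, 0.6),
    "turbidity": (0.0, 3.0),
    "nitrate": (0.0, 45.0),
}

def within_standard(row) -> int:
    """1 if every monitored parameter is inside its Tunisian-standard range."""
    return int(all(lo <= row[col] <= hi for col, (lo, hi) in SAFE.items()))

records["label"] = records.apply(within_standard, axis=1)
print(records)   # the third record violates several ranges -> label 0
```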
4.2 Data Aggregation

Consider S = {s_i : i = 1…n}, a set of source nodes placed in a hierarchical topology around a sink. This assembly can be housed in a water storage tank, in a pumping station, or in a treatment center. Since these resources have homogeneous characteristics, the measurements collected have almost homogeneous distributions. All nodes are synchronized. In each time interval Δk, each node can measure an amount of information v_i^k.
J. Dziri and T. Ezzedine
For p parameters of drinking water quality (pH, residual chlorine, turbidity, etc.) to be measured in the time interval Δk, each sensor s_i has a set of measurements V_i^k = {u_{ikj} : j = 1…p}. During a window whose size is defined in the control center, each node performs m measurements of drinking water quality. At the end of each window, each node will have the amount of information V_i:

\[ V_i = \{ v_i^k : k = 1 \dots m \} = \{ u_{ikj} : k = 1 \dots m;\; j = 1 \dots p \} = \begin{pmatrix} u_{11} & \cdots & u_{1p} \\ \vdots & & \vdots \\ u_{m1} & \cdots & u_{mp} \end{pmatrix} \tag{1} \]
Each node must locally execute an aggregation algorithm which groups similar lines together to obtain a matrix W_i of dimension (l, p), where l ≤ m, and transmit it to the sink node:

\[ W_i = \{ w_{ikj} : k = 1 \dots l;\; j = 1 \dots p \} = \begin{pmatrix} w_{11} & \cdots & w_{1p} \\ \vdots & & \vdots \\ w_{l1} & \cdots & w_{lp} \end{pmatrix} \tag{2} \]
At the sink node, the dataset of the different sources is represented by W.
\[ W = \bigcup_{i=1}^{n} W_i \tag{3} \]
The sink node also executes the aggregation algorithm on the dataset W to obtain a matrix W′:

\[ W' = \bigcup_{i=1}^{q} W_i, \quad q \le n \tag{4} \]
This aggregation method minimizes the energy consumption of the sources and minimizes the network load.
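A minimal sketch of the node-side aggregation step is given below. The chapter does not specify how "similar lines" are identified, so the grouping rule (an absolute tolerance on every parameter) and the use of the group mean as the representative row are illustrative assumptions.

```python
import numpy as np

def aggregate(V: np.ndarray, tol: float = 0.05) -> np.ndarray:
    """Group similar rows of the window matrix V (m x p) and return the
    reduced matrix W (l x p), l <= m, as in Eqs. (1)-(2).

    Two rows are treated as 'similar' when all their entries differ by
    less than `tol`; each group is replaced by its mean row. The
    similarity rule and tolerance are illustrative assumptions."""
    groups = []                      # list of lists of row indices
    for i, row in enumerate(V):
        for g in groups:
            if np.all(np.abs(V[g[0]] - row) < tol):
                g.append(i)
                break
        else:
            groups.append([i])
    return np.vstack([V[g].mean(axis=0) for g in groups])

# Example: 5 measurements of p = 3 parameters (pH, chlorine, turbidity)
V_i = np.array([[7.20, 0.41, 1.90],
                [7.21, 0.42, 1.91],
                [7.19, 0.40, 1.88],
                [8.90, 0.10, 4.50],   # clearly different condition
                [7.20, 0.41, 1.90]])
W_i = aggregate(V_i)
print(W_i.shape)   # (2, 3): the four similar rows collapse into one
```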
4.3 Classification with Machine Learning

The main objective of machine learning (ML) research is to learn automatically how to recognize complex patterns and make intelligent decisions based on data. ML has a wide range of applications, namely, search engines, medical diagnosis, text and handwriting recognition, image screening, load forecasting, marketing, sales diagnosis, etc. In 1994, ML was used for the first time in Internet flow classification in the context of intrusion detection (Frank 1994).
Fig. 7 Machine learning organizational chart (supervised learning: classification with SVM, DT, and KNN, and regression; unsupervised learning)
This was the starting point for several works using ML techniques in Internet traffic classification. Decision trees (DT) are among the most commonly used supervised learning algorithms in intrusion detection systems (Amor et al. 2004) due to their simplicity, high detection accuracy, and fast adaptation. Besides the popular decision trees, support vector machines (SVMs) are also a good candidate for intrusion detection systems (Ambwani 2003), as they can provide real-time detection capability and deal with the large dimensionality of data. Also, KNN is one of the most widely used algorithms in pattern evaluation, text characterization, and cancer diagnosis. It is one of the simplest and most fundamental classification methods. In Fig. 7, a machine learning organizational chart is presented.

4.3.1 Decision Tree Algorithms

Decision trees correspond to a set of algorithms that have been widely used for many years as part of supervised learning (Mitchell 1997). These algorithms, in addition to being effective in many problems, produce a decision-making process that can be easily exploited by a human. Another advantage is that each decision rule exploits only one attribute at a time. The decision tree can, therefore, use only a subset of the initial attributes and be less sensitive to the addition of irrelevant attributes. The problem is then to define a methodology allowing, at each stage of the construction of the tree, the choice of the most relevant attribute and of the separation threshold realizing one of the dichotomies. The methodology differs according to
the quality criterion q used to identify the most discriminating attribute (entropy measurement, impurity measurement, etc.). Let N be a node in a decision tree that separates a set of examples Z (the training data) into two sets of examples Z_d^+ and Z_d^-, based on a threshold a and an attribute i. We denote the quality variation associated with this decision as follows:
$$\Delta_N(Z, i, a) = q(Z) - P(x_i \ge a \mid Z)\, q(Z_{d+}) - P(x_i < a \mid Z)\, q(Z_{d-}) \tag{5}$$
The selection of the optimal decision rule (i*, a*) consists of choosing the one that maximizes (5). The decision tree is usually constructed by recursively applying criterion (5) to the two subtrees produced by the preceding rule. The main disadvantages of decision tree algorithms are the high probability of overfitting and the fact that the calculations can become complex when there are many class labels. 4.3.2 Support Vector Machine Support vector machines or SVMs are derived directly from Vapnik’s work in statistical learning theory (Vapnik 1999; Boser et al. 1992). SVM is a binary supervised classification method that was introduced in 1992. Subsequently, it was extended to problems of regression, density estimation, and unsupervised classification. Since 1995, research has been very prolific in the study of SVM-based methods (Vapnik 1995; Platt 1999; Joachims 2001), both in practice and theory, and many books on SVM have been published (Cristianini and Shawe-Taylor 2000; Herbrich 2001; Abe 2005). The advantage of creating a decision function with the SVM algorithm is that the solution produced corresponds to the optimum of a convex function. One disadvantage of SVM is the long duration of the training phase. Another is the complexity of the decision function produced when the learning base is large. 4.3.3 KNN KNN uses the standard Euclidean distance (Sun et al. 2009) to measure the variation between training and test instances. The standard Euclidean distance d(x_i, x_j) is defined as:
$$d(x_i, x_j) = \sqrt{\sum_{r} \bigl(a_r(x_i) - a_r(x_j)\bigr)^2} \tag{6}$$
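For readers who wish to experiment with these three classifiers, the sketch below trains a decision tree, a linear SVM, and a KNN classifier with scikit-learn on a synthetic stand-in for a water-quality dataset and an 80/20 train/test split. It is only an illustration: the experiments reported later in this chapter were run with the MATLAB Machine Learning Toolbox, and the data here are randomly generated.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for a water-quality dataset: p = 3 features (e.g. pH,
# residual chlorine, turbidity) and a binary potable / non-potable label.
X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # 80% training, 20% testing

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="linear"),
    "KNN": KNeighborsClassifier(n_neighbors=5),  # Euclidean distance by default
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "accuracy:", model.score(X_test, y_test))
```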
5 The Proposed Leak Detection Algorithms in the Water Distribution Network 5.1 The Small Leaks Control Algorithm In our system, the transmission of moisture measurements from the sources is done periodically after a window size of 5 min. This mode of transmission is similar to the case in (Stoianov et al. 2007). $H_i^{u,k} = \{h_{i,j}^{u,k} : j = 1,\ldots,m\}$ represents the m moisture measurements at the upper node N_i during a window Δk. $H_i^{l,k} = \{h_{i,j}^{l,k} : j = 1,\ldots,m\}$ represents the m moisture measurements at the lower node N_i during a window Δk. The aggregation algorithm is then applied, which consists of grouping similar values together and eliminating the values due to measurement errors. We will have:
$$H_i^{u,k} = \{h_{i,j}^{u,k} : j = 1,\ldots,p\}, \quad p \le m \tag{7}$$

$$H_i^{l,k} = \{h_{i,j}^{l,k} : j = 1,\ldots,q\}, \quad q \le m \tag{8}$$
For each pair of humidity sensors:
If $H_i^{l,k} \le H_i^{u,k} + \delta$, then it is not a leak related to the water pipeline (it may be rain or irrigation). If $H_i^{l,k} > H_i^{u,k} + \delta$, then it is a leak. In this case, the node N_i (the node attached to the water pipe) transmits an alert message to neighboring nodes indicating the presence of a leak. The value of δ is defined in the control center. The warning message is passed from one node to another until it reaches the base station. The base station is responsible for transmitting alerts to the control center using 4G radio links.
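A minimal sketch of this pairwise test is given below. The aggregation of each window is reduced here to a simple median, and the exact form of the comparison against δ is an assumption; the rule only requires the pipe-level node to be markedly more humid than the near-surface node before a leak is declared.

```python
from statistics import median

DELTA = 5.0  # humidity threshold delta, defined by the control center (assumed value)

def small_leak_detected(upper_window, lower_window, delta=DELTA):
    """Compare the aggregated humidity of the upper (near-surface) node and the
    lower node attached to the pipe over one window of m measurements."""
    h_upper = median(upper_window)   # stands in for the aggregated H_i^{u,k}
    h_lower = median(lower_window)   # stands in for the aggregated H_i^{l,k}
    # Surface wetter than (or as wet as) the pipe level -> rain or irrigation.
    # Pipe level wetter than the surface by more than delta -> leak alert.
    return (h_lower - h_upper) > delta

if small_leak_detected([32, 33, 31, 32], [45, 46, 44, 47]):
    print("leak alert: forward message towards the base station")
```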
5.2 The Large Leaks Control Algorithm In a window Δk whose size is defined by the control center, each node N_i performs m water pressure measurements. $P_i^{k} = \{p_{i,j}^{k} : j = 1,\ldots,m\}$ is the vector representing the water pressure measurements at node N_i during the time interval Δk. $P_i^{k+1} = \{p_{i,j}^{k+1} : j = 1,\ldots,m\}$ is the vector representing the water pressure measurements at node N_i during the time interval Δk+1. Each node executes the aggregation algorithm described above. We will have the following quantities of information:
$$P_i^{k} = \{p_{i,j}^{k} : j = 1,\ldots,p\}, \quad p \le m \tag{9}$$
$$P_i^{k+1} = \{p_{i,j}^{k+1} : j = 1,\ldots,q\}, \quad q \le m \tag{10}$$
If d(P_i(k), P_i(k + 1)) > ε and a remarkable increase in humidity is observed, then node N_i signals the presence of a large leak, where d represents the Euclidean distance between P_i(k) and P_i(k + 1).
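The large-leak rule can be sketched as follows; the numerical values of ε and of the humidity-increase criterion, as well as the use of equal-length windows for the distance computation, are assumptions made for illustration.

```python
import numpy as np

EPSILON = 8.0          # pressure-distance threshold epsilon (assumed value)
HUMIDITY_JUMP = 10.0   # "remarkable increase in humidity" (assumed value)

def large_leak_detected(p_k, p_k1, h_k, h_k1,
                        eps=EPSILON, humidity_jump=HUMIDITY_JUMP):
    """p_k, p_k1: pressure measurements of node N_i in windows k and k+1
    (equal length assumed here so the Euclidean distance is defined);
    h_k, h_k1: mean humidity observed in the same two windows."""
    pressure_shift = np.linalg.norm(np.asarray(p_k, float) - np.asarray(p_k1, float))
    return pressure_shift > eps and (h_k1 - h_k) > humidity_jump

# A sudden pressure drop combined with a humidity jump triggers the alert.
print(large_leak_detected([70, 71, 70, 69], [55, 54, 53, 52], h_k=30, h_k1=48))
```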
6 Experimentation 6.1 Water Quality Evaluation The designed system is evaluated using the MATLAB Machine Learning Toolbox on the standard dataset from the Tunisian treatment station “Ghadir El Golla.” The dataset is divided into three parts, namely, the full dataset, the half dataset, and the 1/4 (quarter) dataset. Error rate (ERR) and accuracy (ACC) are the most common and intuitive measures derived from the confusion matrix (Shaer et al. 2019). The error rate (ERR) is calculated as the number of incorrect predictions divided by the total number of instances in the dataset. The best error rate is 0.0, whereas the worst is 1.0.

$$\mathrm{ERR} = \frac{FP + FN}{P + N} \tag{11}$$
Accuracy, precision, and recall are used as evaluation metrics. 6.1.1 Accuracy Evaluation Accuracy (ACC) is computed as the total number of correct predictions, true positive (TP) + true negative (TN), divided by the total number of instances in the dataset (positive (P) + negative (N)).

$$\mathrm{ACC} = \frac{TP + TN}{P + N} = 1 - \mathrm{ERR} \tag{12}$$
Figure 8 shows that, in the case of the full samples, there is only a slight difference between the three techniques, whereas for 1/4 of the samples the linear SVM offers better accuracy. 6.1.2 Precision Evaluation Precision is computed as “the number of correct positive predictions (TP) divided by the total number of positive predictions (TP + FP).” Precision is also known as positive predictive value.
$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{13}$$
Fig. 8 Accuracy of DT, SVM, and KNN (80% training and 20% testing)

Fig. 9 The precision of DT, SVM, and KNN (80% training and 20% testing)
Figure 9 shows that the linear SVM performs better compared with DT and KNN. For the 1/4 samples, SVM achieves a precision of up to 98%. These results confirm why SVM is regarded in the literature as a leading classification technique for small databases. They also illustrate the disadvantage of SVM, namely the complexity of the decision function produced when the learning base is large. As a result, we have integrated a data aggregation method at the source and at the sinks to minimize the size of the database.
6.1.3 Recall Evaluation The recall is the ratio of correct positive predictions to the total number of positive examples.

$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{14}$$
The recall of DT, SVM, and KNN with 80% training and 20% testing is shown in Fig. 10. On the full data samples, the recall of SVM outperforms those of DT and KNN, whereas the recalls of SVM and DT are almost identical on the 1/4 samples.
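The four metrics of Eqs. (11)–(14) can be recomputed directly from the confusion matrix of any fitted binary classifier, as in the sketch below; scikit-learn's helpers are used only as a cross-check of the hand-computed values, and the toy labels are illustrative.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score

def report(y_true, y_pred):
    """Compute ERR, ACC, precision, and recall (Eqs. 11-14) from predictions."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    err = (fp + fn) / (tp + tn + fp + fn)      # Eq. (11)
    acc = (tp + tn) / (tp + tn + fp + fn)      # Eq. (12), equals 1 - err
    precision = tp / (tp + fp)                 # Eq. (13)
    recall = tp / (tp + fn)                    # Eq. (14)
    # Cross-check the hand-computed values against scikit-learn's helpers.
    assert np.isclose(acc, accuracy_score(y_true, y_pred))
    assert np.isclose(precision, precision_score(y_true, y_pred))
    assert np.isclose(recall, recall_score(y_true, y_pred))
    return err, acc, precision, recall

# Toy example: 1 = non-potable, 0 = potable.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
print(report(y_true, y_pred))
```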
6.2 Leak Detection Evaluation The simulation tool used in our experimental work is EPANET (Rossman 2000). EPANET is a simulator designed specifically to evaluate the metrics of water distribution networks (flow, pressure, quality, etc.). We used the topology presented in Fig. 11 to assess the evolution of water pressure over time and to evaluate our leak detection method in the water distribution network. 6.2.1 Evaluation of the Water Pressure Evolution in Adjacent Pipes In the first stage of the experiment, we studied the evolution of the water pressure at the adjacent nodes. Figure 12 shows the evolution of the pressure at junctions 2, 3, 4, 5, and 6 as a function of the simulation time.
Fig. 10 Recall of DT, SVM, and KNN (80% training and 20% testing)
Fig. 11 Water distribution network topology
Fig. 12 Evolution of pressures in adjacent pipes
Fig. 13 Water pressure evolution as a function of the simulation time. (a) Contour plot: pressure at 3:00 h (junction without leak). (b) Contour plot: pressure at 3:05 h (junction with leak). Pressure contour scale: 25.00, 50.00, 75.00, 100.00 psi
The curves in Fig. 12 reveal a remarkable similarity in the variation of water pressures across adjacent pipes. These variations show that, in a Δk window of 5 min where there are no leaks, the pressure measurements vary only slightly. This allows us to estimate the value of ε described in the previous section (one way to derive ε from such leak-free windows is sketched at the end of this section). 6.2.2 Leak Detection Evaluation To evaluate our leak detection system, we injected random pressure values and then, at the fifth window (25 min), we injected a low-pressure value to simulate the case of a large leak. Figure 13 presents the evolution of water pressure, at the junction nodes, as a function of the simulation time. In the first measurement windows, Fig. 13a shows stationary water pressure values during 3 h of measurement. At 3:05 h, as shown in Fig. 13b, a remarkable pressure variation was detected at junction 2 of the water distribution network. This corresponds to a large leak.
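One possible way to turn this observation into a concrete value for ε is sketched below: the threshold is set slightly above the largest window-to-window pressure distance observed during leak-free operation. Both the safety margin and the use of the maximum rather than a high percentile are assumptions for illustration.

```python
import numpy as np

def estimate_epsilon(leak_free_windows, margin=1.2):
    """leak_free_windows: consecutive pressure windows of equal length recorded
    when no leak is present. Returns a threshold epsilon slightly above the
    largest window-to-window Euclidean distance seen under normal operation."""
    windows = [np.asarray(w, dtype=float) for w in leak_free_windows]
    distances = [np.linalg.norm(a - b) for a, b in zip(windows, windows[1:])]
    return margin * max(distances)

# Example: pressure traces similar to the leak-free simulation output of Fig. 12.
windows = [[70.1, 70.3, 69.8, 70.0], [70.2, 70.0, 69.9, 70.1], [69.9, 70.2, 70.0, 70.3]]
print(estimate_epsilon(windows))
```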
7 Conclusion and Perspectives This paper has presented several technical advances. • We proposed a detailed study of the system architecture. Our system is based on a wireless sensor network in collaboration with a computer platform.
• A new model for water quality analysis was presented. This model is based on three phases: data gathering, data aggregation, and classification with machine learning techniques. In the first phase, a database that includes real water quality measurements from the water treatment station of “Ghadir El Golla” in Tunis, Tunisia, was recovered. In the second phase, considering the homogeneity of the assets in our system, we proposed a data aggregation method to minimize the quantities of information transmitted by the source nodes to the sink. This method increases the lifetime of the sources and minimizes the network load. In the third phase, we started by studying the best-known classification algorithms in the literature, namely, Decision Tree, SVMs, and KNN. The advantages and disadvantages of each technique were then developed in detail, and an evaluation of the accuracy, precision, and recall of these classification algorithms was presented. The experimentation results gave good evidence of the classification techniques’ performance. In addition, we found that linear SVM is adequate for our application when the data aggregation method is applied. • Leak detection algorithms in the water distribution system were developed and tested. We first reviewed a list of existing technologies designed to control leaks in water pipes. Next, we presented a detailed architecture of our system for detecting and locating leaks in a water distribution network based on a network of underground wireless sensors. We then developed an algorithm for detecting small and large leaks in distribution pipes. The experimentation results demonstrate the effectiveness of the proposed algorithms. Our work has some limitations: our aggregation method only groups similar data packets, and other coding methods could be applied to further reduce the amount of data transmitted by the sources. In addition, our leak detection algorithms are reactive; they cannot anticipate leaks before they affect the water pipes. As future work, we have identified several directions: • Integrate a new algorithm to predict the quality of pipes in the water distribution network. • Propose a network coding method to minimize the amount of data transmitted by the sources.
References Abe, Shigeo. 2005. Support Vector Machines for Pattern Classification. Vol. 2. London: Springer. Agensi, Alexander, et al. 2019. Contamination Potentials of Household Water Handling and Storage Practices in Kirundo Subcounty, Kisoro District, Uganda. Journal of Environmental and Public Health 2019: 1. Akinde, Sunday Babatunde, Janet Olubukola Olaitan, and Temitope Fasunloye. 2019. Water Shortages and Drinking Water Quality in Rural Southwest Nigeria: Issues and Sustainable Solutions. Pan African Journal of Life Sciences 2 (May):85–93 Alioua, Nawel, et al. 2016. USR: Uniform Stress Routing Protocol for Constrained Networks. In 2016 IEEE 5th Global Conference on Consumer Electronics. IEEE.
Ambwani, Tarun. 2003. Multi Class Support Vector Machine Implementation to Intrusion Detection. In Proceedings of the International Joint Conference on Neural Networks, 2003, vol. 3. IEEE. Amor, Nahla Ben, Salem Benferhat, and Zied Elouedi. 2004. Naive Bayes vs Decision Trees in Intrusion Detection Systems. In Proceedings of the 2004 ACM Symposium on Applied Computing. ACM. Boser, Bernhard E., Isabelle M. Guyon, and Vladimir N. Vapnik. 1992. A Training Algorithm for Optimal Margin Classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory. Cataldo, A., et al. 2014. Time Domain Reflectometry, Ground Penetrating Radar and Electrical Resistivity Tomography: A Comparative Analysis of Alternative Approaches for Leak Detection in Underground Pipes. NDT & E International 62: 14–28. Chen, Qi, et al. 2018. Real-Time Learning-Based Monitoring System for Water Contamination. In 2018 4th International Conference on Universal Village (UV). IEEE. Cristianini, Nello, and John Shawe-Taylor. 2000. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge: Cambridge University Press. Du, Rong, et al. 2018. The Sensable City: A Survey on the Deployment and Management for Smart City Monitoring. IEEE Communications Surveys & Tutorials 21 (2): 1533–1560. Egri, Angela, et al. 2011. Intelligent Control and Monitoring of Drinking Water Distribution System. In Annals of DAAAM & Proceedings, 629–631. El-Zahab, Samer, et al. 2016. Collective Thinking Approach for Improving Leak Detection Systems. Smart Water 2 (1): 1–10. Frank, Jeremy. 1994. Machine Learning and Intrusion Detection: Current and Future Directions. In Proceedings of the 17th National Computer Security Conference. Friedman, M., L. Radder, S. Harrison, D. Howie, M. Britton, G. Boyd, H. Wang, R. Gullick, M. LeChevallier, D. Wood, and J. Funk. 2005. Verification and Control of Pressure Transients and Intrusion in Distribution Systems [Project #2686]. ISBN: 1843398966 AwwaRF Report Series. Herbrich, Ralf. 2001. Learning Kernel Classifiers: Theory and Algorithms. MIT Press. Hou, Liqun, et al. 2018. Thermal Energy Harvesting WSNs Node for Temperature Monitoring in IIoT. IEEE Access 6: 35243–35249. Hunaidi, Osama, and Alex Wang. 2006. A New System for Locating Leaks in Urban Water Distribution Pipes. Management of Environmental Quality: An International Journal 17: 450. Joachims, Thorsten. 2001. Estimating the Generalization Performance of a SVM Efficiently. No. 2001, 20. Technical Report. Khulief, Y.A., et al. 2011. Acoustic Detection of Leaks in Water Pipelines Using Measurements Inside Pipe. Journal of Pipeline Systems Engineering and Practice 3 (2): 47–54. Koditala, Nikhil Kumar, and Purnendu Shekar Pandey. 2018. Water Quality Monitoring System Using IoT and Machine Learning. In 2018 International Conference on Research in Intelligent and Computing in Engineering (RICE). IEEE. Li, Peng, et al. 2014. Wireless Sensing and Vibration Control with Increased Redundancy and Robustness Design. IEEE Transactions on Cybernetics 44 (11): 2076–2087. Mitchell, Tom. 1997. Machine Learning, 870–877. Mounce, Stephen R., Richard B. Mounce, and Joby B. Boxall. 2011. Novelty Detection for Time Series Data Analysis in Water Distribution Systems Using Support Vector Machines. Journal of Hydroinformatics 13 (4): 672–686. Pappu, Soundarya, et al. 2017. Intelligent IoT Based Water Quality Monitoring System. International Journal of Applied Engineering Research 12 (16): 5447–5454. Platt, John. 1999. 
Fast Training of Support Vector Machines Using Sequential Minimal Optimization. In Advances in Kernel Methods-Support Vector Learning, 185–208. Cambridge: AJ/MIT Press. Plummer, Jeanine D., and Sharon C. Long. 2007. Monitoring Source Water for Microbial Contamination: Evaluation of Water Quality Measures. Water Research 41 (16): 3716–3728.
Romano, Michele, Zoran Kapelan, and Dragan A. Savić. 2012. Automated Detection of Pipe Bursts and Other Events in Water Distribution Systems. Journal of Water Resources Planning and Management 140 (4): 457–467. Romano, Michele, Kevin Woodward, and Zoran Kapelan. 2017. Statistical Process Control Based System for Approximate Location of Pipe Bursts and Leaks in Water Distribution Systems. Procedia Engineering 186: 236–243. Rossman, L.A., 2000, EPANET 2 Users Manual, EPA/600/R-00/057, National RiskManagement Research Laboratory, U.S. Environmental Protection Agency, Cincinnati, OH. Sadeghioon, Ali, et al. 2014. SmartPipes: Smart Wireless Sensor Networks for Leak Detection in Water Pipelines. Journal of Sensor and Actuator Networks 3 (1): 64–78. Sartory, David P., and John Watkins. 1998. Conventional Culture for Water Quality Assessment: Is There a Future? Journal of Applied Microbiology 85 (S1): 225S–233S. Shaer, Lama, Rouwaida Kanj, and Rajiv Joshi. 2019. Data Imbalance Handling Approaches for Accurate Statistical Modeling and Yield Analysis of Memory Designs. In 2019 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE. Shafi, Uferah, et al. 2018. Surface Water Pollution Detection Using Internet of Things. In 2018 15th International Conference on Smart Cities: Improving Quality of Life Using ICT & IoT (HONET-ICT). IEEE. Stoianov, Ivan, et al. 2007. Pipeneta Wireless Sensor Network for Pipeline Monitoring. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks. Sun, Bo, Junping Du, and Tian Gao. 2009. Study on the Improvement of K-Nearest-Neighbor Algorithm. In 2009 International Conference on Artificial Intelligence and Computational Intelligence, vol. 4. IEEE. Usachev, V.A., et al. 2019. Neural Network Using to Analyze the Results of Environmental Monitoring of Water. In 2019 Systems of Signals Generating and Processing in the Field of on Board Communications. IEEE. Vapnik, Vladimir N. 1995. The Nature of Statistical Learning Theory, New York, NY, USA: Springer-Verlag: 167–175 Vapnik, Vladimir. 1999. The Nature of Statistical Learning Theory. Springer Science & Business Media. Waspmote Technical Guide. Document Version: v7.2- 07/2017. Water Pipeline Design Guidelines. April 2004, EPB 276. Wu, Hao. 2020. Assessment of Using Low-frequency Ultrasound Device for Domestic Drinking Water Disinfection. CIE5050-09 Additional Graduation Thesis. Ye, Guoliang, and Richard Andrew Fenner. 2011. Kalman Filtering of Hydraulic Measurements for Burst Detection in Water Distribution Systems. Journal of Pipeline Systems Engineering and Practice: 2(1): 14–22. Yue, Ruan, and Tang Ying. 2011. A Water Quality Monitoring System Based on Wireless Sensor Network & Solar Power Supply. In 2011 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems. IEEE.
Algorithmic Art and Cultural Sustainability in the Museum Sector Giulia Taurino
Abstract While most Western museums contain art objects, relics and memorabilia from a variety of cultures, there is still a considerable bias in the way artifacts are defined as culturally significant, selected for exhibition, digitized, and complemented with metadata. In turn, biased datasets and non-representative samples stand at the core of an ever-growing techno-cultural issue that affects algorithmic culture, raising concerns for discriminatory practices in the application of artificial intelligence. This chapter suggests a viable path towards cultural sustainability by asking how algorithmic art can help us frame sustainable futures. It argues that promoting diversity in algorithmic design through creative practices might have a positive impact on fostering inclusive innovations in ethical AI and cultural heritage preservation. To show how AI can be positively integrated in museum institutions in coexistence with traditional curatorial practices, the first part of the paper tackles existing studies on cultural sustainability in the museum sector. More specifically, it considers a series of studies exploring theoretical and empirical approaches to sustainable development in museums. Through a literature review, I demonstrate how a sustainable cultural development was proved to be correlated to the overall sustainability framework – social, environmental, and economic. The second part complements the evidence presented by previous research with a close reading observation of new methodologies brought by the introduction of AI-based practices in museum settings. By focusing on experimental museology projects conducted in collaboration with art institutions, the chapter finally discusses the role of algorithmic art in challenging biased standards in cultural and tech industries and supporting sustainability. Keywords SDGs · Ethical AI · Digital archives · Museums · Experimental museology · Algorithmic art
G. Taurino (*) Institute for Experiential AI, Northeastern University, Boston, MA, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_18
1 Introduction: Cultural Datasets, Cultural Algorithms While most Western museums contain art objects, relics and memorabilia from a variety of cultures, there is still a considerable bias in the way artifacts are defined as culturally significant, selected for exhibition, digitized, and complemented with metadata. In turn, biased datasets and non-representative samples stand at the very core of an ever-growing techno-cultural issue that is spreading in algorithmic culture, with rising concerns for discriminatory practices in the application of artificial intelligence (AI). Both in the cultural sector and in the tech industry, a large body of scholarship grounded in feminist epistemology and critical race theory has stressed the need to halt the perpetuation of uneven power dynamics and advocate in favor of a more sustainable development (O’Neil 2016; Noble 2018; Buolamwini and Gebru 2018; Benjamin 2019; Costanza-Chock 2020; D’Ignazio and Klein 2020; Crawford 2021). Despite Eun Seo Jo and Timnit Gebru’s (2020) invitation to follow the lessons of archival studies in finding inclusive and transparent options for collecting sociocultural data in machine learning, most archives in libraries and museums remain contested sites where power manifests itself in the form of historical biases inherited from discriminatory cultural practices, social inequalities and institutional hierarchies. To tackle the ethical commitments that come with record-keeping, researchers, media scholars and curators have suggested counter-archival approaches that go beyond the definition of mission statements and participatory practices in archival settings, to promote a creative movement that “counteracts” partial archival histories (Kashmere 2010). As Brett Kashmere outlines, “in this formulation, the ‘counter-archive’ represents an incomplete and unstable repository, an entity to be contested and expanded through clandestine acts, a space of impermanence and play. Taken as an action, the term entails mischief and imagination, challenging the record of official history. Employed as an artistic strategy it pushes our archival impulse into new territories, encouraging critique and material alteration/fabrication, and emboldening anarchivism” (ibidem, online). If archives can offer relevant examples of compliance frameworks for gathering available sociocultural information, counter-archives provide us with “a form of recollection of that which has been silenced and buried” (Merewether 2006). Accounting for both archival and counter-archival practices, this chapter argues that the implementation of research-creation, art-based, counter-methodologies in museums can lead to positive outcomes for cultural, social, and technological sustainability. An ethical, collaborative, regulated approach to data collection, management, and use is indeed at the basis of fair, transparent, responsible AI (Leavy et al. 2021). However, the lack of consistent understanding of the operational and industrial life cycle of most commonly deployed algorithms in machine learning (ML) poses obstacles to the approval of targeted laws and guidelines. As concepts like explainability and accountability gain more and more relevance in the public debate around AI, it is still unclear how to take practical measures to overcome the barriers
created by black-box algorithms and opaque AI models. Among other solutions, algorithmic art has been implemented in several projects hosted by cultural organizations as an educational tool that renders computational operations more accessible to non-technical audiences and prompts citizens to regain agency over AI. Moreover, algorithmic art projects have served as gateways for further interdisciplinary exchange between the humanities and computer sciences on how to make algorithms cultural, rather than making culture algorithmic. The term “cultural algorithms” (Reynolds 1994, 2020) evokes a series of studies in evolutionary programming that were originally inspired by the theory of human cultural evolution (Maheri et al. 2021). In Robert G. Reynolds’ definition, “cultural algorithms are computational models of complex cultural systems” (Reynolds et al. 2015: 1876). Here, the term is re-introduced in the context of social sciences and humanities to broadly address conceptual and methodological frameworks equipped with the fundamentals of both cultural and computational studies. Drawing upon the notion of algorithms as culture (Seaver 2017) and cultural artifacts, this paper investigates the ways algorithmic art can help us frame sustainable cultural futures in parallel with socio-technical change. By providing an overview of computational initiatives in museum archives, the research presented here looks at creative coding as a way to improve diversity of practices and objectives in algorithmic design, while also fostering a more inclusive approach to the preservation of cultural heritages, beliefs, and traditions in museums. In order to show how artificial intelligence can be positively integrated in heritage institutions in a state of coexistence with traditional curatorial practices, the first part of the chapter will summarize existing articles on the impacts of cultural sustainability in European countries. More specifically, I will consider a series of studies that explore theoretical and empirical approaches to sustainable development in museum settings. Through a literature review, I will observe how a sustainable development in the management of historical records is correlated with the overall sustainability framework – be it social, environmental, or economic sustainability. In the second part, I will complement the evidence from previous research with a close-reading observation of the new methodologies brought by the introduction of machine learning applications in the GLAM sector. After focusing on a small corpus of projects in experimental museology conducted in collaboration with art institutions, academic labs and industrial partners starting from digitized collections datasets, I will evaluate the role of algorithmic art in challenging normative canons and standards in art histories. In addition to addressing the topic of cultural sustainability in museums, these examples will demonstrate how creative coding in algorithmic design can be used to expose the biases of the tech industry and compensate for the shortcomings of most widespread AI-based systems. Building upon a series of UN reports, guides, and policy briefs, this paper ultimately re-centers the debate about AI and sustainable development around cultural variability and creativity as the core principles for supporting technological advancements, institutional decentralization, and societal resilience.
2 Sustainable Development and AI-Based Technologies Closely tied to geopolitical circumstances, the theme of socio-economic development has been at the center of United Nations’ concerns since the beginning of its operations, going through several adjustments, repositionings, transformations in accordance with contextual implications for minorities and vulnerable groups. A timeline published by the Dag Hammarskjöld Library (Kurtas n.d.) shows how the initial actions to promote development on a global scale have been centered around notions of technical assistance, social progress, accelerated economic growth, industrial and infrastructural advancement. By the time the UN Development Programme reached the second decade (1971–1981), the conversation had moved from a focus on primarily economic and material solutions towards a more human-centric scale that accounts for the “physical, moral, intellectual, cultural growth of the human person” (UNGA 1958). Since then, the discussion about human rights and local cultural heritage, in association with income-based development, evolved into a broader attempt to define the heterogeneous aspects that influence the improvement of both social and individual well-being, in terms of accessibility to opportunities and choices. In the years between 1990 and 1999, technology emerged as an additional, problematic element in the reflection on the human condition, not only in what concerns limited access to tertiary education, but also in terms of digital divide and gender-based social disparity. As announced in the first Human Development Report on the UN Development Programme, “while North-South gaps have narrowed in basic human survival, they continue to widen in advanced knowledge and high technology” (UNDP 1990: 3). Despite these first efforts to understand the connection between development and technologies, which led to the formation of the UN Commission on Science and Technology for Development in 1992, the discourse on how these two fields would intersect took a few more years to find a grounding for long-term, sustainable plans. At the turn of the twenty-first century, the UN General Assembly approved the Millennium Development Goals, targeting a series of values and scopes to direct humanitarian actions and policies. However, in the UN Millennium Declaration, technological development was covered only as a marginal topic, and mainly in relation to the availability of information and communication resources, leaving aside the multidimensional ways in which technological advancements and states of inequality are correlated (UNGA 2000). While the notion and practice of global development have evolved over the years, it is only more recently that the concept of sustainable development was officially introduced, along with a closer attention to the role that technology plays both at a local and international level on economic, political, social, and cultural lives across all countries. With the aim of addressing sustainability in various sectors, in the resolution adopted by the General Assembly in 2015, a new “plan of action for people, planet and prosperity” was launched as the 2030 Agenda for Sustainable Development (UNGA 2015). Not only does this document outline a new path for sustainability in association with cultural development (UNESCO 2019), but it also gives space to a conversation about the ethical aspects
of designing and implementing new technologies for an equitable digital education aimed at respecting natural resources, women empowerment and the needs of vulnerable communities at large. The predominant narrative around technology adoption contributed to framing computational and internet-based tools within a capitalistic vision of human progress, mechanical automation and optimization. In opposition to this view and in line with the UN Sustainable Development Goals (SDGs), the field of media and technology studies started questioning the notion that advancements operate as a consequence of powerful, fast, efficient machineries. Contemporary academic discussions on the ethical and sociocultural implications of AI point at an alternative perspective on technological development based on slow, sustainable, and inclusive practices for countering biases in data collection and machine learning models. In a moment when the UN stresses 17 goals for sustainability that align with universal and integrated approaches (UN Department of Economic and Social Affairs n.d.) to collective and multigenerational action, academic, nonprofit, and industry-level communities working in AI Ethics are exploring different possibilities to include SDGs in their research (Di Vaio et al. 2020; Astobiza et al. 2021; Gill and Germann 2021). Scholarly publications that try to assess the positive or negative impacts of artificial intelligence on the environment and society return a complex scenario, where AI may enable the accomplishment of some SDGs' targets, while inhibiting others (Khamis et al. 2019), with major concerns for transparency, safety, ethical standards (Vinuesa et al. 2020), and risks of unsustainability of socio-technical systems (Sætra 2021). As outlined in the UN Resource Guide on Artificial Intelligence (AI) Strategies, “the logic of the social media business models and AI ranking systems has had a harmful impact on the news media, further weakening press freedom and the rights to freedom of expression and access to information. Similar consequences are seen in the field of culture, where advertainments are individually tailored to the point where they may impede opening the creative horizon” (Liu et al. 2021). To regulate the outcomes of algorithmic recommendation and contrast industry-based tendencies to capitalize choice via automated prediction (Cohn 2019), the UN addressed a list of technical standards for AI policy measures and international strategies. Yet, although effective in top-down governance, these UN development programs often lack practical directions to solicit change in algorithmic culture starting from a bottom-up approach to AI design and deployment. In search of further guidance on the path towards explainable, trustworthy, accountable machine learning systems, Floridi et al. (2020) highlight seven essential factors that can aid the implementation of artificial intelligence for social good (AI4SG) in theory and practice. In order to achieve useful results and implement successful policies, these factors should be “interpreted and evaluated contextually when one is designing, developing, and deploying a specific AI4SG project” (ivi: 1791). Considering the case of AI for cultural sustainability, in the following paragraphs, I will map out a few approaches to the design of ML models in the field of the arts and humanities, a step that will allow us to gain a better insight into algorithmic projects developed in specific museum contexts.
Inclusive design workflows, where AI both complies and contributes to social good, have been proposed under several umbrella terms – from the older notion of human-centered design (Cooley 1987) to value-sensitive (Friedman et al. 2008) and speculative design (Dunne and Raby 2013). These critical approaches to the design of technologies offer a fertile ground to imagine a new set of machine learning applications, one that acknowledges the weaknesses within existing algorithmic systems while also enhancing ethical practices and opening a route for social change. Value-sensitive and speculative design in particular align with some of the underlying principles of sustainable development - namely, facilitation of intercultural dialogue, respect for race, ethnic and cultural diversity, promotion of social justice, human rights and gender equality (UNGA 2015). If, on the one hand, value-sensitive design refers to “a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process” (Friedman et al. 2008: 2), on the other hand, “critical designs are testimonials to what could be, but at the same time, they offer alternatives that highlight weaknesses within existing normality” (Dunne and Raby 2013: 35). In this sense, speculative methodologies can be used to actively “debate potential ethical, cultural, social, political implications” (ivi: 47). By merging these two conceptual frameworks and considering best practices in existing AI tools, methods, and research (Morley et al. 2020), I will consider experimental algorithmic art applications in the research-to-design and design-to-creation of AI technologies. In doing so, I will observe how they act in synergy with the UN sustainable development agenda. Among the case studies considered, the following sections will present projects that deal with these questions: How can we move from AI models based on unilateral technological aid and virtual assistance to models based on techno-cultural, bilateral cooperation between humans and algorithms? How can we renegotiate the meaning of AI ethics and sustainable development in relation to the arts and cultural heritage? And, finally, how can we provide a definition for sustainable AI design practices for humanistic research and the creative sector? To answer these questions, particular attention will be given to the ways in which inventive algorithmic interventions play out in the museum space, at the intersection between cultural heritage preservation, techno-humanist (Frodeman 2017) AI explorations, and sustainable development.
3 Cultural Sustainability and the Museum Space “No development is sustainable without considering culture” (UNESCO 2018: 3). Culture shapes our identity, preserves our collective memory, orientates our knowledge and behaviors. Echoing this vision, the UN’s most recent agenda openly recognizes the importance of diversity in cultural heritage and its crucial role in enabling sustainable development across different civilizations. So how do the UN efforts to “make cities and human settlements inclusive, safe, resilient and sustainable” and “to protect and safeguard the world’s cultural and natural heritage” (UNESCO
2016: 18, 130) find a pragmatic outcome in more specific cultural contexts? And how can we move from a theoretical framework (Swanson and DeVereaux 2017) in policy-making to a more practical framework able to influence everyday human practices and social lives? Before considering case studies in computational arts and experimental museology, it is necessary to give a more detailed overview of what cultural sustainability means in relation to the museum sector. According to the World Commission on Culture and Development, cultural sustainability refers to the inter- and intra-generational access to cultural resources and heritage (WCCD 1995), that is to say “the entire corpus of material signs – either artistic or symbolic – handed on by the past to each culture and, therefore, to the whole of humankind” (UNESCO 1989: 57). In this sense, with their work of collecting, preserving, and displaying historical and contemporary objects, museums represent some of the main gatekeepers of cultural sustainability, always concerned with the passing on of cultural heritage. A few studies focusing on the European landscape have tried to find measures to identify and evaluate sources of cultural capital (Bourdieu 1986) in national areas as connected to institutional and historical landmarks. For instance, a study on the Swedish geo-cultural landscape uses available data as variables to verify indicators for cultural value and determine geographical patterns at a national level (Axelsson et al. 2013). This research argues for a collaborative learning process and adaptive governance that can assist stakeholders and decision-makers in acquiring a shared competence before targeting policies. Another study maps out the Cypriot museum network in order to isolate strengths and weaknesses in the achievement of cultural sustainability and propose a theoretical model for the definition of policies (Stylianou-Lambert et al. 2014). Stylianou-Lambert et al. notably insist on the fact that the idea of cultural heritage is artificially constructed to create a sense of place and identity at a national, local, and individual level (ivi: 2). “Museums are […] part of a cultural system which selectively renders certain aspects of a culture visible while obscuring others. Like any cultural system or economy, different stakeholders operate within various complex power structures. These stakeholders indicate what is deemed important to be preserved for future generations as the material and immaterial proof of a country’s heritage” (ibidem). In both papers, dating before the publication of the UN SDGs, the authors opt for research-based solutions to detect parameters that can be used as references for culturally sustainable policy-making. The urgency to define a common language and shared mode of communication with stakeholders (be they private donors and funders, public entities like the state, or local organizations) emerges as the basis of any viable policy-planning. This process of finding a dialogue between museums, sustainable practices, and governmental interests inevitably reveals overlapping macro- and micro-economic dynamics, as well as political interests, with a complex range of repercussions that vary depending on each context. Moreover, the cited studies emphasize that national identity and the human concern with creating a sense of place play a fundamental role in addressing cultural sustainability in the museum space.
In conclusion, the combination of these economic, political, and sociocultural factors profoundly affects our ability to decide on the presence or absence
of archival records in galleries and repositories. With this in mind, Stylianou-Lambert et al. advocate for a sustainable development strategy built upon four pillars: environment, society, economy, and culture. While culture has been recognized as one of the main pillars of sustainability even outside of UN frameworks (Hawkes 2001), in turn a sustainable development has proved to be essential not only for cultural management (Mickov and Doyle n.d.), but also for addressing endogenous institutional biases of many archives and exposing systems of power rooted in colonialist histories and ideologies. Practical applications of cultural sustainability in relation to museums and libraries have pointed at the necessity of introducing sustainable development strategies as fundamental for the survival of cultural institutions (Loach et al. 2017). In these perspectives, social, environmental, and economic sustainability are thought to be prerequisites for a well-functioning cultural organization. Considering sustainable development as an imperative condition for cultural development stimulates heritage institutions to rethink their systemic hierarchies, to renovate their traditions with attention to innovation and intercultural dialogue, and to create hybrid environments where knowledge dissemination, education, and participatory initiatives can sustain community building, collaborative learning, and reparatory actions. Furthermore, consistent results across several research groups (Pencarelli et al. 2016; Pop et al. 2019) tie cultural sustainability to social sustainability. In these studies, a socially responsible behavior in museums’ management is found to have positive trade-offs on fundraising, with consequent impact on institutional initiatives that strengthen UN SDGs – namely, enhancement of heritage preservation, community-based educational programs, and research-creation activities. Showing evidence of close correlation between social practices in GLAMs and their capacity to maintain sustainability helps us understand concepts like inter-generational equity or inter-temporal distributive justice (Throsby 2002; Taylor 2013) as key to the discussion on policies and access to culture. “We need cultural policies” (Mickov and Doyle n.d.), and even more, we need culturally and socially sustainable interventions. Since policy-making revolves around data collection and analysis, and sociocultural datasets are still sites of controversy and practices of appropriation, the cultural context is sometimes underplayed in favor of factors that can be more easily measured and quantified. Traditional methodological approaches are now being questioned among a variety of sectors that are highlighting the centrality of cultural awareness for the development of sustainable and equitable regulations (Napier et al. 2017). Through a series of articles and policy briefs, UNESCO is soliciting cutting-edge emergency responses for societal recovery, renewal, transformation, and short-term to mid-to-long-term plans in response to the disruptions of the ongoing pandemic. Redeeming creativity as vital for structural change as much as for the survival of cultural industries, UN publications are calling for social action to reverse the worsening inequality brought by the post-pandemic crisis. In Culture in Crisis: Policy Guide for a Resilient Creative Sector (UNESCO 2020), the UN commits to the protection of human creativity, openly addressing it as a form of resilience.
While acknowledging the challenges and risks of digital technologies, this
policy guide recognizes that, although problematic, they have been essential in maintaining social connections and cultural consumption, providing interesting opportunities to rebuild, share, and advance social interaction outside of physical spaces. “Beyond the cultural sector itself, culture has the power to advance other human development objectives such as education, health and well-being, while also stimulating the much-needed skills and values of adaptation, solidarity and empathy, all of which will be vital to build back better societies” (UNESCO 2021). When it comes to defining sustainability in close relation to culture (Zheng et al. 2021), notions like collective memory, historical trauma, technological disparity are needed in order to understand the complexity of sustainable development initiatives. Often missing at the level of governance, acts of care, rupture and repair turn out to be necessary for human, cultural, technological survival alike. Opening museums up to a range of creative and critical possibilities might help us grasp the variety of cultural dynamics articulated within museums, from those regulating human geographies and senses of belonging to those underlying human identities and archival acts of acquisition, selection, preservation, up until the historical movements involved in the definition of myths, symbols, and processes of erasure. Thinking about the museum as a space for survival, the following paragraph presents a counterapproach that is radically different from methodologies based on economic analysis to calculate cultural impacts. Starting from digital databases that document physical archives, I will discuss case studies that deploy algorithmic art and other exploratory approaches to explore alternative ways to “measure” cultural invisibility, marginal identities, and intercultural and inter-generational connections. I will observe the ways in which this methodology responds to sustainable development on multiple levels, by both contributing to reaching museums’ core sustainability missions and advocating for ethical AI practices.
4 An Algorithmic Art Framework for Techno-cultural Sustainability In this study, I select examples of best practices in research-creation (Chapman and Sawchuk 2012) designed to explore digitized collections and metadata repositories, with consideration for contested heritage and contentious art histories in Western museums across Europe, the UK and the US. While some of these practices were born as part of a digital humanities movement that uses creative coding and computational methods to undertake questions in humanities and social sciences (Maeda 2004), they are also insightful interdisciplinary accounts of how a constructive dialogue can be built in connection between several fields: ethics and sustainability studies, cultural and media studies, sociology, computer sciences, design, and experimental museology. Far from being merely provocative statements, these cooperative AI (Dafoe et al. 2021) practices feed into a culture of sustainability; they are interactive, exploratory public art projects conceived to find collective solutions to
simultaneously respond to the shortcomings of AI development in for-profit industries and the institutional limitations of the GLAM sector. The multifaceted relation between research and creation, including, among others, creation-as-research, has been theorized “as a form of cultural analysis” that “partakes of the spectacle of the work of art and its demonstration of alternative frameworks for understanding, communicating, and disseminating knowledge. This is also what defines research-creation as an epistemological intervention on the level of academic methodology. But each and every research-creation project also carries the possibility of acting as an intervention in its own right in terms of the specific fields of inquiry, practice, history, et cetera in which it is embedded” (Chapman and Sawchuk 2012: 23). The case studies presented here are therefore to be observed as epistemological interventions that aim at initiating a process of diversification in existing AI, curatorial, and exhibition practices. Bearing in mind both forms of brokenness and reconnection as ways to repair faults and gaps in equity and justice, I selected a corpus of research projects – academic or archive-based – that combine computational techniques with archival datasets. The goal is to explore how inter-generational equity and inter-temporal distributive justice can be applied to sociocultural scenarios, and more specifically to the museum ecosystem through algorithmic art applications. Since all projects in the corpus were produced between 2015 and 2022, and some of them are still ongoing, I will not concentrate on the empirical assessment of their impact, but rather on a study of their modus operandi, topics addressed, and their alignment with UN SDGs. In order to do so, I classified these projects into two groups: one focusing on the transformative reuse of existing collections and datasets through value-sensitive design and one focusing on the imaginative use of non-existing collections and absent datasets (Klein 2013) through speculative design. In the former, which has a longer tradition of computational interventions,1 I include the following: Recognition (2016), Forms of Attraction (2018), MosAIc (2020), This Recommendation System is Broken (2020–), and Museums Marginalia (2021–2022). In the latter, I present two groundbreaking deep learning-based generative projects: Igùn (2020–) and Deep Fakes: Art and Its Double (2021–2022). Awarded the IK Prize 2016 for digital innovation, Recognition is a project that uses machine learning to explore connections between art history and contemporary photojournalism images. It was showcased over the course of 3 months (from September to November 2016) at the Tate gallery in London (U.K.), creating a virtual exhibition of 7271 pairings between art objects from the museum’s collection and news photos by Reuters. This project uses image recognition to detect similar objects, faces, compositions, and contexts across visual records. Researchers described this initiative as an attempt to find creativity and meaning in errors (Miguel Carvalhais), coexistence of invisibility and colors (Natalie D Kane), connections between destruction and transformation (Anne Racine), and superpositions of design and
1 For a more comprehensive listing of AI initiatives in museums, please refer to the following resource: https://www.artsmetrics.com/en/list-of-artificial-intelligence-ai-initiatives-in-museums/
violence (Caroline Sinders).2 Matching computer vision with the human eye of artists and photo-reporters, Recognition provides a case study for AI implementations in museums that bridge separate fields (i.e. arts and journalism) and induce unexpected cross-historical, cross-cultural, and cross-geographical conversations. Other projects leverage computational techniques to investigate intra- or interinstitutional connections between artworks in archives. One example is the case of Emily Chu’s visualization of the Metropolitan Museum of Art Costume Institute’s database, Forms of Attraction (2018). Winner of the Kantar Information is Beautiful Awards 2018, this work uses machine learning and statistical models to retrace the history of clothing by applying K-means clustering algorithms on images to identify similar shapes. The result is a visual project that shows how the evolution of fashion alternates between craftsmanship and artistic expression. Relying on unsupervised learning, this visualization entails a reflection on dynamics of constructed inequality and classism in the arts, but it also points at possible failures of algorithms and training sets used in AI. On the one hand, the project proves the efficacy of machine learning models on the most common shapes of clothing, connecting images from a variety of timeframes and geographical areas. Such is the case of the dancing bell shape, which emerges as a recurrent pattern across historical periods and cultures. These recurring patterns can help us reconstruct and compare women’s roles across societies and civilizations, thus filling the void of untold and unknown women histories. However, this same approach to data visualization based on algorithmic clustering can be used to expose potential issues with existing unsupervised learning models - e.g. missing more unusual shapes, mis-grouping outliers and returning errors. Not only do these projects operate effectively in intra-institutional settings to redefine archival narratives and histories in museums like the Tate and Met, but they can also be instrumental in favoring inter-institutional collaboration and dialogue. This is the case of the more recent art project MosAIc (2020) developed at MIT CSAIL by Mark Hamilton, in partnership with Microsoft. The research group started from digitized collections at New York’s Met and Amsterdam’s Rijksmuseum. MosAIc leverages a supervised model (tree-based k-nearest neighbors algorithm or KNN) to create an application based on conditional image retrieval that “combines visual similarity search with user supplied filters or ‘conditions’” (Hamilton et al. 2021: 1). In a creative way, this work proposes a novel methodology for improving machine learning to better identify analogies between artworks from different collections, cultures, and media. In addition to suggesting inter-generational, inter-cultural, and inter-media connections, this prototype can be used to solve some of the limitations of state-of-the-art machine learning algorithms and advance more diverse AI applications in synergy between multiple archives. Much like MosAIc, other algorithmic art initiatives were designed to address the challenges in the way cultural archives in museums are traditionally organized, preserved, and exhibited, while also contributing to the creation of ethical algorithms. For instance, a project
2 http://recognition.tate.org.uk/#intro
launched in 2020 at MetaLAB (at) Harvard with the title of This Recommendation System is Broken,3 in collaboration with the Harvard Art Museums for the exhibition series Curatorial A(i)gents, was designed to problematize automated decision-making in application to the curation of cultural content. Built using a creative coding approach, the work was later adapted to other institutional environments to explore more closely how machine learning and algorithmic curatorial practices can help redefine art histories across institutions in the same urban region (Taurino 2021). Titled Museum Marginalia (2021–2022), the second project in this broken-algorithm series was developed as part of a collaboration between Fondazione ISI (tech partner), Associazione Arteco (cultural partner), and local Italian museums in the Torino area (museum partner). Open datasets made available by Fondazione Torino Musei (i.e., Galleria Civica d’Arte Moderna e Contemporanea, Museo d’Arte Orientale, Palazzo Madama) were used to create a content filtering system that selects randomly assorted groups of objects based on materials and techniques commonly associated with low arts, crafts and handwork. This algorithm shows random links between artistic traditions and crafting techniques, with a focus on objects for everyday use. In a second iteration, the datasets were used to train a machine learning model able to recognize connections between records sharing visual features and create a network of items paired by similarity. This project connects heterogeneous archives that come with a wide variety of digitized records, from modern and contemporary art to ancient Italian art and Eastern art, with attention to the inter-cultural and inter-generational histories hidden behind marginal objects in museums’ collections. Furthermore, it offers a solution for designing algorithms in a way that is ethical (e.g., gives visibility to under-researched artworks), educational (e.g., explores a history of women crafts), and collaborative (e.g., promotes interinstitutional communication in synergy between algorithmic and human curators). These are examples of how the transformative reuse of existing datasets can tackle some of the goals of the UN Sustainable Development Agenda while also proposing a change at the community level, by fostering a better understanding of archives both internally and externally, by presenting a diversified set of adaptive algorithms, and by favoring social knowledge, dialogue, and tolerance. Overall, each project contributes to framing a design and use of AI that favor cultural exchange. This stands in contrast with most common applications found in the tech industry, which often deploys algorithmic filtering systems that reinforce the creation of silos by selecting “what is always already preferred” (Taurino 2020). Other projects bring the exploratory approach to the fore by using AI to imagine missing data and potential histories (Azoulay 2019). Among the most interesting experiments in this domain, Minne Atairu’s Igùn uses generative adversarial networks (GANs) to inventively retell a never-existed history of Benin’s art production that was silenced by a 17-year British interregnum. As a training set for the machine learning algorithm, the artist used a dataset of stolen Benin Bronzes
3. https://thedigitalreview.com/issue01/taurino-machine/work/brokensystem/exhibit.html
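As a rough illustration of the similarity pairing described above for Museum Marginalia, the sketch below connects digitized records whose image embeddings are close. It is an assumed, simplified reconstruction rather than the project’s actual pipeline; the record identifiers and the similarity threshold are invented for the example.

```python
# Minimal sketch: pair museum records by visual similarity and build a network of items,
# using cosine similarity over precomputed image embeddings. Illustrative only.
import numpy as np
import networkx as nx
from sklearn.metrics.pairwise import cosine_similarity

def similarity_network(embeddings, record_ids, threshold=0.8):
    """Connect two records whenever their image embeddings are similar enough."""
    sims = cosine_similarity(embeddings)
    graph = nx.Graph()
    graph.add_nodes_from(record_ids)
    n = len(record_ids)
    for i in range(n):
        for j in range(i + 1, n):
            if sims[i, j] >= threshold:
                graph.add_edge(record_ids[i], record_ids[j], weight=float(sims[i, j]))
    return graph

# Hypothetical usage with invented identifiers standing in for open museum records.
rng = np.random.default_rng(1)
records = ["GAM_001", "MAO_014", "PM_203", "GAM_077", "MAO_002"]
net = similarity_network(rng.random((5, 512)), records, threshold=0.7)
```

The resulting graph is exactly the kind of object a curator could then browse to surface unexpected pairings across collections.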
As a training set for the machine learning algorithm, the artist used a dataset of stolen Benin Bronzes curated by Western art museums.4 Ever since its launch, Atairu’s project has evolved into a series of collaborations with museums and galleries that aim to raise awareness of the long-term traumatic impacts of colonialist interventions on local identities and cultural heritage. Imaginative projects revolving around the transformational reuse of datasets via generative AI models expose the need for decolonial approaches in the management of archival records as well as in algorithmic practices. In these works, techno-cultural diversity assumes a pivotal role in taking concrete measures to apply AI for social good and justice.
Within a similar exploratory intent, the EPFL Pavilion’s exhibition Deep Fakes: Art and Its Double (2021–2022), born from the collaboration between several academic and tech partners, uses advanced computational techniques to generate digital replicas of seminal artifacts from pan-Asian art and architecture. With this speculative design approach, it raises questions around crucial notions in museology and the arts, such as the value of materiality, authority, and authenticity of objects. At the same time, it explores the effects of AI technologies on cultural heritage, thus mobilizing several topics in a public conversation about deep fakes and their use in acts of misinformation.
Even outside of the museum space, algorithmic art has emerged as a form of activism against discriminatory machine learning models. In collaborative annotation projects like Algorithmic Art to Counter Gender Bias (2022), data-based art practices have been deployed to fill the gender gap and create more inclusive training sets for machine learning. This web-based project invites users to take part in a participatory labeling process that aims to redefine conventional and culturally constructed concepts, like femininity or womanhood. The resulting dataset of annotated words is then used to retrain machine learning models that produce algorithmically generated images from strings of text that include one of the following terms: woman, beauty, and imperfection. This research project acknowledges the social – and now also technological – issues of wording and understanding concepts that emerge around the broader notion of woman. The terminology adopted is meant to be used as a cue, and not as a suggestion or a forced path: cue as retrieval cue, as a prompt that helps activate a process for remembering, rewriting, and reimagining cultural memories, traumas, kinships, and disconnections around what woman, beauty, and imperfection mean.
Overall, inside or outside of museums’ spaces, these projects combine critical design with artistic interventions to foster a deeper reflection on which algorithmic practices we should adopt, how, and why. Algorithmic art has been implemented in world-renowned initiatives, organizations, and venues to engage with the wonders of computational technologies and digital transformations, as much as with the risks and harms of incomplete or non-representative data in machine learning and human-induced AI biases.
4. https://igun.minneatairu.com/about/
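For readers unfamiliar with the underlying technique, the following is a deliberately minimal generative adversarial network training loop of the general kind used in projects such as Igùn. It is not the artist’s actual architecture, dataset, or code; the layer sizes, latent dimension, and image resolution are illustrative assumptions.

```python
# A deliberately minimal GAN sketch (PyTorch): a generator learns to produce images that a
# discriminator cannot distinguish from real photographs. Illustrative only.
import torch
import torch.nn as nn

LATENT, IMG = 64, 32 * 32  # hypothetical latent size and flattened 32x32 grayscale images

G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One adversarial update: D learns to separate real from fake, G learns to fool D."""
    b = real_batch.size(0)
    fake = G(torch.randn(b, LATENT))

    opt_d.zero_grad()
    d_loss = loss(D(real_batch), torch.ones(b, 1)) + loss(D(fake.detach()), torch.zeros(b, 1))
    d_loss.backward()
    opt_d.step()

    opt_g.zero_grad()
    g_loss = loss(D(fake), torch.ones(b, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Hypothetical usage: real_batch would be flattened, normalized photographs of artworks.
# d, g = train_step(torch.rand(16, IMG) * 2 - 1)
```

After training on a curated corpus, sampling from the generator yields images that plausibly extend the visual vocabulary of the training set, which is what makes the technique attractive for speculative, counterfactual histories.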
While the case studies provided here do not offer empirical proof of their efficacy in promoting social and cultural sustainability, they are still examples of sustainable applications for techno-cultural development in institutional contexts where a solely top-down, policy-based approach fails. Each project suggests the possibility of adopting a hybrid methodology – that is, both human- and data-oriented – in order to identify the presence of cultural and digital barriers in institutions, as well as to open up museums for cross-disciplinary and cross-collection research. Overall, algorithmic art approaches prove that coding and computational methods can be used to reconsider the sustainability of cultural heritage preservation, questioning archival processes that are defined by conflicts, violence, historical appropriations, or else by the impossibility of securing a future. Similarly, they can also help reframe a discourse on ethical AI between museums, artists, and the tech sector, to collectively and publicly ask whose heritage (Hall 1999) cultural collections represent and whose revolutions the Internet, AI, and other technological inventions are (King 2003).
5 Conclusion

The chapter lists a series of research-creation approaches to techno-cultural sustainability as evidence of the emerging field of algorithmic art for the Sustainable Development Goals (Goddard 2022). Creative uses of algorithms appear helpful to reaffirm sustainable development in GLAM institutions in a collaborative, educational, and inclusive way, and in parallel with top-down economic, juridical, and political strategies. While the empirical studies cited in the first part of the paper are fundamental in defining a controlled and measurable setting for policies, as Edgar Morin (2009) argues, there is something deeper to human development that is yet to be found in economic and political programs. “The calculation of all aspects of human life obscures what cannot be measured: […] things that are important in our lives but seem extra-social, purely personal. All the solutions considered are quantitative ones: economic growth, GDP growth” (Morin 2009, my translation). In contrast, “a policy integrating ecology in the whole of the human problem would face the issues posed by the negative effects […] of the developments of our civilization, including the degradation of solidarities, which would make us understand that the establishment of new solidarities is a capital aspect of a policy of civilization” (ibidem).
With respect to solidarity, diversity, and inclusion, the paper argues that both disruptive (e.g., projects that challenge normative assumptions in museum contexts) and reparative (e.g., projects that fill the gaps left by archival losses in marginalized art histories) approaches in algorithmic art can lead to higher community engagement with cultural and technical ecosystems than more traditional approaches in AI governance. Accounting for the unmeasurable, value-sensitive and speculative design projects advocate for a techno-humanist attitude towards policy-making – that is to say, “a new kind of critical approach that focuses humanistic aims through technology. If the humanities have a future in the current scene shaped increasingly by a powerful techno-scientific fusion of information technology, biotechnology, and nanotech, we will have to reorient our compass and rethink our
methods” (Lenoir in Riskin 2007: 209). As Donna Haraway and others (Haraway 2016; Hayles 2017; Braidotti 2019) have claimed, we need to seek and embrace a critical view that can lead to an ecological collaboration among nature, humans, and intelligent machines. A human-algorithmic version of Haraway’s concept of kinship (2016), or else of Braidotti’s ethical bond (2019), can be built between human and artificial intelligence through art, against the risks posed by unitarian subjectivities, isolationist policies, and the techno-individualist impositions of AI-based ranking systems. In an ever-changing ethics of becoming (Braidotti 2006) and distributed cognition (Hayles 2017), projects based on speculative experiments, imagination, and invention contribute to constructing “micro-political modes of daily activism” (ibidem), as well as a “dialectical vision of a creative, dynamic, humanistic technology” (Rothschild 1981). Artworks like Recognition, Forms of Attraction, MosAIc, This Recommendation System is Broken, Museum Marginalia, Igùn, and Deep Fakes: Art and Its Double benefit sustainable development by projecting “alternative technologies and alternative modes of social and econo-political organization” (ibidem) within a feminist, decolonial perspective on culture, technology, and the future of sustainability.
At the same time, it is important to clarify that the projects included in this paper are mainly based at US and European museums. For this reason, the corpus is to be interpreted as indicative of a small subset of a broader and yet-to-be-explored research-creation movement that is tied to access to funding, digitization, and computational resources. It is important to acknowledge the limitations of a methodology that can only thrive in rich institutional environments. Nevertheless, the hope is that this same movement will foster algorithmic art projects for cultural sustainability outside of North America and Europe, through the establishment of funded programs for both the preservation of archival records and the subvention of art-and-research residencies to critically and inventively maintain and reuse museums’ collections. While experiments in AI and the arts have already been scaled up thanks to initiatives like Google Arts & Culture and projects like X Degrees of Separation,5 there is still a gap in financing computational and algorithmic art projects outside of certain geographic and institutional areas.
In relation to algorithmic art as a means to achieve inter-generational and inter-temporal distributive justice in sociocultural environments, it is also important to ask to what extent past and present, or present and future, can be traded off to make up for widespread social imbalance and finally determine a state of equity, fairness, and justice. In other words, “if distributive justice is defined in terms of opportunity, or in terms of outcome, what inter-temporal opportunities, or outcomes, are just?” (Areskoug 1976: 1). Since the first draft of the UN Recommendation on the ethics of artificial intelligence was released in 2020, followed by a Resource Guide on AI Strategies in 2021, the UN has shown an increasing commitment to defining fair regulations and ethical principles that ensure the implementation of safe and beneficial AI applications for society and counter potential harms.
5. https://artsexperiments.withgoogle.com/xdegrees/8gHu5Z5RF4BsNg/BgHD_Fxb-V_K3A
If a methodology based on algorithmic art cannot yet tell us enough about the how-tos of fair AI practice in a variety of contexts, it still represents a viable alternative to a system in crisis, be it cultural or technological, economic or political, in the same way that “sustainability represents the search for a way out of ‘unsustainability’” (Kagan 2011: 23).
References Areskoug K. 1976. The Intertemporal Dimension of Distributive Justice. Reason Papers No. 3, 1–12. Astobiza, Aníbal Monasterio, Mario Toboso, Manuel Aparicio, and Daniel López. 2021. AI Ethics for Sustainable Development Goals. IEEE Technology and Society Magazine 40 (2): 66–71. https://doi.org/10.1109/MTS.2021.3056294. Axelsson, Robert, Per Angelstam, Erik Degerman, Sara Teitelbaum, Kjell Andersson, Marine Elbakidze, and Marcus K. Drotz. 2013. Social and Cultural Sustainability: Criteria, Indicators, Verifier Variables for Measurement and Maps for Visualization to Support Planning. Ambio 42 (2): 215–228. https://doi.org/10.1007/s13280-012-0376-0. Azoulay, Ariella Aïsha. 2019. Potential History: Unlearning Imperialism. London: Verso Books. Benjamin, Ruha. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. New York: Wiley. Braidotti, R. 2006. Posthuman, All Too Human: Towards a New Process Ontology. Theory, Culture & Society, 23(7–8), 197–208. https://doi.org/10.1177/0263276406069232. Braidotti, Rosi. 2019. A Theoretical Framework for the Critical Posthumanities. Theory, Culture & Society 36 (6): 31–61. https://doi.org/10.1177/0263276418771486. Bourdieu, P. 1986. The forms of capital. In J. Richardson (Ed.) Handbook of Theory and Research for the Sociology of Education. New York: Greenwood 241–258. Buolamwini, J., Gebru, T. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81:1–15, Conference on Fairness, Accountability, and Transparency. Chapman, Owen B., and Kim Sawchuk. 2012. Research-Creation: Intervention, Analysis and ‘Family Resemblances’. Canadian Journal of Communication 37 (1). https://doi.org/10.22230/ cjc.2012v37n1a2489. Cohn, Jonathan. 2019. The Burden of Choice: Recommendations, Subversion, and Algorithmic Culture. New Brunswick: Rutgers University Press. https://www.degruyter.com/document/ doi/10.36019/9780813597850/html. Cooley, Mike. 1987. Human Centred Systems: An Urgent Problem for Systems Designers. AI and Society 1 (1): 37–46. https://doi.org/10.1007/BF01905888. Costanza-Chock, Sasha. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. Information Policy. Cambridge: The MIT Press. Crawford, Kate. 2021. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press. D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. Cambridge: MIT Press. Dafoe, Allan, Yoram Bachrach, Gillian Hadfield, Eric Horvitz, Kate Larson, and Thore Graepel. 2021. Cooperative AI: Machines Must Learn to Find Common Ground. Nature 593 (7857): 33–36. https://doi.org/10.1038/d41586-021-01170-0. Dunne, Anthony, and Fiona Raby. 2013. Speculative Everything: Design, Fiction, and Social Dreaming. Cambridge: MIT Press. Floridi, Luciano, Josh Cowls, Thomas C. King, and Mariarosaria Taddeo. 2020. How to Design AI for Social Good: Seven Essential Factors. Science and Engineering Ethics 26 (3): 1771–1796. https://doi.org/10.1007/s11948-020-00213-5.
Friedman, Batya, Peter H. Kahn Jr., and Alan Borning. 2008. Value Sensitive Design and Information Systems. In The Handbook of Information and Computer Ethics, 69–101. Hoboken: Wiley. https://doi.org/10.1002/9780470281819.ch4. Frodeman, Robert, ed. 2017. The Oxford Handbook of Interdisciplinarity. Vol. 1. Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780198733522.001.0001. Gill, Amandeep S., and Stefan Germann. 2021. Conceptual and Normative Approaches to AI Governance for a Global Digital Ecosystem Supportive of the UN Sustainable Development Goals (SDGs). AI and Ethics, May. https://doi.org/10.1007/s43681-021-00058-z. Goddard, Valentine. 2022. Art Shaped AI: Value Creation in the Digital Era. https://valentinegoddard.medium.com/art-shaped-ai-value-creation-in-the-digital-era-94694f1cea8b. Goddard, V., D. Harris, J. Reyes, G. Taurino, S. Ratté, and M. Marta Kersten-Oertel. 2022. Algorithmic Art to Counter Gender Bias in Artificial Intelligence: Changing AI’s Mis-Pear- Ceptions of Us. Transformations Journal, submitted for publication. Hall, Stuart. 1999. Un-Settling ‘The Heritage’, Re-Imagining the Post-Nation Whose Heritage? Third Text 13 (49): 3–13. https://doi.org/10.1080/09528829908576818. Hamilton, Mark, Stephanie Fu, Mindren Lu, Johnny Bui, Darius Bopp, Zhenbang Chen, Felix Tran, et al. 2021. MosAIc: Finding Artistic Connections across Culture with Conditional Image Retrieval. ArXiv:2007.07177 [Cs, Stat], February. http://arxiv.org/abs/2007.07177. Haraway, Donna J. 2016. Staying with the Trouble: Making Kin in the Chthulucene, August. https://doi.org/10.1215/9780822373780. Hawkes, Jon. 2001. The Fourth Pillar of Sustainability: Culture’s Essential Role in Public Planning. Champaign: Common Ground. Hayles, N. Katherine. 2017. Unthought: The Power of the Cognitive Nonconscious. Chicago: University of Chicago Press. Jo, Eun Seo, and Timnit Gebru. 2020. Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January, 306–316. https://doi.org/10.1145/3351095.3372829. Kagan, Sacha. 2011. Art and Sustainability: Connecting Patterns for a Culture of Complexity. Image, v. 25. Bielefeld: New Brunswick: Transcript; [Distributed by] Transaction Publishers. Kashmere, Brett. 2010. Introduction: Cache Rules Everything Around Me. Counter-Archive. INCITE Journal of Experimental Media, no. 2. http://www.incite-online.net/intro2.html. Khamis, Alaa, Howard Li, Edson Prestes, and Tamas Haidegger. 2019. AI: A Key Enabler of Sustainable Development Goals, Part 1 [Industry Activities]. IEEE Robotics Automation Magazine 26 (3): 95–102. https://doi.org/10.1109/MRA.2019.2928738. Klein L. F. 2013. “The Image of Absence: Archival Silence, Data Visualization, and James Hemings.” American Literature 1. 85 (4): 661–688. https://doi.org/10.1215/00029831-2367310. Kurtas, Susan. n.d. Research Guides: UN Documentation: Development: Introduction. Accessed 22 Nov 2021. https://research.un.org/en/docs/dev/intro. Leavy, Susan, Eugenia Siapera, and Barry O’Sullivan. 2021. Ethical Data Curation for AI: An Approach Based on Feminist Epistemology and Critical Theories of Race. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 695–703. Virtual Event USA: ACM. https://doi.org/10.1145/3461702.3462598. Liu et al. 2021. https://sdgs.un.org/documents/resource-guide-artificial-intelligence-ai-strategies-25128. Loach, Kirsten, Jennifer Rowley, and Jillian Griffiths. 2017. 
Cultural Sustainability as a Strategy for the Survival of Museums and Libraries. International Journal of Cultural Policy 23 (2): 186–198. https://doi.org/10.1080/10286632.2016.1184657. Maeda, John. 2004. Creative Code. New York: Thames & Hudson. Maheri, Alireza, Shahin Jalili, Yousef Hosseinzadeh, Reza Khani, and Mirreza Miryahyavi. 2021. A Comprehensive Survey on Cultural Algorithms. Swarm and Evolutionary Computation 62 (April): 100846. https://doi.org/10.1016/j.swevo.2021.100846. Merewether, C. 2006. The Archive. Cambridge: MIT Press.
Mickov, Biljana, and James Doyle (Eds.). 2013. Sustaining Cultural Development: Unified Systems and New Governance in Cultural Life. London: Routledge. Morin, E. 2009. Changer le rapport de l’homme à la nature n’est qu’un début. Le Monde. Morley, Jessica, Luciano Floridi, Libby Kinsey, and Anat Elhalal. 2020. From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics 26 (4): 2141–2168. https://doi. org/10.1007/s11948-019-00165-5. Napier, D., M.H. Depledge, M. Knipper, R. Lovell, E. Ponarin, E. Sanabria, and F. Thomas. 2017. Culture Matters: Using a Cultural Contexts of Health Approach to Enhance Policy-Making. Report. World Health Organization Regional Office for Europe. https://ore.exeter.ac.uk/ repository/handle/10871/31607. Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press. O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown. Pencarelli, Tonino, Mara Cerquetti, and Simone Splendiani. 2016. The Sustainable Management of Museums: An Italian Perspective. Tourism and Hospitality Management 22 (1): 29–46. Pop, Izabela Luiza, Anca Borza, Anuța Buiga, Diana Ighian, and Rita Toader. 2019. Achieving Cultural Sustainability in Museums: A Step Toward Sustainable Development. Sustainability 11 (4): 970. https://doi.org/10.3390/su11040970. Reynolds, Robert. 1994. An Introduction to Cultural Algorithms. Reynolds, Robert G. 2020. Cultural Algorithms: Tools to Model Complex Dynamic Social Systems, IEEE Press Series on Computational Intelligence. Hoboken: Wiley. Reynolds, Robert G., Yousof A. Gawasmeh, and Areej Salaymeh. 2015. The Impact of Subcultures in Cultural Algorithm Problem Solving. In 2015 IEEE Symposium Series on Computational Intelligence, 1876–1884. Cape Town: IEEE. https://doi.org/10.1109/SSCI.2015.261. Riskin, Jessica. 2007. Genesis Redux: Essays in the History and Philosophy of Artificial Life. Chicago: University of Chicago Press. https://doi.org/10.7208/chicago/9780226720838. 001.0001. Rothschild, Joan A. 1981. A Feminist Perspective on Technology and the Future. Women’s Studies International Quarterly, Women in Futures Research 4 (1): 65–74. https://doi.org/10.1016/ S0148-0685(81)96373-9. Sætra, Henrik Skaug. 2021. AI in Context and the Sustainable Development Goals: Factoring in the Unsustainability of the Sociotechnical System. Sustainability 13 (4): 1738. https://doi. org/10.3390/su13041738. Seaver, N. 2017. Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717738104. Stylianou-Lambert, Theopisti, Nikolaos Boukas, and Marina Christodoulou-Yerali. 2014. Museums and Cultural Sustainability: Stakeholders, Forces, and Cultural Policies. International Journal of Cultural Policy 20 (5): 566–587. https://doi.org/10.1080/10286632.2013.874420. Swanson, Kristen K., and Constance DeVereaux. 2017. A Theoretical Framework for Sustaining Culture: Culturally Sustainable Entrepreneurship. Annals of Tourism Research 62 (January): 78–88. https://doi.org/10.1016/j.annals.2016.12.003. Taurino, G. 2022. “The Brokenness in Our Recommendation Systems: Computational Art for an Ethical Use of A.I.” In: Dingli, A., Pfeiffer, A., Serada, A., Bugeja, M., Bezzina, S. (eds) Disruptive Technologies in Media, Arts and Design. ICISN 2021. Lecture Notes in Networks and Systems, vol 382. 
Springer, Cham. https://doi.org/10.1007/978-3-030-93780-5_11 Taylor, Joel. 2013. Intergenerational Justice: A Useful Perspective for Heritage Conservation. CeROArt. Conservation, Exposition, Restauration d’Objets d’Art, no. HS (September). https:// doi.org/10.4000/ceroart.3510.
Throsby, David. 2002. Cultural Capital and Sustainability Concepts in the Economics of Cultural Heritage: Economics of Cultural Heritage. In Assessing the Values of Cultural Heritage, ed. Marta de la Torre, 101–117. Los Angeles: Getty Conservation Institute. UNESCO. 2016. Culture: Urban Future: Global Report on Culture for Sustainable Urban Development. Paris: UNESCO Publishing. ———. 2018. Culture for the 2030 Agenda. Paris: UNESCO Publishing. ———. 2019. Culture | 2030 Indicators. Paris: UNESCO Publishing. Vaio, Di, Rosa Palladino Assunta, Rohail Hassan, and Octavio Escobar. 2020. Artificial Intelligence and Business Models in the Sustainable Development Goals Perspective: A Systematic Literature Review. Journal of Business Research 121 (December): 283–314. https://doi. org/10.1016/j.jbusres.2020.08.019. Vinuesa, Ricardo, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature Communications 11 (1): 233. https://doi.org/10.1038/s41467-019-14108-y. Zheng, Xinzhu, Ranran Wang, Arjen Y. Hoekstra, Maarten S. Krol, Yaxin Zhang, Kaidi Guo, Mukul Sanwal, et al. 2021. Consideration of Culture Is Vital If We Are to Achieve the Sustainable Development Goals. One Earth 4 (2): 307–319. https://doi.org/10.1016/j.oneear.2021.01.012.
The Impact of Artificial Intelligence on Circular Value Creation for Sustainable Development Goals
Malahat Ghoreishi, Luke Treves, Roman Teplov, and Mikko Pynnönen
Abstract Circular economy (CE) business models provide solutions for the Sustainable Development Goals by closing, slowing, intensifying, de-materializing, and narrowing resource loops. Addressing CE requires a strategic redefinition of the way companies create and capture value, which leads to the design of a new value creation system and innovative business models. For full circular value creation, circular products should be designed and developed specifically for repair, refurbishing, and remanufacturing purposes in order to close the loop. Hence, companies need to radically change their business models and the way they create value towards more innovative solutions based on CE strategies. However, circular business models have not yet been widely implemented in business practice, and their implementation requires fundamental changes within the value chain. Successful circular value creation requires a higher degree of transparency and high-quality data across the entire value chain for the further development of products and processes, thereby enabling design optimization and supply chain management. Recent debates show that artificial intelligence (AI) can be considered an enabler of CE that helps companies innovate circular business models. Different applications of AI, such as machine learning, automation and robotics, and machine vision, have the capability of collecting, analyzing, and storing digital data. AI-enhanced products and services can tackle environmental problems through independent interactions with their surroundings and self-learning capabilities, which results in improved environmental performance characteristics. In this chapter, we identify the role of AI in circular value creation for the Sustainable Development Goals.
M. Ghoreishi (*)
LUT School of Business and Management, LUT University, Lappeenranta, Finland
Faculty of Technology, LAB University of Applied Sciences, Lappeenranta, Finland
e-mail: [email protected]
L. Treves · R. Teplov · M. Pynnönen
LUT School of Business and Management, LUT University, Lappeenranta, Finland
e-mail: [email protected]; [email protected]; [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_19
Keywords Artificial intelligence · Sustainable Development Goals · Business model · Circular economy · Industry 4.0 · Value creation
1 Introduction

Sustainability has been discussed widely since the topic was identified by Brundtland in 1987 (Commission on Environment and Development 1987). To promote the sustainable development approach, the United Nations established 17 Sustainable Development Goals (SDGs) in 2015 as a universal action to stop poverty, protect the planet, and increase peace and prosperity (United Nations 2015). However, despite the significant efforts made by different nations, the successful achievement of these goals is often hampered by the economic challenges and profitability issues faced by businesses. The concept of CE has gained the attention of researchers, policymakers, and various organizations worldwide as a way of promoting the SDGs while enhancing economic development (EM Foundation 2015; Geissdoerfer et al. 2017). The CE model is an alternative to the traditional linear system, which is responsible for current environmental problems, resource depletion, and climate change. CE practices contribute directly to achieving Goal 12 of the SDGs (ensure sustainable consumption and production patterns). Integrating CE principles for SDG 12 requires early-stage consideration in product design and development processes. CE-oriented business models are built on the principle of keeping products and materials in the economy for as long as possible with the highest value retention. Since the main aim of CE models is to eliminate the use of finite material and energy resources across the entire life cycle of products and materials, CE models are seen as a potential driver of SDG 12. In CE, value creation is decoupled from the consumption of limited resources by leveraging sets of regenerative, restorative, and efficient productivity-oriented strategies which keep products, components, and materials in use for a longer time at the highest possible value (Ellen MacArthur Foundation 2013; Lieder and Rashid 2016). According to Bressanelli et al. (2018), implementing CE strategies would create a net benefit of €1.8 trillion by 2030 in Europe alone, while creating new jobs, stimulating innovation, and delivering considerable environmental benefits. CE is at the core of the UN SDGs (Schroeder et al. 2019) and can be advanced by reducing structural waste, which decreases the demand for limited virgin materials, and by redirecting the consumption of natural resources that still have a useful life but would otherwise be sent to landfill (Hysa et al. 2020).
Despite all the benefits CE can bring to the environment, society, and government (ESG), the adoption of CE strategies and how organizations create, deliver, and capture value in CE are still uncertain, and only minor improvements have been recognized in decoupling from linear resource consumption (Whicher et al. 2018). A recent report by Sitra (2021) highlights the core role of data in the transition towards CE. In a CE, data on resource flows, location tracking, monitoring of condition and quality, real-time data gathering, processing of input-output flows, precise prediction, lower production downtime, and optimization of energy consumption
are essential (Hughes et al. 2021). CE requires a strong integration and connection of the value chain, which places the data economy at the center of the development of CE solutions. Since Industry 4.0 technologies are capable of collecting, storing, analyzing, and processing large amounts of data, they can position such data and information flows to enable resource and energy efficiency towards a more sustainable CE (Ellen MacArthur Foundation 2019; Kristoffersen et al. 2020; Ramadoss et al. 2018; Lacy et al. 2020). Industry 4.0 technologies such as the Internet of Things (IoT), artificial intelligence (AI), and big data are considered digital technologies that enable CE. According to Yoo et al. (2010), three essential characteristics distinguish digital technologies from other technologies: (1) they are programmable; (2) they create, share, and capture data as a homogeneous source; and (3) they reinforce each other by self-referencing. Such technologies play critical roles in CE by creating precise data for improving resource management and efficient decision-making, as well as by tracking the flow of products, components, and materials throughout all the stages of the industrial life cycle (Antikainen et al. 2018; Nascimento et al. 2019; Lacy et al. 2020; Bressanelli et al. 2018). However, there is still a lack of concrete guidance on how to leverage Industry 4.0 technologies to support CE strategies, which offer novel opportunities for business leaders (Kristoffersen et al. 2020). The challenge of moving towards a full CE is substantial, and the world is only at the early stages; Industry 4.0 technologies can maximize the transformation of business models, products, and services towards more durable and sustainable outcomes, and help organizations overcome challenges while remaining competitive in the sustainability aspects of their business. Hence, the primary objective of this work is to investigate how CE and digitalization affect business model innovation. The remainder of this chapter is organized as follows: the next sections respectively investigate the key role of AI in CE and how AI can be utilized to leverage circular value creation, with supporting business cases.
2 AI in CE

Industry 4.0 technologies and their supporting systems provide integrated tools that can help tackle these issues by offering improved “any-time,” “any-where,” “any-thing” tracking and insights into business processes (Lee 2018). This has the potential to transform how businesses implement their sustainability strategies. Consequently, companies are increasingly developing new business models that focus on the reuse, repair, and remanufacturing of their products and services (Melander and Pazirandeh 2019). The combination of digital transformation, the circular economy, and business model innovation presents a huge opportunity for businesses to create and capture new value. Digital technologies can play a role in three different CE business model innovations: (1) as tools to support, identify, and implement business models related to strategies, patterns, and components (Lewandowski
2016; Bocken et al. 2016), (2) to support the implementation of managerial practices for CE transitions in companies (Centobelli et al. 2020; Ünal et al. 2019), and (3) to offer service-based business models in which a product is replaced by a service supported by machine intelligence (Alcayaga et al. 2019; Tukker 2015). According to Berg et al. (2020), digital technologies can enable CE as follows:
• Digital technologies that enable more efficient and circular manufacturing processes for materials and products, such as intelligent design, sensor technologies, machine learning, robotics, etc.
• Digital technologies that enable tracking and tracing of products and components, optimization of the value chain, product and service development, and increased reuse, repair, and refurbishment, such as IoT, blockchain, etc.
• Digital technologies that connect consumers and producers and enable service development and dematerialization, such as AI-powered platforms
AI, as one of the Industry 4.0 technologies, describes self-learning and self-correcting computational processes that mimic human-like reasoning and problem-solving (Kok et al. 2009). AI techniques respond to their environment through cognitive and intelligent capabilities (Townsend and Hunt 2019). The main benefit of AI techniques lies in their capacity to collect, process, and analyze large quantities of data, in short to real time, from various sources (Mühlroth and Grottke 2020). Apart from handling superior quantities of data, AI techniques detect and unveil patterns that were not visible before and suggest relations humans are not aware of. Furthermore, AI techniques automatically deduce consequences from their analysis and match data input to a connected task (Balasubramanian et al. 2020). These capacities make use cases in the context of CE evident. AI techniques can support the circularity of the whole value chain through consumption and demand prediction, smart product design, and enabling or enhancing remanufacturing processes by remote monitoring (Ghoreishi and Happonen 2020b). AI techniques can enable CE opportunities by boosting circular product design and development, optimizing infrastructure to ensure the flows of such products, and operating circular business models (Ellen MacArthur Foundation 2019). According to Ghoreishi and Happonen (2020a), AI can enhance the value of recycled and recovered materials through smart waste sorting. AI-based platforms can enable product and material sharing, which extends a product’s life cycle (Waheed and Khalid 2019). The different roles of AI techniques in the cycles of CE and in the customer support phase are illustrated in Fig. 1.
AI can operate circular business models by introducing new business propositions such as asset sharing, product as a service, the potential to cut inventory levels, and AI-based platforms. Dynamic pricing and matching algorithms can enable sharing and access business models, whereas reverse logistics and remanufacturing require a powerful AI-based analytical model to collect customer and product data and translate it into a feasible decision-making model (Ellen MacArthur Foundation 2019).
Fig. 1 Role of AI in CE. (Based on Ghoreishi and Happonen 2020a)
An example of an AI-based platform circular business model is the Israeli startup Algoretail (2021), which uses machine learning to automate grocery retail stocking procedures from the supplier to the store shelves. The automated AI-powered replenishment tool, Algoretail IO, offers data-driven sales forecasting that helps reduce the waste of fresh food items. In addition, Algoretail IO provides granular reporting with graphic representations of insights, along with customizable alerts regarding products that are about to expire. By utilizing AI in this way, the startup helps grocers achieve a 35% reduction in waste as well as a 15% increase in net profit. An example of a product-as-a-service business model is the American startup Smarter Sorting (2021), which develops a cloud-based software-as-a-service (SaaS) waste management platform for retailers. AI is utilized to provide real-time data and up-to-date information on inventories, such as the attributes of a product and its packaging. This allows retailers to better understand which products are eligible for recycling, which are suitable for donation, and so on. These insights cut waste volumes and reduce disposal costs. In addition, store- and item-level analytics provide full transparency into the data and trends driving retail operations, ensuring compliance and operational efficiency and increasing business success.
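To make the replenishment idea concrete, here is a minimal sketch of demand forecasting for perishable stock in the spirit of such AI-powered tools. It is not Algoretail’s actual algorithm; the moving-average forecast, the shelf-life logic, and the numbers are assumptions made purely for illustration.

```python
# Minimal sketch: forecast next-day demand for a fresh item from recent sales and
# order only what is expected to sell before expiry, reducing food waste. Illustrative only.
import numpy as np

def forecast_demand(daily_sales, window=7):
    """Naive forecast: average sales over the most recent `window` days."""
    recent = np.asarray(daily_sales[-window:], dtype=float)
    return recent.mean()

def order_quantity(daily_sales, on_hand, shelf_life_days):
    """Order enough stock to cover expected demand over the item's shelf life, minus stock on hand."""
    expected = forecast_demand(daily_sales) * shelf_life_days
    return max(0, round(expected - on_hand))

# Hypothetical usage for a fresh item with 2 days of shelf life and 14 units in stock.
qty = order_quantity([30, 28, 35, 31, 29, 33, 36], on_hand=14, shelf_life_days=2)
```

Production systems would replace the moving average with learned models and add expiry alerts, but the basic trade-off (stock-outs versus waste) is the same one this sketch encodes.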
3 Circular Value Creation

Since circularity aims to maintain the functionality of materials and components at their maximum level for as long as possible throughout their entire life cycle, material stocks and flows must be managed in a sustainable way. To incorporate sustainability and circularity into products, product designers require the right tools, including measurement frameworks and tools that integrate metrics and indicators, which are essential to maximize value creation from products and materials (Van den Berg and Bakker 2015). Most quantitative sustainability tools currently utilized by businesses are based on life cycle assessment (LCA), in which the environmental impacts of components and materials are assessed along parts of their life cycle (Ramani et al. 2010). Although life cycle assessment, life cycle inventory, and the current methods of measuring product sustainability are useful, they are more suitable for post-design evaluations of completed products (Hapuwatte and Jawahir 2019). A model-based methodology is required in the design phase to predict how sustainability decisions will affect the product during its whole life cycle (Hapuwatte et al. 2017; Hapuwatte and Jawahir 2019). When designing and developing a new product, once the specifications have been set, only minor changes can be made, since resources, infrastructures, and activities have already been allocated to a certain design (Bocken et al. 2014a). Van den Berg and Bakker (2015) distinguish the key features of circular products as “future proof, disassembly, maintenance, remake and recycling.” To achieve circular value creation, organizations need to design strategies that support value in circular business models, such as (1) utilizing material resources and energy efficiently (narrowing the loop); (2) producing products that are natural, reliable, and durable, with a focus on life extension through standardization and compatibility, upgradability and adaptability, and the dis- and reassembly of a product’s individual physical components (slowing the loop); (3) reusing products, components, and materials through dis- and reassembly principles, design for recycling and remanufacturing, and design for the environment, with a focus on technological and biological cycles (closing the loop); and (4) using non-toxic materials and renewable energies (regenerating the loop) (Bocken et al. 2014a).
Value creation in CE occurs through the recovery of returned products within the supply chain, closing the loop (Schenkel et al. 2015). Therefore, it is essential to develop products in such a way that materials can remain in the loop and be continuously and safely recycled into raw materials for manufacturing new products (Bocken et al. 2016). In this way, by adding value from the forward and reverse supply chain, CE business models can leverage the process of circular value creation for customers, the environment, the economy, and information value. For this reason, the support of all partners within the supply chain in developing awareness and new skills is essential in rendering business models for circular value creation (Ünal et al. 2019).
For a continuous flow of resources in CE, firms need to radically change their business model for innovative product design. Based on assertions and design
principles for sustainability, circular design should focus on circular supplies, resource conservation, multiple cycles, long-life use, and system change. This requires firms to innovate their business model strategies to create and capture new value and to develop competitive advantages from CE and circular product design (Linder and Williander 2017; Urbinati et al. 2017; Lin 2018). For instance, “slowing the loops” strategies can allow firms to encourage their customers to make efficient choices by reducing their consumption habits while extending and exploiting their product’s residual value; examples include the leasing of products, car sharing, and clothes return initiatives. “Closing the loops” strategies, by contrast, seek to extend resource value by exploiting residual value and through industrial symbiosis, which uses the residual outputs from one process; examples include the collection, supply, and recycling of products (Bocken et al. 2014b). According to Mishra et al. (2018), opportunities for circular value creation can be analyzed based on four broad archetypes, as follows:
• “Inner value creation loops, which concerns maintaining of the integrity of products as highest level through service and maintenance.
• Extending value creation loops, which concerns the use of products and materials for longer time.
• Cascading value creation loops, which concerns cascading use in adjacent value chains (where the costs of reused products and materials are lower or have superior value compared to virgin or non-renewable materials).
• Pure value creation loops, which concerns the creation of pure, high-quality feedstock at the outset (avoiding contamination and toxicity to allow for reuse and cost avoidance of clean up or purification).”
Implementing these archetypes in specific business models can take many forms, such as performance- and servitization-based models, product-service systems, and collaborative consumption (Ghisellini et al. 2016). Since CE-oriented business models add uncertainty and complexity to conventional business models, accounting for the addition of reverse logistics, for the time, quantity, and quality at which resources are returned, as well as for customer perceptions would be useful in developing a circular business model (CBM) (Bocken et al. 2018). According to Urbinati et al. (2017), circular principles can be integrated into business models in three ways: downstream circular (new schemes and customer interfaces make alternatives for value), upstream circular (changing the systems for value creation), or fully circular (a combination of downstream and upstream principles). In the context of current business models, AI techniques have great potential to create circular value (Ellen MacArthur Foundation 2019).
4 AI and Circular Value Creation

AI is one of the key enablers of circular business models; it can enhance and accelerate CE through product design and development processes assisted by machine learning for faster testing and prototyping (Ellen MacArthur Foundation 2019).
AI techniques can integrate real-time and historical data from products and users, which helps to keep products in circulation for a longer period of time through precise price and demand prediction, maintenance services, and smart inventory management. Enyoghasi and Badurdeen (2021) state that assessing demand through AI enables optimized decisions regarding material reusability. The Ellen MacArthur Foundation (2019) emphasizes this point further, affirming that AI supports CE implementation by improving reverse logistics and the associated decision-making processes in sorting and disassembling, and by utilizing both historical and real-time data to predict demand and thus optimize inventory and production management. Focusing on the CE micro level, which involves companies, products, and consumers, the role of AI can be analyzed based on the regenerate, share, optimize, loop, virtualize, and exchange (ReSOLVE) framework (Jabbour et al. 2018) presented in Table 1.

Table 1 Capabilities of AI based on the ReSOLVE framework in CE principles (developed by authors)

Framework component | Circular value creation initiatives | AI techniques
Regenerate | Return recovered materials and resources | AI-powered robots, AI-based software
Share | Share assets, reuse secondhand products, maintenance services | AI-based platforms, machine learning
Optimize | Digitally connected supply chain, digital product passport | AI-based platforms, online databases
Loop | Remanufacturing products or components, recycling and upcycling products | Robotics, machine vision, machine learning
Virtualize | Dematerialize indirectly (e.g., online shops) | Intelligent automation platforms
Exchange | Intelligent design and prototyping | Machine learning and algorithms

Recent models of “human-only,” “human-machine,” and “machine-only” decision-making are shifting the way organizations learn and evolve innovation, based on a wide range of AI applications (Daugherty and Wilson 2018). According to Brem et al. (2021), AI has two main roles in transforming innovation: originator and facilitator. As an originator, AI shapes the creation of products and processes and is the starting point for innovation, where the product portfolio is based on software and company maturity is emerging. As a facilitator, AI augments existing products and processes and is the starting point for transformation, where the product portfolio is based on hardware and company maturity is established. AI-driven business models can play a significant role in achieving the SDGs through their ability to make alternative ownership options a reality (Di Vaio et al. 2020), focusing on providing access to intangible outcome-based services, or to combinations of tangible products and intangible services such as use- or result-oriented business models, rather than product-oriented models. Most of these offerings can be categorized into the product-oriented (PO), use-oriented (UO), and
result-oriented (RO) business models, which are considered the main types of product/service business models (Reim et al. 2015). In these scenarios, the main focus of PO is to sell a tangible product with additional services. The focus also remains on consumer ownership and the consumption of resources, much as in the past. As a consequence, PO models are considered to contribute less to achieving the SDGs. Alternatively, in UO and RO, we see a shift towards more sustainable consumption through alternative business models that focus on the “stewardship” of tangible and intangible product-services. Specifically, in a UO model, while the product is still central to the offering, rather than being sold to the customer, access to and usage of the product are guaranteed by the provider for a specific period of time and paid for by the user on a subscription basis. Further, in RO, the customer pays for the result rather than for a product, and the supplier is fully responsible for that result. Table 2 provides a comparison of the three business models in terms of value creation, value delivery, and value capturing.

Table 2 Comparison of AI-facilitated SDG high-level business model categories in terms of value creation, value delivery, and value capturing (Source: Reim et al. 2015)

 | Product-oriented | Use-oriented | Result-oriented
Value creation | Provider is responsible for agreed services | Provider takes responsibility for the usability of products and services | Provider takes responsibility for delivering results
Value delivery | Provider is responsible for selling and providing services and products | Provider guarantees the usability of the physical product along with services | Provider is responsible for delivering results
Value capturing | Customer pays for the physical product and for the performed services | Customer can make continuous payments over time (e.g., leasing) | Customer payments are based on outcome units, that is, they pay for the result

To illustrate the role of AI in the different business models and their value provision process, we selected three cases, each representing one business model type. For the PO model, we selected Unspun, a textile industry company that utilizes 3D body scanning to produce sustainable tailor-made jeans for customers. The example case for the UO model is Naava, a company that offers air-purifying design plant walls for office spaces. As an RO model, we selected Augury, which provides machine health solutions for industry. The value process of the cases is analyzed in Table 3, and the role of AI is highlighted in each phase.

Table 3 Case example analysis based on the different roles of AI in circular value creation (developed by authors)

Value creation
• Product-oriented (Unspun): Aims to reduce waste by offering sustainable jeans that are tailored individually for a perfect fit. The role of AI: data collection; the customer’s body is scanned by AI-powered 3D scanners.
• Use-oriented (Naava): Offers smart walls of plants that constantly purify indoor air and provide a constant stream of clean air. The role of AI: a remote condition management system that keeps the plants in optimum condition by connecting the sensor data on airflow, water, and light and, e.g., adapting to weather data.
• Result-oriented (Augury): Offers an asset management and maintenance service for industrial machines. The role of AI: the system uses sensors and AI to monitor and detect mechanical errors in machines and to provide preventive maintenance.

Value delivery
• Product-oriented (Unspun): The body scan data is turned into digital jeans, which are then manufactured by robotics-powered sewing machines. The role of AI: AI algorithms are used to digitally design the jeans around the customer’s 3D avatar.
• Use-oriented (Naava): An operating system for remote management and a full maintenance service. The role of AI: full operation and maintenance with an automated AI-operated system.
• Result-oriented (Augury): The sensors collect data from the machines; the data is stored in the cloud and analyzed for problems to prevent failures and manage maintenance. The role of AI: the sensors measure vibrations, magnetism, and temperatures in real time; AI analyzes abnormalities in the data and provides instructions and prioritized action points.

Value capture
• Product-oriented (Unspun): Order-delivery model; the technology enables a zero-inventory model and reduces waste. The role of AI: AI enables the made-to-order model.
• Use-oriented (Naava): Monthly service fee. The role of AI: optimizing the service and maintenance operating costs for the provider.
• Result-oriented (Augury): End-to-end service with a warranty for broken machines. The role of AI: provides accurate and reliable information on machine health.

AI is utilized for circular value creation in different ways in each case. In the PO case Unspun, the role of AI in value creation is to scan and collect accurate data on the customer’s body to enable the tailor-made jeans; there is also an option for remote measurement. In both the UO and RO cases, the main role of AI lies in remote condition monitoring. In the UO case Naava, AI keeps the plant wall operating optimally and automatically, so that the customer does not have to. In the RO case Augury, AI
“listens” to the machine data to detect anomalies and to preventatively inform the customer of potential failures. The aim is to eliminate downtime and to optimize production. Moreover, the role of AI in the value delivery of the cases varies. In the PO case Unspun, the role of AI is to design a 3D avatar from the customer’s data points and to utilize this in fitting the jeans as well as in optimizing production. In the UO case Naava, AI has the most central role among these cases: it automatically handles the remote monitoring and remote operation of the plant wall and notifies the maintenance teams. In Augury’s RO model, AI-based remote monitoring and preventative maintenance are the key roles, but AI does not handle the operation itself. The value capture model, and the role of AI in it, differs most among the cases. In Unspun’s PO model, the value capture model is a basic order-delivery model,
for which AI is the key enabler, whereas in Naava’s UO model, the value capture model is a product-as-a-service leasing model (the wall can also be purchased) with a monthly fee. The major role of AI in this case relates to optimizing the provider’s maintenance costs. In the RO case Augury, the role of AI is greatest in value capture. Unplanned downtime is very costly in manufacturing, and companies are willing to pay to prevent it. Augury’s AI technology is very accurate and constantly learns from all of its cases, which optimizes the prediction and prevention of downtime. The company also provides insurance for customers in case a machine breaks down even though they use the service.
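To illustrate the kind of condition monitoring such a result-oriented service relies on, the sketch below flags vibration readings that drift far from their recent baseline. It is a simplified, assumed example rather than Augury’s proprietary algorithm; the window size, threshold, and data are illustrative.

```python
# Minimal sketch of vibration anomaly detection for preventive maintenance:
# flag readings that deviate strongly from the rolling baseline. Illustrative only.
import numpy as np

def rolling_zscore_alerts(readings, window=100, threshold=4.0):
    """Return indices of readings that deviate strongly from the recent rolling baseline."""
    readings = np.asarray(readings, dtype=float)
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            alerts.append(i)  # reading that may indicate a developing fault
    return alerts

# Hypothetical usage: stable vibration amplitudes followed by a late spike.
signal = np.concatenate([np.random.normal(1.0, 0.05, 500), [1.8]])
print(rolling_zscore_alerts(signal))
```

Commercial systems combine many sensor channels and learned fault signatures, but the value proposition rests on the same step: turning raw sensor streams into prioritized, preventive action points before a failure occurs.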
5 Discussion

Technologies like AI and their supporting systems facilitate the transition towards more sustainability-oriented business models by bringing together suppliers and demanders of goods and services, as well as environmental and societal factors, which challenges traditional business model thinking (Alstyne et al. 2016). Furthermore, business models in CE and the sharing economy differ from traditional and current business thinking. Such trends aim to change existing business models towards new ways of producing, transporting, consuming, and reusing materials, components, and products/services. Smarter business models will enable higher efficiency of resource consumption as well as the customization of products/services in ways that can improve the offering to customers while reducing their environmental footprint and positively influencing the behavior of network partners (Bocken et al. 2014b; Jørgensen et al. 2018). This can result in positive sustainability effects through enhanced product usage and the replacement of products with newer, more efficient, and more innovative products and materials (Sundin and Bras 2005). Therefore, the combination of such product and service solutions will lead to unified product-service offerings that have economic, social, and environmental effects. This is especially important as digital platform ecosystems and users are moving from the principle of ownership to stewardship, which is increasingly met by intangible services rather than tangible products as in the past (Reim et al. 2015). In practice, the shift to these business models enabled by AI techniques can contribute to product life extension through service, remote, and predictive maintenance and repair, which have the potential to reduce the environmental impact of a product’s life cycle. In addition, it has the potential to slow resource loops by extending the value chain of products and materials over longer periods and to recover raw materials after the lifetime of the products for their reuse (Bocken and Short 2016), through the development of take-back systems, refurbishment, design for circularity, and recycling (Kristensen and Remmen 2019).
Building upon the theme of digital enhancement, AI techniques (Breidbach et al. 2014; Li and Found 2017; Storbacka et al. 2016) that influence the development of a product bring together a network of interconnected actors and objects that work in coopetition to create and capture mutual value (Akaka and Vargo 2015; Vargo and
Lusch 2004). This raises the key challenge of conceptualizing and capturing these interactions and engagements across numerous technological contexts (Breidbach and Brodie 2017; Li and Found 2017). Emergent technologies like AI and connected technologies, including IoT and smart sensors, can facilitate this through their ability to automatically and autonomously collect, analyze, interpret, and integrate elements from the physical world and computer- and internet-based systems into their offerings. This can result in improvements in efficiency, accuracy, and economic benefits for providers of products and services and for their consumers, through both parties’ ability to provide and collect data anytime, anywhere, on anything. It helps providers to continuously improve product design, including enhancing durability, and enables the components of a tangible product to remain in use longer, leading to a higher return on investment and enhancing user experience and engagement. AI supported by IoT technologies and smart sensors can enhance this further through its ability to allow digital platform ecosystems to monitor a product’s component condition, location, and status, which supports product sharing between multiple users. In turn, these outcomes can be used to improve recovery strategies such as the remanufacturing, reuse, and recycling of physical items (Alcayaga et al. 2019), and where intangible services can provide a viable alternative to tangible products. This enables providers and their complementor/supply networks to make precise estimations of the physical elements of a product’s useful life cycle, supports decisions on the optimal remanufacturing time of a certain product, and can improve the profitability of remanufacturing activities (Ingemarsdotter et al. 2020). This is achieved through the ability to make better assessments of a product’s condition and to take preventative actions to extend its life cycle.
6 Conclusion

In this chapter we highlighted the important role of AI in circular value creation for the SDGs. Adopting CE requires companies to initiate and develop business models based on circular value creation principles such as remanufacture, dematerialization, sharing, and servitization. This can be best enabled and achieved by utilizing disruptive technologies such as AI. Different applications of AI can be utilized in different circular value creation processes and can accelerate the transition towards a successful CE. A critical enabler in these processes is business model innovation built upon the principles of CE and the SDGs, which focus on reuse, repair, and remanufacturing of products (Mont et al. 2006), based upon collaboration/coopetition with customers, partners/competitors, and suppliers using AI technologies. This can result in different levels of environmental advantages through new products (Melander and Pazirandeh 2019), including higher energy efficiency, lower material consumption, an increase in pure materials, lower fuel consumption, prevention of toxic components and materials, and the use of digitalization in predicting enhanced usage and integration of more environmentally friendly materials. Repair,
maintenance, reuse, and remanufacturing of products are the ways through which companies are able to enhance resource utilization and to prolong the lifetime of a product (Mont et al. 2006; Östlin et al. 2009). For successful remanufacturing, access to products that can be remanufactured is important, for example, through take-back agreements with customers (Östlin et al. 2009). These kinds of agreements can enable recycling, where companies are responsible for the end of life of products (Smith and Crotty 2008). In addition, collaboration with different partners within a company’s ecosystem can lead to saving raw material, improving waste disposal, limiting pollution, and reducing energy consumption, as well as packaging and transportation (Manzini and Vezzoli 2003). Since data plays the core role in circular value creation, companies that utilize digital technologies such as AI and IoT to integrate data into all principles of CE can build more efficient business models and consequently achieve higher efficiency in resource and material usage at lower cost. Therefore, one of the future research focuses recommended by this chapter is assessing the role of the Artificial Intelligence of Things (AIoT) in circular value creation for the SDGs. IoT sensors can collect and transfer precise data on product status and conditions, which can be further processed and analyzed faster with AI. In an IoT-enhanced environment, AI can close the loops of products and materials, lower energy and resource usage, and therefore enhance circular value creation. However, despite all the potential offered by digital technologies in creating circular values and advancing the SDGs, it needs to be stated that such technologies can simultaneously lead to unsustainable practices. As these practices are significantly technology-driven and revenue-driven, following linear production and consumption levels, the introduction of smart technologies and automation may lead to increased consumption behavior, energy use, and environmental impacts as well. Industry 4.0 technologies can have a large environmental footprint and high energy intensity. Therefore, the environmental and societal impacts of the digital technologies themselves must be carefully assessed, and circular principles must be embedded in digital products, as a condition of their deployment in the economy, to ensure a global net positive balance. Digitalization can use the circular economy as a guiding principle, a target to reach a sustainable endpoint. On the other hand, digital technologies are dependent on the availability of critical raw materials. Therefore, the challenge will be to develop the digital circular economy in such a way that digital technologies compensate for the need for the materials they are made of. There is a strong demand to introduce principles of dematerialization, lifetime extension, and recycling into the digital systems that build the circular economy.
References

Akaka, Melissa Archpru, and Stephen L. Vargo. 2015. Extending the Context of Service: From Encounters to Ecosystems. Journal of Services Marketing 29: 463–471. https://doi.org/10.1108/JSM-03-2015-0126.
Alcayaga, Andres, Melanie Wiener, and Erik G. Hansen. 2019. Towards a Framework of Smart- Circular Systems: An Integrative Literature Review. Journal of Cleaner Production 221: 622–634. https://doi.org/10.1016/j.jclepro.2019.02.085. Algoretail.co. 2021. Algoretail. https://www.algoretail.co.il/. Alstyne, Marshall W., G.G. Parker, and S.P. Choudary. 2016. Pipelines, Platforms, and the New Rules of Strategy. Harvard Business Review 94 (4): 54–62. https://hbr.org/2016/04/ pipelines-platforms-and-the-new-rules-of-strategy. Antikainen, Maria, Teuvo Uusitalo, and Päivi Kivikytö-Reponen. 2018. Digitalisation as an Enabler of Circular Economy. Procedia CIRP 73: 45–49. https://doi.org/10.1016/j.procir.2018.04.027. Balasubramanian, Natarajan, Yang Ye, and Xu. Mingtao. 2020. Substituting Human Decision- Making with Machine Learning: Implications for Organizational Learning. Academy of Management Review. https://doi.org/10.5465/amr.2019.0470. Berg, Holger, Kévin Le Blévennec, Eivind Kristoffersen, Bernard Strée, Arnaud Witomski, Nicole Stein, Ton Bastein, Stephan Ramesohl, and Karl Vrancken. 2020. Digital Circular Economy: A Cornerstone of a Sustainable European Industry Transformation. ECERA European Circular Economy Research Alliance. Belgium Enyoghasi, Christian, and Badurdeen Fazleena. 2021. Industry 4.0 for sustainable manufacturing. Opportunities at the product, process, and system levels. Resources, Conservation and Recycling 166. https://doi.org/10.1016/j.resconrec.2020.105362. Bocken, N.M.P., and S.W. Short. 2016. Towards a Sufficiency-Driven Business Model: Experiences and Opportunities. Environmental Innovation and Societal Transitions 18 (March): 41–61. https://doi.org/10.1016/j.eist.2015.07.010. Bocken, N.M.P., M. Farracho, R. Bosworth, and R. Kemp. 2014a. The Front-End of Eco-Innovation for Eco-Innovative Small and Medium Sized Companies. Journal of Engineering and Technology Management – JET-M 31 (1): 43–57. https://doi.org/10.1016/j.jengtecman.2013.10.004. Bocken, N.M.P., S.W. Short, P. Rana, and S. Evans. 2014b. A Literature and Practice Review to Develop Sustainable Business Model Archetypes. Journal of Cleaner Production 65 (September): 42–56. https://doi.org/10.1016/j.jclepro.2013.11.039. Bocken, Nancy M.P., Ingrid de Pauw, Conny Bakker, and Bram van der Grinten. 2016. Product Design and Business Model Strategies for a Circular Economy. Journal of Industrial and Production Engineering 33 (5): 308–320. https://doi.org/10.1080/21681015.2016.1172124. Bocken, N.M.P., C.S.C. Schuit, and C. Kraaijenhagen. 2018. Experimenting with a Circular Business Model: Lessons from Eight Cases. Environmental Innovation and Societal Transitions 28 (February): 79–95. https://doi.org/10.1016/j.eist.2018.02.001. Breidbach, Christoph F., and Roderick J. Brodie. 2017. Engagement Platforms in the Sharing Economy: Conceptual Foundations and Research Directions. Journal of Service Theory and Practice 27 (4): 761–777. https://doi.org/10.1108/JSTP-04-2016-0071. Breidbach, Christoph F., Roderick Brodie, and Linda Hollebeek. 2014. Beyond Virtuality: From Engagement Platforms to Engagement Ecosystems. Managing Service Quality 24 (6): 592–611. https://doi.org/10.1108/MSQ-08-2013-0158. Brem, Alexander, Ferran Giones, and Marcel Werle. 2021. The AI Digital Revolution in Innovation: A Conceptual Framework of Artificial Intelligence Technologies for the Management of Innnovation. IEEE Transactions on Engineering Management: 1–7. https://doi.org/10.1109/ TEM.2021.3109983. 
Bressanelli, Gianmarco, Federico Adrodegari, Marco Perona, and Nicola Saccani. 2018. The Role of Digital Technologies to Overcome Circular Economy Challenges in PSS Business Models: An Exploratory Case Study. Procedia CIRP 73: 216–221. https://doi.org/10.1016/j. procir.2018.03.322. Centobelli, Piera, Roberto Cerchione, Davide Chiaroni, Pasquale Del Vecchio, and Andrea Urbinati. 2020. Designing Business Models in Circular Economy: A Systematic Literature Review and Research Agenda. Business Strategy and the Environment 29 (4): 1734–1749. https://doi.org/10.1002/bse.2466.
Commission on Environment and Development. 1987. Report of the World Commission on Environment and Development: Our Common Future. Daugherty, P.R., and H.J. Wilson. 2018. Collaborative Intelligence: Humans and AI Are Joining Forces. Harvard Business Review 96 (4): 114–123. Di Vaio, Assunta, Palladino Rosa, Rohail Hassan, and Escobar Octavio. 2020. Artificial intelligence and business models in the sustainable development goals perspective: A systematic literature review. Journal of Business Research 121: 283–314. https://doi.org/10.1016/j. jbusres.2020.08.019. Ellen MacArthur Foundation. 2013. Towards the Circular Economy Volume 1. Ellen MacArthur Foundation. https://doi.org/10.1162/108819806775545321. Ellen MacArthur. 2019. Artificial Intelligence and the Circular Economy – AI as a Tool to Accelerate the Transition. London. http://www.ellenmacarthurfoundation.org/publications. EM Foundation. 2015. Towards a Circular Economy: Business Rationale for an Accelerated Transition. Greener Management International 20. https://doi.org/2012-04-03. Geissdoerfer, Martin, Paulo Savaget, Nancy M.P. Bocken, and Erik Jan Hultink. 2017. The Circular Economy – A New Sustainability Paradigm? Journal of Cleaner Production 143: 757–768. https://doi.org/10.1016/j.jclepro.2016.12.048. Ghisellini, Patrizia, Catia Cialani, and Sergio Ulgiati. 2016. A Review on Circular Economy: The Expected Transition to a Balanced Interplay of Environmental and Economic Systems. Journal of Cleaner Production 114 (February): 11–32. https://doi.org/10.1016/J. JCLEPRO.2015.09.007. Ghoreishi, Malahat, and Ari Happonen. 2020a. New Promises AI Brings into Circular Economy Accelerated Product Design: A Review on Supporting Literature. E3S Web of Conferences 158. 10.1051/e3sconf/202015806002. Ghoreishi, Malahat., and Happonen, Ari. 2020b. Key Enablers for Deploying Artificial Intelligence for Circular Economy Embracing Sustainable Product Design: Three Case Studies. In 13th International Engineering Research Conference (13TH EURECA 2019), 2233:050008. AIP Publishing. https://doi.org/10.1063/5.0001339 Hapuwatte, Buddhika M., and I.S. Jawahir. 2019. A Total Life Cycle Approach for Developing Predictive Design Methodologies to Optimize Product Performance. Procedia Manufacturing 33: 11–18. https://doi.org/10.1016/j.promfg.2019.04.003. Hapuwatte, B.M., F. Badurdeen, and I.S. Jawahir. 2017. Metrics-Based Integrated Predictive Performance Models for Optimized Sustainable Product Design. Smart Innovation, Systems and Technologies 68: 25–34. https://doi.org/10.1007/978-3-319-57078-5_3. Hysa, Eglantina, Alba Kruja, Naqeeb Ur Rehman, and Rafael Laurenti. 2020. Circular Economy Innovation and Environmental Sustainability Impact on Economic Growth: An Integrated Model for Sustainable Development. Sustainability 12 (12): 4831. https://doi.org/10.3390/ su12124831. Ingemarsdotter, Emilia, Ella Jamsin, and Ruud Balkenende. 2020. Opportunities and Challenges in IoT-Enabled Circular Business Model Implementation – A Case Study. Resources, Conservation and Recycling 162 (November): 105047. https://doi.org/10.1016/j.resconrec.2020.105047. Jabbour, Ana Beatriz, Charbel Jose Chiappetta Jabbour, Moacir Filho, and David Roubaud. 2018. Industry 4.0 and the Circular Economy: A Proposed Research Agenda and Original Roadmap for Sustainable Operations. Annals of Operations Research 270 (1–2): 273–286. https://doi. org/10.1007/s10479-018-2772-8. Jørgensen, Sveinung, Lars Jacob, and Tynes Pedersen. 2018. 
Restart Sustainable Business Model Innovation. Palgrave Studies in Sustainable Business In Association with Future Earth. Springer International Publishing. Kok, J.N., E.J.W. Boers, W.A. Kosters, P. van der Putten, and M. Poel. 2009. Artificial Intelligence: Definition, Trends, Techniques, and Cases. Encyclopedia of Life Support Systems. Kristensen, H.S., and A. Remmen. 2019. A Framework for Sustainable Value Proposition in Product-Service Systems. Journal of Cleaner Production, Elsevier Ltd 223: 25–35.
Kristoffersen, Eivind, Fenna Blomsma, Patrick Mikalef, and Jingyue Li. 2020. The Smart Circular Economy: A Digital-Enabled Circular Strategies Framework for Manufacturing Companies. Journal of Business Research 120 (November): 241–261. https://doi.org/10.1016/j. jbusres.2020.07.044. Lacy, Peter, Jessica Long, and Wesley Spindler. 2020. The Circular Economy Handbook. Washington, DC. Springer. https://doi.org/10.1057/978-1-349-95968-6. Lee, DonHee. 2018. Strategies for Technology-Driven Service Encounters for Patient Experience Satisfaction in Hospitals. Technological Forecasting and Social Change 137 (December): 118–127. https://doi.org/10.1016/j.techfore.2018.06.050. Lewandowski, Mateusz. 2016. Designing the Business Models for Circular Economy—Towards the Conceptual Framework. Sustainability. 8(1). https://doi.org/10.3390/su8010043 Li, Ai Qiang, and Pauline Found. 2017. Towards Sustainability: PSS, Digital Technology and Value Co-Creation. Procedia CIRP 64 (January): 79–84. https://doi.org/10.1016/J. PROCIR.2017.05.002. Lieder, Michael, and Amir Rashid. 2016. Towards Circular Economy Implementation: A Comprehensive Review in Context of Manufacturing Industry. Journal of Cleaner Production 115: 36–51. https://doi.org/10.1016/j.jclepro.2015.12.042. Lin, Kuo Yi. 2018. User Experience-Based Product Design for Smart Production to Empower Industry 4.0 in the Glass Recycling Circular Economy. Computers and Industrial Engineering 125 (June): 729–738. https://doi.org/10.1016/j.cie.2018.06.023. Linder, Marcus, and Mats Williander. 2017. Circular Business Model Innovation: Inherent Uncertainties. Business Strategy and the Environment 26 (2): 182–196. https://doi.org/10.1002/ bse.1906. Manzini, E., and C. Vezzoli. 2003. A Strategic Design Approach to Develop Sustainable Product Service Systems: Examples Taken from the ‘Environmentally Friendly Innovation’ Italian Prize. Journal of Cleaner Production 11: 851–857. https://doi.org/10.1016/S0959-6526(02)00153-1. Melander, Lisa, and Ala Pazirandeh. 2019. Collaboration Beyond the Supply Network for Green Innovation: Insight from 11 Cases. Supply Chain Management: An International Journal, SCM-08-2018-0285. https://doi.org/10.1108/SCM-08-2018-0285. Mishra, Jyoti L., Peter G. Hopkinson, and Gin Tidridge. 2018. Value Creation from Circular Economy-Led Closed Loop Supply Chains: A Case Study of Fast-Moving Consumer Goods. Production Planning and Control 29 (6): 509–521. https://doi.org/10.1080/09537287.201 8.1449245. Mont, Oksana, Carl Dalhammar, and Nicholas Jacobsson. 2006. A New Business Model for Baby Prams Based on Leasing and Product Remanufacturing. Journal of Cleaner Production 14 (17): 1509–1518. https://doi.org/10.1016/j.jclepro.2006.01.024. Mühlroth, Christian, and Michael Grottke. 2020. Artificial Intelligence in Innovation: How to Spot Emerging Trends and Technologies. IEEE Transactions on Engineering Management 69(2): 1–18. https://doi.org/10.1109/TEM.2020.2989214. Nascimento, D.L.M., V. Alencastro, O.L.G. Quelhas, R.G.G. Caiado, J.A. Garza-Reyes, L.R. Lona, and G. Tortorella. 2019. Exploring Industry 4.0 Technologies to Enable Circular Economy Practices in a Manufacturing Context: A Business Model Proposal. Journal of Manufacturing Technology Management 30 (3): 607–627. https://doi.org/10.1108/JMTM-03-2018-0071. Östlin, Johan, Erik Sundin, and Mats Björkman. 2009. Product Life-Cycle Implications for Remanufacturing Strategies. Journal of Cleaner Production 17 (11): 999–1009. https://doi. org/10.1016/J.JCLEPRO.2009.02.021. 
Ramadoss, Tamil Selvan, Hilaal Alam, and Prof Ramakrishna Seeram. 2018. Artificial Intelligence and Internet of Things Enabled Circular Economy. The International Journal of Engineering and Science (IJES) 7: 55–63. https://doi.org/10.9790/1813-0709035563. Ramani, Karthik, Devarajan Ramanujan, William Z. Bernstein, Fu Zhao, John Sutherland, Carol Handwerker, Jun Ki Choi, Harrison Kim, and Deborah Thurston. 2010. Integrated Sustainable Life Cycle Design: A Review. Journal of Mechanical Design, Transaction of the ASME 132 (9): 0910041–0910415. https://doi.org/10.1115/1.4002308/476420.
Reim, Wiebke, Vinit Parida, and Daniel Örtqvist. 2015. Product-Service Systems (PSS) Business Models and Tactics – A Systematic Literature Review. Journal of Cleaner Production 97 (July 2014): 61–75. https://doi.org/10.1016/j.jclepro.2014.07.003. Schenkel, Maren, Marjolein C.J. Caniëls, Harold Krikke, and Erwin Van Der Laan. 2015. Understanding Value Creation in Closed Loop Supply Chains – Past Findings and Future Directions. Journal of Manufacturing Systems 37: 729–745. https://doi.org/10.1016/j. jmsy.2015.04.009. Schroeder, Patrick, Kartika Anggraeni, and Uwe Weber. 2019. The Relevance of Circular Economy Practices to the Sustainable Development Goals. Journal of Industrial Ecology 23 (1): 77–95. https://doi.org/10.1111/jiec.12732. Smarter Sorting. 2021. Smarter Sorting. https://www.smartersorting.com/?hsLang=en. Sitra. 2021. The Winning Recipe for a Circular Economy-What Can Inspiring Examples Show Us? www.sitra.fi. Smith, Mark, and Jo Crotty. 2008. Environmental Regulation and Innovation Driving Ecological Design in the UK Automotive Industry. Business Strategy and the Environment 17 (6): 341–349. https://doi.org/10.1002/BSE.550. Storbacka, Kaj, Roderick J. Brodie, Tilo Böhmann, Paul P. Maglio, and Suvi Nenonen. 2016. Actor Engagement as a Microfoundation for Value Co-Creation. Journal of Business Research 69 (8): 3008–3017. https://doi.org/10.1016/j.jbusres.2016.02.034. Sundin, Erik, and Bert Bras. 2005. Making Functional Sales Environmentally and Economically Beneficial Through Product Remanufacturing. Journal of Cleaner Production 13 (9): 913–925. https://doi.org/10.1016/J.JCLEPRO.2004.04.006. Townsend, David M, and Richard A Hunt. 2019. Entrepreneurial Action, Creativity, & Judgment in the Age of Artificial Intelligence. https://doi.org/10.1016/j.jbvi.2019.e00126. Tukker, Arnold. 2015. Product Services for a Resource-Efficient and Circular Economy – A Review. Journal of Cleaner Production 97: 76–91. https://doi.org/10.1016/j. jclepro.2013.11.049. Ünal, Enes, Andrea Urbinati, and Davide Chiaroni. 2019. Managerial Practices for Designing Circular Economy Business Models: The Case of an Italian SME in the Office Supply Industry. Journal of Manufacturing Technology Management 30 (3): 561–589. https://doi.org/10.1108/ JMTM-02-2018-0061. United Nations. 2015. Transforming Our World: The 2030 Agenda for Sustainable Development. https://sustainabledevelopment.un.org/content/documents/21252030Agenda for Sustainable Development web.pdf. Urbinati, Andrea, David Chiaroni, and Vittorio Chiesa. 2017. Towards a New Taxonomy of Circular Economy Business Models. Journal of Cleaner Production 168: 487–498. Van den Berg, M.R., and C.A. Bakker. 2015. A Product Design Framework for a Circular Economy. PLATE (Product Lifetimes and the Environment) Conference Proceedings, no. June: 365–379. https://www.researchgate.net/profile/Giuseppe_Salvia/publication/303476076_Product_Lifetimes_And_The_Environment_Conference_Proceedings/ links/57447ba808aea45ee85306ca.pdf#page=373. Vargo, Stephen L., and Robert F. Lusch. 2004. Evolving to a New Dominant Logic for Marketing. Journal of Marketing 68 (1): 1–17. https://doi.org/10.1509/JMKG.68.1.1.24036. Waheed, M.F., and A.M. Khalid. 2019. Impact of Emerging Technologies for Sustainable Fashion, Textile and Design. Advances in Intelligent Systems and Computing 903. https://doi. org/10.1007/978-3-030-11051-2_104. Whicher, Anna, Christopher Harris, Katie Beverley, and Piotr Swiatek. 2018. Design for Circular Economy: Developing an Action Plan for Scotland. 
Journal of Cleaner Production 172 (December 2015): 3237–3248. 10.1016/j.jclepro.2017.11.009. Yoo, Youngjin, Ola Henfridsson, and Kalle Lyytinen. 2010. The New Organizing Logic of Digital Innovation: An Agenda for Information Systems Research. Information Systems Research 21 (4): 724–735. https://doi.org/10.1287/isre.1100.0322.
Computer-Aided Corporate Sense-Making and Prioritization for SDGs

Innar Liiv, Erkki Karo, and Ralf-Martin Soe
Abstract It has become a recurring necessity and exercise for corporations to assess the alignment of their corporate strategy and goals with the UN Sustainable Development Goals (SDGs). Such an assessment is a highly complex task, full of inconsistencies and subjective opinions of internal and external stakeholders, which eventually influences the formal processes of strategy making and strategic choices. This chapter presents a computer-aided method for corporate sense-making and prioritization of SDGs that goes beyond the current state-of-the-art SDG assessment tools and methods. Novel technology and data analytics can be used to support the assessment process and to find a consensus between different opinions. We present a version of Thomas Saaty’s Analytical Hierarchy Process, custom-tailored for SDG assessment, to structure and organize the decision process and to find and eliminate inconsistencies in group decision-making. We present and summarize the experiences and lessons learned from eight computer-aided corporate SDG sense-making and prioritization exercises carried out in Estonia and Finland.

Keywords SDGs · Decision science · Impact assessment · Analytical hierarchy process
I. Liiv (*) School of Information Technology, Tallinn University of Technology, Tallinn, Estonia e-mail: [email protected] E. Karo Ragnar Nurkse Department of Innovation and Governance, Tallinn University of Technology, Tallinn, Estonia e-mail: [email protected] R.-M. Soe FinEst Centre for Smart Cities, Tallinn University of Technology, Tallinn, Estonia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_20
1 Introduction

The SDGs (UN 2015) have effectively become a prominent and strategic umbrella framework for multilateral organizations such as the UN and EU, and they have also been increasingly adopted by public sector organizations at the national and local levels. Furthermore, there tends to be a cumulative acceptance among the research community – the usage of the keyword “SDG” increased ninefold in 2020 vs. 2015 in both Web of Science and Scopus. The SDGs are also commonly used within the third sector, especially in the context of climate change. Therefore, the global strategic orientation toward SDGs has become widespread in government, academia, and civil society. However, the fourth helix in the quadruple helix model, industry, has for a long time been more conservative, or sidelined, in adapting to novel SDG-driven business models. This has especially been the case in investment-heavy and overregulated sectors such as energy and banking, although both sectors have been quickly catching up. Therefore, this chapter is mainly interested in how SDGs are reasoned about and accepted within the corporate sector and how technology and algorithms can support the corporate sense-making and prioritization process for SDGs. Algorithms, in this case, do not act or recommend independently but support, empower, and amplify the cognitive processes of participants. Our approach for using algorithms for achieving SDGs does not operate on a typical macro or supermacro level but on a micro level, ready to be used directly by corporations. The main contribution of this chapter is to present a complex set of decision science methods to help the impact assessment and prioritization process for SDGs, consisting of a customized version of Thomas Saaty’s Analytical Hierarchy Process (Wind and Saaty 1980; Saaty 1988, 2008), automatic consistency measurement of answers, Kemeny-Snell distance measurement between the corporate strategy and chosen initiatives, and its visualization with multidimensional scaling. In addition to presenting the proposed methodology, we validate and summarize the experiences and lessons learned from eight computer-aided corporate SDG sense-making and prioritization exercises carried out in Estonia and Finland. After carrying out the exercise, we also asked participants to reflect upon the experiences and the potential value of the tool for corporate strategic planning and management.
2 Motivation

Corporations in general and SMEs in particular are considered dynamic agents of modern economies: most new ideas are tested and most new jobs are created in SMEs. Hence, SMEs can also play a crucial role as dynamic change agents in the
process of achieving the Sustainable Development Goals, serving as a test bed for new ideas and a launchpad for new industries and specializations. From a strategic management perspective, this requires a more systematic understanding of how specific companies can contribute to SDGs and align their business practices and strategies with SDGs. For many younger and smaller companies, making sense of and navigating the complex and often bureaucratic landscape of SDGs may be a significant challenge. Yet, it has also been established that the new generations (generation Z and beyond) entering the labor force require potential employers to provide a bigger and societally relevant mission or purpose for the organization (Mawhinney and Betts 2020). It has been estimated that SDGs are a 12 trillion USD market opportunity (UN 2019), and indeed most policy initiatives (e.g., the European Green Deal (EC 2019a)) predominantly focus on “crowding in” private sector investments for tackling some of the biggest societal challenges. In the context of the EU, the new EU taxonomy of sustainable activities (EC 2019b) is a prime example of such an initiative and an attempt to use financial and banking regulations to speed up these processes of crowding in. Such a combination of policy initiatives and financial instruments is likely to both create new market opportunities and steer both large and small firms toward a common direction (Mazzucato 2016). Most prioritization in organizations traditionally depends on some form of authority, i.e., priorities and directions are set by owners of firms, by managers who represent classic rational merit-based authority, or in rare cases by charismatic leaders within and outside organizations who provide new paths and dynamics for development (Weber 1978). We argue that the use of computer-aided models allows, especially in organizations with a significant variety of staff and strategic development capacity, for a parallel and less power-based prioritization process outside the traditional corporate strategic and decision-making routines. The processes can be structured and moderated outside these power dynamics, and they can enable (assuming that the models are neutral enough, which is almost never the case) much stronger bottom-up co-creation and co-discovery of priorities and directions that make sense for the entirety of the organization (as opposed to the narrow lenses of the power/authority holders). Timo Honkela in his recent book (Honkela 2017) presented an interesting idea that artificial intelligence can help tackle and minimize not just misinformation but miscommunication as well. Although computer scientists are often reluctant to call any computer-aided or algorithmic automation “artificial intelligence,” it is worthwhile to consider the potential usage of “artificial intelligence (AI) to support and advance the United Nations Sustainable Development Goals (SDGs)” (Oxford Initiative on AIxSDGs 2020) from the aspect of mitigating miscommunication as well. This chapter argues that algorithms can support the process of aligning the corporate strategy and different initiatives to SDGs and prioritizing them. Therefore, there is strong motivation to develop computer-aided methods and tools for corporate sense-making and prioritization for SDGs.
The United Nations Environment Programme Finance Initiative considers the first step toward responsible banking to be the alignment of the “business strategy to be consistent with and contribute to individuals’ needs and society’s goals, as expressed in the Sustainable Development Goals” (UNEP 2019). The principles for responsible banking are focused on a specific sector, but clearly such an alignment is an essential step in any business sector. However, the market lacks tools to support the process of alignment and prioritization. A notable exception in this market is the SDG Impact Assessment Tool by the Gothenburg Center (see Fig. 1), which supports and approaches the impact assessment in three steps:
(a) sorting the SDGs according to their relevance (relevant, not relevant, I don’t know; the latter choice empowering and emphasizing the learning aspect of impact assessment tools)
(b) assessing the kind of impact of each SDG (direct positive, indirect positive, no impact, indirect negative, direct negative)
(c) after the assessment of each relevant SDG, reflecting on the strategic choices for prioritizing actions ahead.
However, finding just the list of relevant SDGs is not enough for resource allocation planning, especially in the context of budget restrictions. Therefore, the prioritization of SDGs is important, as is finding numeric proportions for the respective priorities. Developing a methodology to fulfill exactly those requirements and validating such an approach with actual corporations is the focus and main contribution of this chapter. Novel technology and data analytics can be used to support the assessment process and to find a consensus between different opinions.
Fig. 1 SDG impact assessment tool by the Gothenburg Center
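To make the structure of such an assessment concrete, the following minimal sketch (a Python illustration, not part of the Gothenburg tool itself) shows one way the outcome of steps (a) and (b) could be represented as data; the SDG labels and impact assignments are purely illustrative assumptions.

```python
from enum import Enum

class Impact(Enum):
    DIRECT_POSITIVE = "direct positive"
    INDIRECT_POSITIVE = "indirect positive"
    NO_IMPACT = "no impact"
    INDIRECT_NEGATIVE = "indirect negative"
    DIRECT_NEGATIVE = "direct negative"

# Hypothetical outcome of steps (a) and (b): relevant SDGs mapped to the kind of impact.
assessment = {
    "SDG7": Impact.DIRECT_POSITIVE,
    "SDG12": Impact.INDIRECT_POSITIVE,
    "SDG13": Impact.INDIRECT_NEGATIVE,
}

# Step (c) then reflects on priorities, e.g., by flagging SDGs with any negative impact first.
needs_attention = [sdg for sdg, kind in assessment.items() if "negative" in kind.value]
print(needs_attention)  # ['SDG13']
```

As the chapter argues, such a categorical representation alone does not yield the numeric proportions needed for resource allocation, which motivates the AHP-based prioritization developed in the next section.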
3 Proposed Methodology

The Analytic Hierarchy Process (AHP) is a complex decision analysis methodology for structuring, measurement, and synthesis (Forman and Gass 2001). It was originally developed by Thomas L. Saaty as “a way to determine which objective outweighs another, both in near and long terms” (Saaty 1994). The core of the methodology is to elicit judgments that reflect knowledge, feelings, or emotions and to represent those with meaningful and comparable numbers to calculate the priorities of the elements (Saaty 1994). A typical hierarchy for AHP has three levels (Saaty 1988, 2000, 2001, 2008): a goal, alternative solutions, and criteria for evaluating those alternatives. In our context, we will use a two-level hierarchy: the corporate strategy and a (selected subset of) Sustainable Development Goals. In addition to the impact assessment of the corporate strategy, we will use additional independent two-level hierarchies for the impact assessment of individual initiatives and projects that are considered useful or necessary by the company itself on the path toward the overall mission or purpose. In the eight computer-aided corporate SDG sense-making and prioritization exercises carried out in Estonia and Finland, the setup of the corporate strategy plus two initiatives was used. In theory, more initiatives could be analyzed, but the rationale was to minimize respondent burden and response fatigue (Sinickas 2007). The main goal of those experimental impact assessments was to validate the methodology and get feedback about future enhancements for the approach. A step-by-step description of the methodology, together with example data, is presented in this section, without analyzing any one specific participant of the study. In line with the first step of the general AHP surveying procedure, we asked participants to conduct pairwise comparisons between SDGs according to their importance for their corporate strategy and for two initiatives, based on their preference and relevance for the context of the exercise. This introduced the first challenge, since for 17 Sustainable Development Goals there would be (17 · 16)/2 = 136 different pairwise comparisons. It is not realistic to ask a senior leader in the company to give such a number of individual judgments and opinions. Based on Miller’s classical theory on the limits of people’s capacity for processing information (Miller 1956), we were looking for a subset of 7 ± 2 SDGs to be considered more thoroughly. However, the exact number and the way of selecting a smaller number of relevant SDGs were left for the company to decide. Regardless of the final approach chosen by the company for reducing the number of (more) relevant Sustainable Development Goals to be prioritized, finding such a subset was step (1), as shown in Fig. 2. In this example, five SDGs (5, 8, 9, 13, 16) were chosen out of 17. Step (2) was to conduct pairwise comparisons between SDGs according to their importance (see Fig. 3): first, for the corporate strategy and, second, for two initiatives based on their preference and relevance for the context of the exercise. In the case of assessing the corporate strategy, the following question was posed: Which of those two SDGs will the corporate strategy contribute more to or have more
Fig. 2 First step was to reduce the number of SDGs to be prioritized
impact? All pairwise assessments are to be given using the Saaty rating scale presented in Table 1. The data, recording the judgments that reflect respondents’ knowledge, feelings, or emotions about the object currently assessed, can be structured as presented in Table 2. When assessing the corporate strategy and two initiatives, there will be three independent data tables of this kind. If the judgments are not given as a group choice, similar data tables can be stored for multiple respondents to further analyze differences in preferences and priorities. After the pairwise comparison and data collection, the data processing, visualization, and analysis phase starts. The preferences in Table 2 are converted to a matrix format (Table 3), compatible with any software package that implements the Analytical Hierarchy Process (e.g., R software (Cho 2019), Excel, or other spreadsheet software (Goepel 2018)). The rightmost column in Table 3 indicates the numeric priority of the specific SDG, based on the pairwise judgments. Several features of the Analytic Hierarchy Process have a meaningful interpretation in this use case. For example, it is possible to calculate a consistency ratio of the judgments (11.1% in the case of this example), which indicates whether the pairwise judgments are consistent with each other and can even give feedback on a specific judgment that might have been assessed or entered incorrectly, or entered while the respondent was already experiencing response fatigue. It has been discussed over the years (Wind and
Fig. 3 Pairwise comparisons of five SDGs
Saaty 1980; Saaty 1988, 2008) that even if the indicative consistency threshold considered satisfactory is 10%, it very much depends on the specific domain, and a consistency ratio of up to 20% is occasionally (pragmatically) considered reasonably consistent. The results of the prioritization and ranking of SDGs can be presented as a simple visualization (see Fig. 4) that makes the rationale for the numeric measurement of priorities and the resulting ranking evident. Instead of just listing a number of SDGs in corporate documents, this now enables group discussions on a more structured basis and allows the measurement of
Table 1 Saaty rating scale for pairwise comparisons

Rating scale   Definition
1              Importance/contribution to both goals is equal
3              Experience and judgment slightly considers the importance/contribution to one goal to be more relevant
5              Experience and judgment strongly considers the importance/contribution to one goal to be more relevant
7              Importance/contribution favors very strongly one goal over another (clear demonstration of dominance)
9              The evidence favoring one goal over another is of the highest possible order or certainty
2, 4, 6, 8     Intermediate values between two adjacent judgments
Table 2 The result of pairwise comparisons

Choice 1   Choice 2   More impact?   Scale
SDG5       SDG8       SDG8           5
SDG5       SDG9       SDG9           5
SDG5       SDG13      SDG13          3
SDG5       SDG16      SDG5           3
SDG8       SDG9       SDG9           3
SDG8       SDG13      SDG8           5
SDG8       SDG16      SDG8           5
SDG9       SDG13      SDG9           5
SDG9       SDG16      SDG9           5
SDG13      SDG16      SDG13          3
expected and actual budget resource allocations (e.g., are the proportions correct, and if not, is it possible to backtrack to a specific pairwise judgment of preference/importance?). Since we are dealing with prioritizations, preferences, and rankings, the Kemeny-Snell distance can be used, instead of classical data science similarity measures (e.g., Euclidean distance, Hamming distance), to compute the distance between two rankings (Kemeny and Snell 1962; Luo et al. 2002) in order to analyze and visualize the similarity between either the corporate strategy and initiatives or the consensus among respondents. If many objects or multiple respondents are analyzed, the Kemeny-Snell distance can give an analyst additional insight into how far each prioritization is from every other recorded prioritization, and it can even be used to calculate a median or consensus ranking that summarizes or aggregates all other rankings into one that is mathematically most similar to all other opinions. An example of visualizing the Kemeny-Snell distance between the corporate strategy and two initiatives using multidimensional scaling (Torgerson 1952; Kruskal 1978; Cox and Cox 2008) is presented in Fig. 5. It is possible to see from this example which initiative is
Table 3 Results of pairwise comparisons in a matrix format and SDG priorities as a result of AHP calculation

         SDG5   SDG8   SDG9   SDG13   SDG16   Normalized principal eigenvector
SDG5     1      1/5    1/5    1/3     3       7.44%
SDG8     5      1      1/3    5       5       29.87%
SDG9     5      3      1      5       5       46.35%
SDG13    3      1/5    1/5    1       3       11.55%
SDG16    1/3    1/5    1/5    1/3     1       4.80%
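To make the conversion from Table 2 to Table 3 and the subsequent priority calculation concrete, the following minimal Python/NumPy sketch (an illustration, not the R or spreadsheet tooling referenced above) builds the reciprocal comparison matrix from the pairwise judgments, extracts the principal eigenvector as priority weights, and computes the consistency ratio; with the example data it reproduces approximately the priorities in Table 3 and a consistency ratio of roughly 11%.

```python
import numpy as np

sdgs = ["SDG5", "SDG8", "SDG9", "SDG13", "SDG16"]
idx = {s: i for i, s in enumerate(sdgs)}

# Pairwise judgments from Table 2: (choice 1, choice 2, more impact, Saaty scale value).
judgments = [
    ("SDG5", "SDG8", "SDG8", 5), ("SDG5", "SDG9", "SDG9", 5),
    ("SDG5", "SDG13", "SDG13", 3), ("SDG5", "SDG16", "SDG5", 3),
    ("SDG8", "SDG9", "SDG9", 3), ("SDG8", "SDG13", "SDG8", 5),
    ("SDG8", "SDG16", "SDG8", 5), ("SDG9", "SDG13", "SDG9", 5),
    ("SDG9", "SDG16", "SDG9", 5), ("SDG13", "SDG16", "SDG13", 3),
]

# Build the reciprocal comparison matrix (Table 3): A[i, j] = degree to which i dominates j.
n = len(sdgs)
A = np.eye(n)
for a, b, winner, scale in judgments:
    i, j = idx[a], idx[b]
    if winner == a:
        A[i, j], A[j, i] = scale, 1 / scale
    else:
        A[i, j], A[j, i] = 1 / scale, scale

# Priorities: normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1); 1.12 is Saaty's random index for n = 5.
lambda_max = eigvals.real[k]
cr = ((lambda_max - n) / (n - 1)) / 1.12

for s, weight in zip(sdgs, w):
    print(f"{s}: {weight:.2%}")
print(f"Consistency ratio: {cr:.1%}")  # roughly 11%
```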
Fig. 4 Prioritization and ranking of SDGs
closer to the corporate strategy and to plot all prioritizations in one figure to better understand patterns in individual priorities and judgments.
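As an illustration of this distance-based comparison (the three rankings below are hypothetical, not data from any participating company, and the sketch assumes NumPy and scikit-learn are available), the Kemeny-Snell distances and a two-dimensional multidimensional scaling projection, as in Fig. 5, could be computed as follows:

```python
import numpy as np
from itertools import combinations
from sklearn.manifold import MDS

def kemeny_snell(rank_a, rank_b):
    """Kemeny-Snell distance between two rankings given as dicts SDG -> rank (1 = top priority)."""
    d = 0
    for i, j in combinations(sorted(rank_a), 2):
        a = np.sign(rank_a[j] - rank_a[i])  # +1 if i is ranked above j, -1 if below, 0 if tied
        b = np.sign(rank_b[j] - rank_b[i])
        d += abs(a - b)
    return d / 2

# Hypothetical rankings for the corporate strategy and two initiatives.
strategy    = {"SDG5": 4, "SDG8": 2, "SDG9": 1, "SDG13": 3, "SDG16": 5}
initiative1 = {"SDG5": 3, "SDG8": 1, "SDG9": 2, "SDG13": 4, "SDG16": 5}
initiative2 = {"SDG5": 5, "SDG8": 4, "SDG9": 1, "SDG13": 2, "SDG16": 3}

rankings = [strategy, initiative1, initiative2]
labels = ["Strategy", "Initiative 1", "Initiative 2"]
D = np.array([[kemeny_snell(a, b) for b in rankings] for a in rankings])
print("Distance matrix:\n", D)

# Project the pairwise distances onto a plane for plotting.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
for name, (x, y) in zip(labels, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```

Only the relative distances in the resulting plot are meaningful; the absolute orientation of the projected points is arbitrary.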
4 Lessons Learned

The proposed methodology was tested in eight corporate SDG sense-making and prioritization projects in order to validate it and learn from the process. The focus of this chapter is not to present the specific SDGs those companies prioritized but to thematically summarize the main lessons learned from the process. Recurring themes and challenges for future methodological enhancement are grouped into the following subtopics.
Fig. 5 Similarity of priorities between the corporate strategy and two initiatives
4.1 Minimizing Respondent Burden

As discussed in the previous section, using the proposed methodology on all 17 SDGs would be unrealistic because it would require conducting 136 different pairwise comparisons to cover all of the SDGs and 408 pairwise comparisons to assess the corporate strategy and, additionally, two initiatives. The exact number and the way of selecting a smaller number of relevant SDGs were left for the company to decide. Solutions typically followed one of three heuristics: (a) organizing an additional poll within the organization to identify (rank) relevant SDGs, (b) choosing the ones already identified earlier in strategy documents as a seed list and potentially adding a few based on discussions, and (c) identifying a relevant subset on-site during the interview. However, this could potentially be another place where algorithms could be used to enhance the methodology and support the process even further. The hypothesis would be that if algorithms are able to detect inconsistencies in the judgments using AHP, there must be some redundancy, which could be leveraged and optimized by algorithms. It could be an interesting avenue of future research to explore the conceptualization and algorithmic operationalization of saturation (Saunders et al. 2018) for AHP pairwise judgments and how it could be implemented in practice. According to human intuition during the prioritization exercises, the “underlying opinion” behind the pairwise comparisons became evident before even half of the pairwise comparisons were completed. If algorithms could help to optimize and predict the minimum necessary number of pairwise comparisons, it may be possible to eliminate the preliminary step of reducing the SDGs to a subset of the most relevant ones, without (proportionally) increasing the respondent burden.
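The quadratic growth that drives this respondent burden is easy to verify; the short Python sketch below (an illustration, not part of the study’s tooling) reproduces the 136 and 408 comparison counts mentioned above and shows how much a five-SDG subset reduces them.

```python
from math import comb

# Number of pairwise comparisons per assessed object (strategy or initiative) is C(n, 2).
for n in (17, 9, 5):
    per_object = comb(n, 2)
    total = 3 * per_object  # corporate strategy plus two initiatives
    print(f"{n} SDGs: {per_object} comparisons per object, {total} in total")
# 17 SDGs: 136 per object, 408 in total; 5 SDGs: 10 per object, 30 in total
```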
4.2 Assessments About the Present or the Future?

Whether the impact should be assessed for the present (AS-IS) or the future (TO-BE) was a recurring discussion with most companies. This situation was not due to a lack of clarity in the survey design but reflects a deeper challenge and discussion for strategic planning in general. The focus on climate change, climate neutrality, and similar grand societal challenges has brought about a crucial change in modern strategic planning: the establishment and articulation of long-term plans (i.e., “net zero by 2050” or similar). This shift away from focusing only on short-term goals (quarterly and annual KPIs) has serious implications for adequate models of both public and private sector strategic planning. While in the initial stage most organizations are likely to try to match current activities and processes with the tasks needed to achieve these long-term ambitions (we can call this an incremental SDG strategy), over time it is likely that most organizations will realize that achieving such large-scale socio-economic or even firm-level ambitions requires much more transformative and scenario-based approaches to aligning core organizational processes with the larger ambitions (we can call this a transformative SDG strategy).
4.3 Why Are the Company Strategy and Initiative Impact Assessments (So) Different?

Another recurring theme across most impact assessments was the interplay between the corporate strategy impact assessment and the initiatives’ impact assessments. Initiatives support the general corporate strategy, but it is not unusual for specific corporate initiatives not to have an SDG prioritization that matches the corporate strategy. Similarly, for initiatives several additional SDGs can be relevant, and some from the corporate strategy may not be relevant at all. Given the long-term view necessary for aligning corporate strategies and actions with SDGs and other grand societal challenges and movements (e.g., climate neutrality), one can expect overall strategic planning and thinking to become more mission-oriented or purpose-driven, based on agreeing on the “big” ambitious goals while also allowing for much more uncertainty and agility in the actual daily actions toward these goals. This by necessity entails providing much more autonomy and freedom within organizations for defining and managing individual initiatives and projects that are considered useful or necessary on the path toward the overall mission or purpose.
4.4 The Kind and Direction of the SDG Impact

It cannot be assumed that corporate strategies and initiatives contribute to the SDGs in a linear or unidirectional way. Furthermore, the effect can also be negative. In retrospect, the research design choice proposed by the SDG Impact Assessment Tool by the Gothenburg Center to categorize the kind of SDG impact into classes (e.g., direct positive, indirect positive, no impact, indirect negative, direct negative) is very relevant, since most of the companies in this study were struggling with whether to consider only positive or also negative impact. For example, if a solution or an initiative is applied in a city to assist in finding free parking spaces as smoothly as possible, this can increase the number of people interested in driving a car into the city center instead of using public transport or nonmotorized traffic. Continuing with mobility, it can also be argued that automated vehicles can have a negative effect on some SDGs, such as strengthening efforts to protect and safeguard the world’s cultural and natural heritage, since fully automated transport requires the reconstruction of urban environments. However, in most cases this comes down to the implementation process, as different solutions and initiatives can have both negative and positive effects. For example, if fully automated urban transport is applied, then, depending on the design and implementation, meeting some SDG targets (e.g., by 2030, provide universal access to safe, inclusive and accessible, green and public spaces, in particular for women and children, older persons, and persons with disabilities) can range from positive effects (with a smaller number of cars and parking lots in cities, access to green and public spaces is enhanced) to negative effects (reconstructing the cities could also limit this access).
4.5 The Ethics of Computer-Aided Minimization of Miscommunications

The most prominent discussions in the context of ethics and algorithms relate to algorithmic bias, the manipulation of public opinion, and making fair automatic decisions about individuals. The methods presented in this chapter raise additional and unique ethical challenges, previously not encountered with algorithmic decision-making and recommendation systems. If the goal is to minimize miscommunication and to align the preferences and opinions of participants, trusting the algorithm not to manipulate or bias opinions in one direction or the other is fundamental for acceptance of the tool. It could likewise be an interesting avenue of future research to better understand how respondents feel about an algorithm highlighting an inconsistency in their judgment and asking them to go back and alter a choice. Even if participants understand how the method works, they tend to feel uncomfortable if an algorithm identifies an inconsistency in their judgments, preferences, and opinions.
4.6 Role Conflicts of Respondents

While, conceptually, the computer-aided models may help to deliver more bottom-up priority setting and alignment of organizational goals, one should always keep in mind that the supporting interview method still carries its traditional limitations, i.e., bias and role conflicts of respondents (are they responding as experts, as individuals, or as representatives of the team/organization?) and their interest in “gaming” the methods and bringing the power dynamics back in (i.e., responding based on the power-based organizational agenda). This makes it necessary to combine the methodological approaches of both data science and the social sciences and to compile joint protocols that mitigate each other’s methodological weaknesses.
5 Conclusion

This chapter presented a technology-based, structured, and moderated tool for corporate sense-making and prioritization for SDGs. We presented and summarized the experiences and lessons learned from eight computer-aided corporate SDG sense-making and prioritization exercises carried out in Estonia and Finland. The experiences showed that the proposed process supports better SDG-related internal communication, sense-making, ideation, and finding new business opportunities and more efficient solutions for the goals seen as a priority by the company. In fact, the use of the computer-aided models allows for a parallel and less power-based prioritization process outside the traditional corporate strategic and decision-making routines. Hence, it allows deep analysis and discovery of different sets of SDG-related priorities and also analysis of the alignment between formal strategic goals and subsidiary project goals in the context of SDGs.
References

Cho, Frankie. 2019. Analytic Hierarchy Process for Survey Data in R. Vignettes Ahpsurvey Package (ver 0.4.0) 26. Cox, Michael A.A., and Trevor F. Cox. 2008. Multidimensional Scaling. In Handbook of Data Visualization, 315–347. Berlin: Springer. EC. 2019a. A European Green Deal: Striving to Be the First Climate-Neutral Continent. https://ec.europa.eu/info/strategy/priorities-2019-2024/european-green-deal_en. ———. 2019b. EU Taxonomy for Sustainable Activities: What the EU Is Doing to Create an EU-Wide Classification System for Sustainable Activities. https://ec.europa.eu/info/business-economy-euro/banking-and-finance/sustainable-finance/eu-taxonomy-sustainable-activities_en. Forman, Ernest H., and Saul I. Gass. 2001. The Analytic Hierarchy Process—An Exposition. Operations Research 49 (4): 469–486.
Goepel, Klaus D. 2018. AHP Excel Template with Multiple Inputs. Singapore: Business Performance Management Singapore (BPMSG). Honkela, Timo. 2017. Rauhankone: teko¨alytutkijan testamentti. Helsinki: Gaudeamus. Kemeny, John G., and L.J. Snell. 1962. Preference Ranking: An Axiomatic Approach. In Mathematical Models in the Social Sciences, 9–23. Cambridge: MIT Press. Kruskal, Joseph B. 1978. Multidimensional Scaling. Vol. 11. Beverly Hills: Sage. Luo, Jiebo, Stephen P. Etz, Robert T. Gray, and Amit Singhal. 2002. Normalized Kemeny and Snell Distance: A Novel Metric for Quantitative Evaluation of Rank-Order Similarity of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (8): 1147–1151. Mawhinney, T., and K. Betts. 2020. Understanding Generation Z in the Workplace. Deloitte Insights 24: 1–24. Mazzucato, Mariana. 2016. From Market Fixing to Market-Creating: A New Framework for Innovation Policy. Industry and Innovation 23 (2): 140–156. Miller, George A. 1956. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychological Review 63 (2): 81. Oxford Initiative on AIxSDGs. 2020. https://www.aiforsdgs.org/. Saaty, Thomas L. 1988. What Is the Analytic Hierarchy Process? In Mathematical Models for Decision Support, 109–121. Berlin: Springer. ———. 1994. How to Make a Decision: The Analytic Hierarchy Process. Interfaces 24 (6): 19–43. ———. 2000. Fundamentals of Decision Making and Priority Theory with the Analytic Hierarchy Process. Vol. 6. Pittsburgh: RWS Publications. ———. 2001. Fundamentals of the Analytic Hierarchy Process. In The Analytic Hierarchy Process in Natural Resource and Environmental Decision Making, 15–35. Dordrecht: Springer. ———. 2008. Decision Making with the Analytic Hierarchy Process. International Journal of Services Sciences 1 (1): 83–98. Saunders, Benjamin, Julius Sim, Tom Kingstone, Shula Baker, Jackie Waterfield, Bernadette Bartlam, Heather Burroughs, and Clare Jinks. 2018. Saturation in Qualitative Research: Exploring Its Conceptualization and Operationalization. Quality & Quantity 52 (4): 1893–1907. Sinickas, Angela. 2007. Finding a Cure for Survey Fatigue. Strategic Communication Management 11 (2): 11. Torgerson, Warren S. 1952. Multidimensional Scaling: I. Theory and Method. Psychometrika 17 (4): 401–419. UN. 2015. Transforming Our World: The 2030 Agenda for Sustainable Development (Resolution Adopted by the General Assembly on 25 September 2015). https://sdgs.un.org/2030agenda. ———. 2019. UN Secretary-General’s Strategy for Financing the 2030 Agenda. https://www. un.org/sustainabledevelopment/sg-finance-strategy/. UNEP. 2019. United Nations Environment Programme – Finance Initiative. https://sdgs. un.org/2030agenda. Weber, Max. 1978. Economy and Society: An Outline of Interpretive Sociology. Vol. 1. Berkeley: University of California Press Wind, Yoram, and Thomas L. Saaty. 1980. Marketing Applications of the Analytic Hierarchy Process. Management Science 26 (7): 641–658.
Role of Artificial Intelligence in Advancing Sustainable Development Goals in the Agriculture Sector

Soenke Ziesche, Swati Agarwal, Uday Nagaraju, Edson Prestes, and Naman Singha
Abstract Artificial intelligence (AI) refers to algorithms designed to make decisions, often using big real-time data, to perform activities that at times go beyond human capabilities. Given the increasing gap between agricultural demand and supply worldwide, further widened by the COVID-19 pandemic (the pandemic has derailed progress towards the Sustainable Development Goals (SDGs) further off track; the annual SDG financing gap widened from USD 2.5 trillion to around USD 4.2 trillion), innovative and cost-effective approaches to agriculture are necessary. AI has begun producing innovative technological solutions and data-driven insights for farming, which gives confidence that it can be used to mitigate challenges around sustainable agricultural practices and facilitate getting the SDGs back on track. In agriculture, AI has demonstrated immense potential in achieving enhanced productivity
This chapter was submitted and contributed by AI Policy Labs, UK.
S. Ziesche  AI Policy Labs, London, UK; Fellow, AI Policy Labs, Delhi, India
S. Agarwal  AI Policy Labs, London, UK; Former Head of Research and Partnerships, AI Policy Labs, New Delhi, India
U. Nagaraju (*)  AI Policy Labs, London, UK; Founder, AI Policy Labs, London, UK
E. Prestes  AI Policy Labs, London, UK; Advisor, AI Policy Labs & Informatics Institute, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil
N. Singha  AI Policy Labs, London, UK; Researcher, AI Policy Labs, Greater Noida, India
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_21
and improving existing supply chains, delivery systems and market value/better pricing in both developed and developing countries for better utilisation of the produce. Several innovative uses of AI in agriculture have emerged worldwide, promising to advance farm productivity while improving sustainability and livelihoods at the same time. However, many of these experiments/pilots exist in silos. Due to this fragmented approach, how successful the use of AI has been in agriculture, and what shortcomings or challenges were faced in some of these technological implementations, has not been comprehensively evaluated. This chapter, therefore, assesses the pressing reasons to use innovative and cost-effective digital interventions like AI for the SDGs in the agriculture sector. The chapter then identifies the challenges in designing a successful AI programme and explores the potential of multi-stakeholder partnerships in this context.

Keywords Innovation · Food productivity · Sustainability · SDG 2 · Artificial intelligence · Agriculture
1 The Ever-Growing Hunger

Over the past few decades, the agricultural sector has advanced considerably. Owing to the phenomenal success of the Green Revolution in increasing agricultural productivity, given the combined use of high-yield variety seeds, higher irrigation and advanced machinery (agricultural production tripled between 1960 and 2015 (FAO 2017)), the world was able to avoid the grim Malthusian prediction (in the 1950s it was predicted that a severe food shortage might occur in South Asia, whereby population growth would exceed the rate of increase in food production, leading to catastrophic consequences). Though the success of the Green Revolution helped many nations avoid severe famines and widespread hunger, the transition of humanity to the twenty-first century brought to the fore the fallouts of the revolution, and it became clear that ‘business as usual’ was no longer a feasible approach. The extensive and high-handed use of inputs (chemical fertilisers, water, electricity, machines, etc.) not only depleted natural resources like forests, land and soil but also significantly destroyed the biodiversity of the regions concerned, accelerating the prospects of natural disasters. This led to increased vulnerability of the agricultural sector, with unreliable productivity and output. On the other hand, the agricultural sector became a major contributor to climate change and global warming, as it emitted tonnes of greenhouse gases. The overreliance on cereals (mainly rice) led to micronutrient deficiencies amongst a large chunk of the population, and thus consumer demand today is placing reliance on diversification, with increasing demands
Role of Artificial Intelligence in Advancing Sustainable Development Goals…
381
for products like dairy, fruits, vegetables and plant and animal-based protein. As more and more people worldwide make the transition towards an urban life (by 2050, two-third of the population will live in cities, (FAO 2017)) it will only significantly accelerate the shift in this consumption pattern. The demand for organically and chemical-free produced food with adequate quality and standard checks will continue to gain pace. In addition, as the world population continues to grow at a rapid pace (projected to peak at 11 billion by 2100 (FAO 2017)) and hunger and malnutrition continues to remain a prominent challenge, particularly in regions like sub-Saharan Africa and South Asia, the supply-side will have to address the twin challenge of sufficient as well as sustainable production. In this regard, the adoption of a ‘holistic’ approach, which integrates sustainable, climate-resilient, environment-friendly and innovative agricultural practices, has become imperative and unavoidable. The recent breakout of the COVID-19 pandemic that engulfed the entire world in 2020 also lay bare one of the major issues associated with the current globalised food system, that is, the risks associated with long food chains and the potential of the spread of transboundary pests, virus and diseases. This underlines that, in addition to climate-smart and conservation-based agriculture practices, future food systems will also have to place considerable emphasis on traditional and local best practices.
2 Looking Towards AI to Solve Agricultural Problems To address the issues of food insecurity, agricultural productivity and higher yields for the coming future, the role of technological interventions have become indispensable. The multitude of interconnected challenges such as scarce and stressed resources (land, water, soil, etc.), fluctuating outputs and increasing demands, changing weather and rainfall patterns and environmental pollution on top of a burgeoning population have all necessitated innovative measures to facilitate adaptability in accordance with the changing ecological and agricultural landscape. ‘Smart agriculture’, that is, the integration of disruptive technologies like the Internet of Things (IoT), AI, robots, drones, etc., in agricultural production and management, promises to close the supply-demand gap and optimise the natural and human resources for maximum and quality output. As noted by Khandelwal (2019), ‘farming solutions which are AI-powered enable a farmer to do more with less, enhancing the quality, and simultaneously also ensuring a quick go-to-market strategy for crops’. To explain the process, the IoT connects devices (like actuators, sensors, drones, geographic information systems (GIS), etc.) via Internet communication services to a common platform to collect and transmit data about key field parameters like temperature, humidity, soil, etc. With the help of AI technologies, data retrieved from the fields are processed and worked upon to generate the relevant insights and
382
S. Ziesche et al.
guide the future decision-making process regarding crop needs, field efficiency, productivity and improving financial metrics of managing farms. A simple representation of the process is shown in the figure below.
Source: Veronica Rubio and Francisco Mas (2020)
Agriculture is one sector, which is riddled with natural uncertainties and risks and requires constant monitoring, control and manual labour to derive the best results. This is exactly the gap that digital farming promises to fill, as armed with the strength of sophisticated technology (sensors, IoT applications, big data, decision- making and data processing prowess of AI systems). Farmers can monitor their farms in real time, quickly and more efficiently, to provide the best conditions and inputs and, in turn, simultaneously maximise the conditions for a good harvest. Thus, where traditional farming decisions were taken based on subjective judgement and knowledge, the modern farming practices will allow the farmers to make objective decisions based on quantifiable data. In short, integrating digital technologies into the ‘farms of future’ will free up a major chunk of time that is currently spent in manual, laborious and repetitive work towards making strategic choices and decisions about how to optimise the resources at hand and produce better results. Rubio and Mas (2019) state that farms that are technology-driven are able to generate co-benefits in the form of increased production and reduction of costs with minimal effort. The major opportunity areas, where the assimilation of emerging technologies will impact the agriculture sector, includes promotion of intelligent crop planning through extension of knowledge and advisories regarding credit, inputs, suitable crops, etc.; smart farming through farm mechanisation, predictive analysis of suitable resources, nutrients needed and threats like pests, weeds and diseases that can potentially threaten yields and harvest; and farmgate to fork business solutions by enhancing market intelligence and addressing the quality, traceability and logistics issues (WEF 2021). As agriculture has become nonremunerative and more and more communities shift towards an urban life leaving behind the rural space, there is an expected
383
Role of Artificial Intelligence in Advancing Sustainable Development Goals…
shortage of labour. The adoption of robotic technology in agriculture is thus indispensable, as robots will supplant humans to do manual work for longer hours and with more precision. While the agricultural sector is still at a nascent stage, when it comes to adopting AI solutions, data-driven agriculture combined with machine learning solutions, autonomous machines and farm robots is believed to be the future of precision and sustainable agriculture.
3 Overview of Data Sources That Are Being Collected Across the Agricultural Chain The AI revolution in agriculture has initiated a shift in the whole agricultural food chain from the fields to harvesting and even transportation. AI and other emerging technologies (including blockchain) are currently being used in all four main clusters of agriculture: preproduction, production, processing and distribution (Ben Ayed and Hanana 2021). The below table lists out the varied data that is currently being collected and processed by IoT along the agricultural chain and their areas of practical application. This table also highlights the potential SDG targets that might be impacted as a result of AI interventions in the various agricultural stages, thereby strengthening the case for AI to be designed more holistically. IoT and AI S. applications in no agriculture Preproduction stage (a) Plant structure and properties
SDG target
Data collected
Application area
Plant Phenotyping measures complex traits of plants like growth, tolerance, resistance, physiology, etc. in a particular temporal and spatial environment Soil sampling and mapping through remote-sensing satellites, drones, etc.
Determining suitable plants types for a particular environment
Target 2.3, 12.2, 12.4
Determine soil properties, texture, water-holding and absorption potential to minimise erosion, acidification and pollution Estimate water demand of crops to select appropriate irrigation method For greenhouse and vertical farming, detailed and accurate monitoring of these parameters is needed
Target 2.4, 1.4
(b)
Soil monitoring
(c)
Humidity monitoring Air and soil moisture measurement
(d)
Greenhouse gases and temperature monitoring
Measuring parameters like shed structure, ventilation system, humidity, light, pressure, temperature and CO2 in environment
Target 2.4, 6.4, 1.4 Target 2.4, 12.8
(continued)
384
S. no (e)
S. Ziesche et al. IoT and AI applications in agriculture Fertilisation application
Data collected Measuring soil and crop- specific nutrient needs
Production stage (f) Disease monitoring
(g)
(h)
Monitoring crop-foliar status by infrared light sensors to check against crop disease and pests spread Crop and plant Yield monitoring to anticipate growth monitoring the quantity and quality through multispectral (moisture content, grain flow, sensor, camera and colour, size, etc.) of harvest softwares Weather prediction Sunlight, rainfall, humidity and so on
Target 2.4, 1.5
Forecasting weather patterns important for crop growth Tracking machinery helps in eliminating unnecessary routes, alerting when farm machinery maintenance is due Health updates about livestock can prevent the spread of diseases
Target 2.4, 1.5
Collection of precise and unambiguous information about particular crops in terms of their shape, size and colour by sophisticated sensors Picture evidence to detect crop failures
Automated harvesting by robots to reduce labour pressure and costs
Target 2.4, 12.3
Measuring and monitoring food temperature, quality by wireless sensors
Ensure longer-shelf life during transportation and reduce food waste
Farm machinery tracking
Accelerometer sensors can detect variations in the movement of machinery like tractors, drones, etc.
(j)
Location tracking of animals
Monitoring location, health, regular activities and feeding schedule of cattle
Distribution (l) Food storage and supply
(m) Consumer analytics
(n)
Inventory management
SDG target Target 2.4, 14.1
Effective health assessment and management to control disease spread Forecasting harvest is essential for future decision-making by farmers
(i)
Processing (k) Harvest monitoring
Application area Achieve precision fertilisation and reduce excessive application
Target 2.3, 2.a, 12.2
Target 2.a, 12.A
Target 2.5, 1.4, 12. A
Improving crop insurance Target system 12.4
Predicting consumer demand, Completes the feedback preferences, behaviour loop helping farmers pattern, etc. grow according to the demand Tracking the supplies and Helps in improving delivery logistics of food supply
Sources: Ayaz et al. (2019) and Ben Ayed and Hanana (2021)
Target 12.3, 2.c, 12.A Target 12.A, 1.A, 1.B Target 2.c
Role of Artificial Intelligence in Advancing Sustainable Development Goals…
385
According to the study compiled by Farooq et al. (2020), the percentage of the research articles published on IoT solutions in agriculture is mostly in the following application areas, respectively: irrigation monitoring and control, precision farming, soil monitoring, temperature monitoring, animal monitoring and tracking and so on. In order to evaluate the extent of possible impact AI development within the agriculture sector can have on promoting SDGs, we comprehensively evaluated the SDG targets and indicators linked to agriculture across its supply chain. While some of these have been covered in the existing practices discussed above, many other interlinkages between agriculture and other SDG targets have not been initiated yet. This gives us a glimpse of the extent of future possibilities to be explored with the intervention of AI in the agriculture sector. SDG #2: End hunger, achieve food security and improved nutrition and promote sustainable agriculture SDG targets Description Issues Addressed Indicators 2 indicators Hunger; food accessibility Target 2.1 End Hunger by 2030 and – Undernourishment ensure accessibility of safe, and utilisation; undernourishment nutritious and sufficient – Food insecurity food by all people, experience scale particularly the poor, infants and vulnerable Stunting (low height for age) 3 indicators Target 2.2 End all forms of malnutrition by 2030, and and wasting (low weight for – Stunting – Malnutrition height) in children below achieve international 5 years; anaemia in women; – Anaemia in women targets on stunting and wasting of children below malnutrition of future aged 15 to 49 generations 5 years by 2025. Also, address nutritional needs of adolescent girls, pregnant and lactating women 2 indicators Agricultural and labour Target 2.3 Double agricultural productivity; increasing the – Volume of productivity and incomes of small-scale producers by income of farmers production per 2030, particularly women, labour unit indigenous peoples, family – Average income of farmers, pastoralists and small-scale fishers through equal and producers secure access to land and productive resources and inputs, financial markets, etc. Agricultural production and 1 indicator Target 2.4 Ensure sustainable food productivity; adoption of production systems, and – Proportion of sustainable agricultural implement resilient agricultural area practices; ecosystem and agricultural practices that under productive environmental sustainability; help maintain ecosystem, and sustainable fighting climate change strengthen adaptation to agriculture climate change and other disasters and improve land and soil quality (continued)
386
S. Ziesche et al.
SDG #2: End hunger, achieve food security and improved nutrition and promote sustainable agriculture SDG targets Description Issues Addressed Indicators 2 indicators Target 2.5 Maintain genetic diversity Protecting local breeds and of seeds, cultivated plants promoting genetic diversity – No. of plant and and domesticated animals of plants and animals; animal genetic and their species by 2020. intellectual property rights in resources secured in agriculture Maintaining diversified conservation seed and plant banks at facilities national, regional, – Local breeds international levels and classified at risk of promoting access to fair extinction and equitable sharing of benefits derived from utilisation of genetic resources and traditional knowledge Research and development 3 indicators Target 2.a Increasing investment in rural infrastructure, in agriculture; capacity- – Agriculture agricultural research and building of farmers and other orientation index for extension, technological stakeholders via knowledge govt. expenditures development and services – Total official flows maintaining plant and (official livestock gene banks to development promote agriculture’s assistance + other productive capacity in flows) to agriculture developing countries and least developed countries 1 indicator Target 2.b Address and prevent trade Eliminating trade barriers restrictions and distortions – Agricultural export in world markets, subsidies elimination of all agricultural export subsidies and measures in accordance with mandate of Doha Development Agenda 2 indicators Target 2.c Adopt measures to ensure Limiting volatility and proper functioning of food anomalies in food pricing – Indicator of food commodity markets and price anomalies their derivatives, facilitate timely access to market information to limited food price volatility SDG #5: Achieve gender equality and empower all women and girls 2 indicators Recognising ownership Target 5.a Undertaking reforms to give women equal rights to rights of women on – Proportion of total economic resources, access agricultural lands so that agricultural they can access inputs, to ownership and control population by sex resources and other skills over land and other – Proportion of property, financial services, needed to improve outputs countries where and yields etc. in accordance with legal framework national laws guarantees women equal rights to land ownership (continued)
Role of Artificial Intelligence in Advancing Sustainable Development Goals…
387
SDG #2: End hunger, achieve food security and improved nutrition and promote sustainable agriculture SDG targets Description Issues Addressed Indicators SDG #6: Ensuring availability and sustainable management of water and sanitation for all Target 6.4 Substantially increase About 70% of freshwater 2 indicators water-use efficiency across around the world is used in – Change in all sectors and ensure agriculture (Khokar 2017). water-use efficiency sustainable withdrawals Promoting sustainable use of over time and supply of freshwater to water in this sector is – Level of water address water scarcity imperative to meet growing stress: freshwater demands withdrawal/ available freshwater resources SDG #7: Ensure access to affordable, reliable, sustainable and modern energy for all Target 7.2 By 2030, increase Agriculture is the second 1 indicator substantially the share of largest supplier of biofuels – Renewable energy renewable energy in the after forests. In 2018, share in total final global energy mix bioenergy held third place as energy consumption a source of renewable electricity generation (WBA 2020) SDG #8: Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all Target 8.3 Promote development- Growth in agriculture and 1 indicator oriented policies that allied sectors is directly – Proportion of support productive linked to job creation and informal activities, decent job better incomes for rural employment in total creation, entrepreneurship households. Currently employment, by and encourage agriculture is the second sector and sex formalisation of micro, largest employer after the small and medium services with agriculture enterprises currently accounting for 28% of global employment (World Bank n.d.) SDG #12: Ensure sustainable consumption and production patterns Target 12.3 By 2030, halve per capita Around 17% of total global 2 indicators global food waste at retail production of food may have – Food Loss Index and consumer levels and been wasted in 2019, acc. to – Food Waste Index reduce food losses along the UNEP report (2021) production and supply chains, including postharvest losses SDG #13: Take urgent action to combat climate change and its impacts Target 13.2. Integrate climate change Agricultural activities 2 indicators measures into national contribute to approx. 30% of – Number of policies and planning global greenhouse gas countries with emissions (IAEA n.d.) nationally determined contributions as reported to UNFCCC – Total greenhouse gas emissions/year
388
S. Ziesche et al.
SDG #2: End hunger, achieve food security and improved nutrition and promote sustainable agriculture SDG targets Description Issues Addressed Indicators SDG #14: Conserve and sustainably use the oceans, seas and marine resources for sustainable development, sustainably manage forests, combat desertification and halt and reverse land degradation and biodiversity loss Target 14.1 Prevent and reduce marine Runoff of chemical 2 indicators pollution of all kinds fertilisers from farms into – Index of coastal particularly from water bodies eutrophication land-based activities, – Plastic debris including marine debris density and nutrient pollution SDG #15: Protect, restore and promote sustainable use of terrestrial ecosystems Target 15.3 Combat desertification, Efficient and sustainable 1 indicator restore degraded land and farming key to prevent – Proportion of land soil, including land desertification of land that is degraded affected by desertification, over total land area droughts and floods
The assessment highlights that while SDG 2 has a direct bearing on the agriculture sector by promoting zero hunger, other SDG goals would be equally impacted by the AI interventions. Placing a higher responsibility on careful design and development of AI programmes, keeping in view these interconnections across sectors and actors, would help accelerate multi-faceted targets.
4 Leveraging AI for Advanced Agricultural Outputs 4.1 Selection of High Resistance Variety of Crops The cultivation stage of the crops is an extremely labour-intensive process. Several tasks are currently being performed manually by the farmers, such as weeding, de- leafing, pesticide spraying, fertilising and so on. Increasingly, with the given shortage of labour on farms, handling these manual tasks has become extremely inefficient at times leading to poor outcomes. New technological developments in the field of AI and robotics have been contributing towards addressing some of these challenges. For the cultivation stage, the deployment of hyperspectral vision systems with statistical machine learning has paved the way for a new field called digital phenotyping.2 Within this technique, sensors can identify plants based on the electromagnetic signatures they create. This technology helps accelerate breeding programmes as they are able to identify plants with disease and/or climate-resilient genes.
https://transmitter.ieee.org/feeding-the-world-with-intelligent-agriculture-solutions/
2
Role of Artificial Intelligence in Advancing Sustainable Development Goals…
389
4.2 Working and Learning with Advanced Data AI applications have made the design of autonomous vehicles possible that can drive around the farmland while performing pruning and simultaneously collecting relevant data for digital evaluations of plant growth and its factors. For example, physical and climatic data of the cropland and the produce is used in several instances to indicate factors contributing to effective plant growth. For instance, AI has been able to evaluate and assess which pesticide works best; irrigation cycles provide higher yield, etc. In Colombia, a platform called as eKakashi has been used to evaluate several farm factors to advise farmers on indicators to advance crop yield. The model sends suggestions to farmers on estimated fertiliser usage, the need for an increase in irrigation and if there is a requirement for more labour. Going forward, not only can this data be visualised on a computer screen; the autonomous system can direct components to take relevant action. Smart systems are now increasingly prevalent in the farmlands, and through them, we can expect to achieve higher food security and sustainability and ultimately zero hunger by 2030.
4.3 Precision Agriculture Several AI-based technologies have been piloted in different regions of the world, which use a complex camera system to target and spray weeds. Because of these, AI systems/robots are expected to use up to 90% less farm inputs, including herbicides, water for irrigation, fertilisers, etc., making it cheaper than traditional treatments.3 It has been observed that in the past few years, there has been an enhanced interest in unmanned aerial vehicles (UAVs) applications towards surveillance of farms, recognition and detection of pests, diseases and weeds and human body detection. The deployment of exceptional imaging technology involves delivery, photography and detection to assist the farmer to detect issues and identify solutions efficiently.
4.4 Augmenting Labour Force and Skills AI enables the farmer to gather vast amounts of data from government and public websites and examine them. This will help equip the farmers to tackle various issues and foster an intelligent method of farming, which will assist towards higher crop production.
https://interestingengineering.com/9-robots-that-are-invading-the-agriculture-industry
3
390
S. Ziesche et al.
4.5 Maximising Returns The emerging technologies help the smallholder farmer to select the optimum crops and hybrid seed preferences. AI identifies the various weather conditions and the varying soil types for best seed selection. This enables the farmer to achieve the annual outcomes, end users’ needs and market trends towards an efficient maximisation of the crop return.
4.6 Chatbots for Farmers AI-based chatbots in association with machine learning techniques help the stakeholder to receive solutions to their unanswered questions. For instance, chatbots help the farmers receive advice and recommendations from experts (Talviya et al. 2020).
4.7 Intelligent Crop Planning The adoption of emerging technological models to address the various climate change issues impacts the agricultural sector. AI crop planning models create a design towards enhanced crop productivity, consumer needs, market intelligence, and infrastructure for a broad, all-inclusive, market-oriented and upgradeable plan.
4.8 Postharvest Value Chain Operations This aspect helps tackle the various challenges in post harvest value chain operations and puts forward technological solutions. This will help improve farmers’ incomes and boost returns for supply chain actors in the agriculture ecosystem. The technical solutions are spread across six key areas: quality assessment, trackability, strategic organisation and warehousing, financial services, buyer-supplier compatibility and market-risk management.
5 Can AI Impede Achievement of SDGs in Agriculture? Studies have observed that AI has the ability to act as an enabler on 134 targets across all SDGs, whereas 59 targets lag behind and are impacted negatively by the emergence of AI (Vinuesa et al. 2020). Evaluating some of the impacts of AI on agriculture leads us to deeply assess scenarios within which AI would impede SDGs in the agriculture sector.
Role of Artificial Intelligence in Advancing Sustainable Development Goals…
391
5.1 Need for Massive Computational Resources AI technology systems as well as some of the other emerging technologies like blockchain (including IoTs), their research, and product design require huge computational resources. These computational resources are available through massive computing centres which require extremely high energy to run. These in turn generate large amounts of carbon footprint. For instance, blockchains are expected to utilise as much electricity as some countries’ overall electricity needs and hence may compromise SDG 13 (Climate Action) and SDG 7 (Affordable and Clean Energy) unless backed by clean energy sources.
5.2 Potential to Accelerate Inequalities While AI technology systems function as a catalyst to attain the 2030 agenda, they also have the ability to generate inequalities. Agriculture is the largest employer globally, and employment opportunities are not keeping in sync with a rapidly increasing population. Automation of routine tasks for better efficiency is bound to displace human labour in the agriculture sector, thereby exacerbating employment challenges, especially in developing countries (Fraser and Charlebois 2016). This displacement may compromise our global aim to achieve SDG 1 (No Poverty), SDG 5 (Gender Equality) and SDG 10 (Reduced Inequalities).
5.3 Uneven Distribution of AI Systems With the unequal allocation of AI systems, the poor accessibility of AI systems in developing countries is a major challenge. For instance, advanced AI agricultural tools can be inaccessible to smallholder workers and, therefore, create a greater gap, particularly to the producers in the developed countries. This might impact our global goal to achieve SDG 10 (reducing inequalities within and between countries).
6 Challenges with AI Adoption in Agriculture The ability of machines to analyse, process and solve any perceivable set of information and data within a physical and natural setting has far superseded human capabilities. It is because of breakthrough advances in many technologies, including satellite imagery, cloud computing, machine learning, deep learning, artificial neural networks, etc. This has now made it possible to ‘algorithmise’ agriculture, with the help of overwhelming data being collected about the different parameters and
392
S. Ziesche et al.
conditions of agriculture. However, as with any major transformation that is considered to mark a paradigm shift, the penetration of AI technologies in agriculture is also confronting its own set of deep-seated suspicion and challenges that threaten to mar its future progress. Although it has been observed that AI combined with IoT will radically transform crop production in the future, certain challenges come with using AI in the sector. Some of the challenges leading to the adverse impacts are listed below:
6.1 Structured and Coherent Data The data systems are currently highly fragmented within the sector. The data in the agriculture sector is scattered in different parts, including supply chains, agro, genetics, livestock and marine, and with diverse purposes such as data representation, data exchange and layered applications and, therefore, requires consolidation and organisation. Besides, data collection could be restricted as the crop-specific data is available two times a year, mostly during sowing season, and the all-year availability of data is not a possibility. This could limit the development of a mature database and robust AI technology. Therefore, there is a need to bridge the different aspects within the sector and connect all the areas to establish a uniform and conventional adoption of measures for the sector. This will enable the rapid adoption of standards and a stronger agenda-setting in developments across the sector (Archer 2017).
6.2 Lack of Knowledge The sophisticated nature of this digital revolution makes it really difficult for an average farmer to understand and implement these technological solutions to improve farming methods, especially in developing countries. To overcome this knowledge gap and suspicion related to the emerging technology would present a serious bottleneck in adopting AI solutions in agriculture. To boost crop productivity, precision farming requires the implementation of cutting-edge technology. For the farmer, establishing an IoT architecture and sensor network for the field can be challenging and burdensome. There is no room for tech errors and defective management in the agriculture sector. This can lead to disastrous consequences. Therefore, it is of utmost importance to equip farmers with the concept of smart farming – using tools and equipment and its implementation.
Role of Artificial Intelligence in Advancing Sustainable Development Goals…
393
6.3 Limited Scope of Scalability As the properties and characteristics of agricultural fields significantly vary across the geographical regions and landscapes, the IoT and AI pilots are confronted with the issue of producing scalable solutions. Agricultural quantification is an extremely difficult task to accomplish even within a definable and coherent region, as no two agricultural fields are completely alike. The conditions are always prone to change. The difficulty in predicting the weather, variability in soil quality and the constant possibility of disease and pest threats make the experimentation and implementation of innovative technologies much more challenging and laborious (Linaza et al. 2021; Byrum 2019). Some researchers argue that the data currently being retrieved across the fields is not sufficient to be considered ‘big data’. There is considerable debate regarding whether the presently available agriculture data is fulfilling the five Vs of big data dimensions identified as volume, velocity, variety, veracity and valorisation (Rubio and Mas 2020). The small size and scale of agricultural farms worldwide is another major challenge to AI adoption. According to FAO, five out of every six farms in the world is less than two hectares, and they produce about 35 per cent of the world’s food (FAO 2021). The fragmentation of spatial data is a major limitation as it hinders the collection of ‘big data’, which is a critical requirement for developing and scaling AI solutions.
6.4 Poor Awareness of the Farm Production Functions Since the production function is not the same for all the crops and its production function changes according to varying farm zones and over the crop growth cycle, there will always remain the possibility of incorrect inputs in the applications (for instance, spraying excessive nitrogen fertiliser), which could result in crop destruction. This requires the training of AI systems to adequately optimise output levels by making the ideal utilisation of the available and limited data (Fakhruddin 2017).
6.5 Technological Infrastructure and Investment Connectivity, acceptability, the safety of IoT devices, loss and manipulation of data, database issues and denial of service attacks are real concerns that stand in the way of AI penetration in rural areas. Moreover, the high cost of hardware devices, software and their operations, updates and maintenance will add to the already existing concerns of insufficient rural infrastructure. Uncertainty of costs regarding fuel and water allocations lowers the margins for farmer investments. Thus IoT-based solutions are challenging for small-scale farmers (Villa-Henriksen et al. 2020).
394
S. Ziesche et al.
7 Conclusion Although the implementation of AI in the agriculture sector is only in the initial stages, it holds tremendous future potential against the challenges that threaten the sustainability of food production and supply. Based on the above discussion, developing a sustainable AI program for agriculture would be predicated on the following interwoven factors: (a) To achieve the outlined agriculture-related SDG targets according to their indicators We identified the following SDG targets as linked to agriculture: 2.1, 2.2, 2.3, 2.4, 2.5, 2.a, 2.b, 2.c, 5.a, 6.4, 7.2, 8.3, 12.3, 13.2, 14.1 and 15.3. For example, Vinuesa et al. (2020) analysed to what extent AI could act as an enabler to achieve these targets. (b) To achieve the outlined advanced agricultural outputs We proposed the following parameters to be crucial for advanced agricultural outputs: selection of high resistance varieties of crops, working and learning with advanced data, precision agriculture, labour force and skills, maximised returns, chatbots for farmers, intelligent crop planning as well as postharvest value chain operations. (c) To prevent the outlined potential impediments of other SDG targets We noted that supporting the above parameters involves risks to impede other SDG targets, which are the need for massive computational resources, the potential to accelerate inequalities as well as an uneven distribution of AI systems. (d) To tackle the outlined challenges in AI adoption in agriculture We also identified challenges, which have to be addressed to progress with AI in agriculture, categorised as follows: structured and coherent data, lack of knowledge, the limited scope of scalability, poor awareness of the farm production functions as well as technological infrastructure and investment. As a way forward towards these goals we suggest to focus on the following topics:
7.1 Strengthening of Skills and Capacities There need to be more investments into skill development in order to increase human capacity and adaptability to new methods. Therefore, effective AI design should be accompanied by a comprehensive training and capacity-building programme for all involved stakeholders. Greater ‘digital literacy’ amongst farmers can enable them to use necessary digital platforms and tools effectively. Government employees and other key actors must also be targeted through extensive education and training programmes to increase the effectiveness of programmes (Birner et al. 2021).
Role of Artificial Intelligence in Advancing Sustainable Development Goals…
395
7.2 Cooperation and Readiness Amongst Key Stakeholders There are several stakeholders in addition to the farmers when it comes to AI and agriculture, such as national governments and soft- and hardware companies (Birner et al. 2021). They all will need to be involved in creating a unified framework for issues such as data rights, privacy, consent management, benefits and rights of farmers to ensure equitable participation and protection of all stakeholders. There must be efforts to ensure greater digital and financial inclusivity for all. There is not just an increase in the concentrated market power of large agribusiness enterprises but small-scale farmers. This can be done by combining and coordinating private and public action that benefits people and the planet both through forming multi-stakeholder partnerships involving farmers, farm labourers, national and state governments, industries, research institutions, start-ups and other businesses. To advance readiness amongst farmers, especially those located in remote regions, it is crucial to establish policies that create increasingly conducive business environments (Birner et al. 2021).
7.3 Mitigation of Data and Infrastructure-Related Risks The complexity and interrelation of data need to be sufficiently established and addressed. There are major obstacles that will arise due to the fragmentation of technological development in agricultural processes. These include issues related to control and operation of IoT/AI machines, data storage, data sharing and management and interoperability, amongst other factors (Alreshidi 2019). AI solutions may not be generally applicable, and their customisation according to local factors and characteristics will play a huge role in determining their success. Moreover, the ongoing digital agriculture transformation has to progress further, which requires major transformations of agricultural systems, rural communities and natural resource management practices which can be advanced via mobile devices, precision agriculture and remote sensing technologies (FAO 2019). Providing communication/Internet infrastructure, especially in rural areas, would help ensure equitable access for underprivileged groups. Given the high costs of cognitive solutions for farming, there needs to be greater affordability in order to ensure higher penetration and rapid adoption amongst farmers.
7.4 Readiness of the Key Stakeholders In this chapter, we have motivated the importance of AI applications in agriculture and associated SDG targets. We then outlined opportunities and challenges. We proposed a way forward, which relies on strengthening skills and capacities, the readiness and cooperation amongst key stakeholders and the mitigation of data and infrastructure-related risks.
396
S. Ziesche et al.
References 2030 Vision Global Goals. AI and SDGs: The State of Play. https://assets.2030vision.com/files/ resources/resources/state-of-play-report.pdf\. Alreshidi, E. 2019. Smart Sustainable Agriculture Underpinned by Internet of Things and Artificial Intelligence. International Journal of Advanced Computer Science and Applications. https:// arxiv.org/pdf/1906.03106.pdf. Archer, P. 2017. Six Challenges for Agriculture. Big Data Europe. https://www.big-data-europe. eu/six-challenges-for-agriculture/. Atlam, H.F., M.A. Azad, A.G. Alzahrani, and G. Wills. 2020. A Review of Blockchain in Internet of Things and AI. Big Data and Cognitive Computing 4 (4): 28. Ayaz, M., M. Ammad-Uddin, Z. Sharif, A. Mansour, and E.H.M. Aggoune. 2019. Internet-of- Things (IoT)-Based Smart Agriculture: Toward Making the Fields Talk. IEEE Access 7: 129551–129583. https://www.researchgate.net/publication/334858202_Internet-of-Things_ IoT-Based_Smart_Agriculture_Toward_Making_the_Fields_Talk. Aydin, S., and M.N. Aydin. 2020. Semantic and syntactic interoperability for agricultural open- data platforms in the context of IoT using crop-specific trait ontologies. Applied Sciences 10 (13): 4460. Baumueller H. et al. 2017. Innovation for Sustainable Agricultural Growth in Ghana. https:// research4agrinnovation.org/wp-content/uploads/2017/11/GhanaDossier2017.pdf Ben Ayed, R., and M. Hanana. 2021. Artificial Intelligence to improve Food and Agriculture Sector. Journal of Food Quality. https://www.hindawi.com/journals/jfq/2021/5584754/. Birner, R., T. Daum, and C. Pray. 2021. Who Drives the Digital Revolution in Agriculture? A Review of Supply-Side Trends, Players and Challenges. Applied Economic Perspectives and Policy: 1–26. https://onlinelibrary.wiley.com/doi/full/10.1002/aepp.13145. Byrum, Joseph. 2019. The Challenges for Artificial Intelligence in Agriculture. plugandplaytechcentre.com, https://www.plugandplaytechcenter.com/resources/artificial-intelligence-agtech/ CGIAR. n.d. Colombia: The Online Authority on Rice. https://ricepedia.org/colombia CIAT. 2018. Is Big Data the Answer? Digitalising Agriculture to Make it Smarter. https://ciat. cgiar.org/annual-report-2017-2018/is-big-data-the-answer/ Fakhruddin, H. 2017. Precision Agriculture: Top 15 Challenges and Issues. https://teks.co.in/site/ blog/precision-agriculture-top-15-challenges-and-issues/ FAO. 2017. The Future of Food and Agriculture: Trends and Challenges. Rome. http://www.fao. org/3/i6583e/i6583e.pdf ———. 2019. Digital Technologies in Agriculture and Rural Areas: Status Report. https://www. fao.org/3/ca4985en/ca4985en.pdf ———. 2021. Small Family Farmers Produce a Third of World’s Food. http://www.fao.org/news/ story/en/item/1395127/icode/ Farming First, CGIAR. n.d. Celebrating Science and Innovation in Agriculture. https://farmingfirst.org/science-and-innovation#section_1 Farooq, Muhammad et al. 2020. Role of IoT Technology in Agriculture: A Systematic Literature Review. Feed the Future Ghana. 2018. Feed the Future Ghana Agriculture Technology Transfer Project FInal Project Report. USAID. https://pdf.usaid.gov/pdf_docs/PA00TQZV.pdf Fraser, Evan, and Sylvain Charlebois. 2016. Automated Farming: Good News for Food Security, Bad News for Job Security? The Guardian, Guardian News and Media, 18 February 2016. www.theguardian.com/sustainable-business/2016/feb/18/ automated-farming-food-security-rural-jobs-unemployment-technology “How to Ensure the Digital Revolution Leaves No One Behind.” Farming First, 15 May 2019. 
https://farmingfirst.org/digital-revolution-agriculture-leave-no-one-behind IAEA. n.d.. https://www.iaea.org/topics/greenhouse-gas-reduction IDB. 2019. IDB Lab, CIAT and SoftBank Partner to Promote Smart Rice Farming in Colombia. https://www.iadb.org/en/news/idb-l ab-c iat-a nd-s oftbank-p artner-p romote-s mart-r ice- farming-colombia
Role of Artificial Intelligence in Advancing Sustainable Development Goals…
397
IFDC. 2016. Audio and Video Campaigns Reach Farmers Where They Are. https://ifdc. org/2016/02/04/audio-and-video-campaigns-reach-farmers-where-they-are/ Khokar, Tariq. 2017. Chart: Globally 70% Freshwater is Used for Agriculture. World Bank. https:// blogs.worldbank.org/opendata/chart-globally-70-freshwater-used-agriculture. Kothari, Siddharth et al. 2020. How Artificial Intelligence Could Widen the Gap between Rich and Poor Nations. IMF Blog, 3 December 2020. https://blogs.imf.org/2020/12/02/ how-artificial-intelligence-could-widen-the-gap-between-rich-and-poor-nations/ Linaza, M.T., J. Posada, J. Bund, P. Eisert, M. Quartulli, J. Döllner, et al. 2021. Data-Driven Artificial Intelligence Applications for Sustainable Precision Agriculture. Agronomy 11 (6): 1227. https://www.mdpi.com/2073-4395/11/6/1227. Londono, Vilegas, D. J. et al. 2020. Closing Yield Gaps in Colombian Direct Seeding Rice Systems: A Stochastic Frontier Analysis.https://doi.org/10.15446/agron.colomb.v38n1.79470. Rubio, Veronica and Francisco Mas. 2020. From Smart-Farming Towards Agriculture 5.0: Review on Crop Data Management. Sen, S. 2019. Challenges of the Data Ecosystem in Agriculture. https://www.linkedin.com/pulse/ challenges-data-ecosystem-agriculture-satarupa-sen/ Softbank Corp. 2019. Corp. ‘s AI-powered ‘e-kakashi’ Solution for Sustainable Agriculture Adopted for Smart Rice Farming Project in Colombia. https://www.softbank.jp/en/corp/news/ press/sbkk/2019/20191028_02/ Talaviya, Tanha, Dhara Shah, Nivedita Patel, Hiteshri Yagnik, and Manan Shah. 2020. Implementation of Artificial Intelligence in Agriculture for Optimisation of Irrigation and Application of Pesticides and Herbicides. Artificial Intelligence in Agriculture 4: 58–73. https://www.sciencedirect.com/science/article/pii/S258972172030012X. UNDESA. 2021. Policy Brief #110: Time for Transformative Changes for SDGs: What the Data Tells Us. https://www.un.org/development/desa/dpad/publication/ un-desa-policy-brief-110-time-for-transformative-changes-for-sdgs-what-the-data-tells-us/ UNEP. Food Waste Index Report 2021. https://www.unep.org/resources/report/ unep-food-waste-index-report-2021. Villa-Henriksen, A., G.T. Edwards, L.A. Pesonen, O. Green, and C.A.G. Sørensen. 2020. Internet of Things in Arable Farming: Implementation, Applications, Challenges and Potential. Biosystems Engineering 191: 60–84. https://www.sciencedirect.com/science/article/pii/ S1537511020300039. Vinuesa, et al. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. https://www.nature.com/articles/s41467-019-14108-y WEF. 2021. Artificial Intelligence for Agriculture Innovation. http://www3.weforum.org/docs/ WEF_Artificial_Intelligence_for_Agriculture_Innovation_2021.pdf Wight, A. 2019. When It Comes To Tech, These Rice Farmers Are Outstanding in Their Field. https://www.forbes.com/sites/andrewwight/2019/09/15/ when-it-comes-to-tech-these-rice-farmers-are-outstanding-in-their-field/?sh=22f8918e372e World Bank, CIAT. 2014. Climate Smart Agriculture in Colombia. https://cgspace.cgiar.org/ handle/10568/51367 ———. n.d. Employment in Agriculture. https://data.worldbank.org/indicator/SL.AGR.EMPL.ZS World Bioenergy Association. 2020. Global Bioenergy Statistics 2020. http://www.worldbioenergy.org/uploads/201210%20WBA%20GBS%202020.pdf. Young, S. 2020. The Future of Farming: Artificial Intelligence and Agriculture. Harvard International Review. https://hir.harvard.edu/ the-future-of-farming-artificial-intelligence-and-agriculture/.
AI for Sustainable Agriculture and Rangeland Monitoring Natalia Efremova, James Conrad Foley, Alexey Unagaev, and Rebekah Karimi
Abstract This paper examines the applications of artificial intelligence (AI) and satellite imagery in sustainable agriculture, that is, how to allocate resources across the farmland based on the monitoring results from satellite imagery. We first propose a novel framework for addressing climate change-related problems in agri- food sector that considers recent advances in AI and earth observation (EO) data, which describes our approach on high level. We examine the existing Sustainable Development Goals and define a list of targets and indicators where using AI would be the most beneficial for practitioners, researchers, and policymakers. Next, we consider a case of a conservancy, where management needs to decide on how to allocate the resources in a sustainable way. In this case, the resources are cattle herds, which need to be moved across the conservancy for optimal grazing of grass and providing soil nutrition. We characterise the optimal resource allocation policy considering several physical biomonitoring parameters, such as grass biomass, leaf area, percentage of overgrazing, and many others. These parameters are monitored with satellite imagery in a weekly manner over the large territories. We propose an AI-based approach for fast and reliable interpretation of this imagery to provide insights for farmers in a fully automated manner. This monitoring is then combined with a simple resource allocation policy. Our results suggest that (i) the proposed framework can be applied for near real-time monitoring of large territories with a highly accurate estimation of biomonitoring parameters, (ii) the proposed resource allocation method outperforms existing rangeland monitoring practices, and (iii) it
N. Efremova (*) Queen Mary University London, London, UK e-mail: [email protected] J. C. Foley · A. Unagaev DeepPlanet, Oxford, UK e-mail: [email protected]; [email protected] R. Karimi Enonkishu Conservancy, Lemek, Kenya e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_22
399
400
N. Efremova et al.
can be used to estimate whether current agricultural practices are aligned with Sustainable Development Goals, specifically with SDG 2 “zero hunger”. Keywords AI · Climate change · Agricultural management · Rangeland monitoring · Sustainable Development Goals
1 Introduction 1.1 Problem Overview The framework of the 17 Sustainable Development Goals (SDGs) is a challenge for developers and researchers applying artificial intelligence (AI). The 169 targets are measured by 232 indicators each of which require a dedicated “evaluative infrastructure” (Kornberger et al. 2017). Statistical standard-setting within the United Nations is technically and politically complex. The estimated direct cost of measuring all SDGs is over $US 250 billion, excluding opportunity costs (Jerven 2019). Many indicators are at risk of elimination in the following assessment rounds by the technical commission of UN Statistics.1 If the global community does not come up with generally accepted methodologies and if countries are unable to adopt them effectively, the SDGs cannot be monitored which is a repetition of the failures in the preceding millennium development goals. The United Nations already have a loose network of actors and processes that are related to AI.2 We, therefore, argue that the primary purpose of the AI for SDGs framework is achieving the SDG target 17.19: building a systematic partnership to develop measurements of progress on sustainable development that complement GDP. Under the current framework, this target is primarily measured by the $US value of all resources made available to strengthen statistical capacity in developing countries (SDG 17.19.1). AI for Good can contribute in three ways. First, we can help decreasing the cost of data collection and analysis. Second, we can help to enhance the capacity for measurement. This systematic approach allows, thirdly, to embed AI solutions within direct interventions more effectively. One step towards this goal is using AI with earth observation data (EO). AI and EO can provide reliable and disaggregated data for better monitoring of the SDGs. Building on the existing work in relation to poverty and agricultural yields (Burke and Lobell 2017), our project is in the context of rangeland monitoring. Drawing on studies of calculative practices (Miller and Power 2013), Table 1 below shows four main roles of AI as part of an evaluative infrastructure, starting from a mapping role based on earth observation data.
https://unstats.un.org/sdgs/indicators/indicators-list/ https://unstats.un.org/sdgs/unsdg
1 2
401
AI for Sustainable Agriculture and Rangeland Monitoring
Earth observation data is freely available and highly accurate. These data can further be combined on the socio-economic, organisational, and institutional level whereby the roles of AI can be mediating, adjudicating, and ranking. Table 2 lists non-exhaustively the SDGs indicators that can be addressed by AI and EO data. Asterisk (*) indicates the SDGs, addressed by this project or those that potentially could be affected by the outcomes of the proposed approach. We distinguish between 1st generation and 2nd generation of AI-EO applications to SDGs (with more sophisticated AI applications). Some international bodies and working groups suggest the use satellite imaging data for several SDGs (DANE 2016, 2017a, b). Those proposals centre around SDGs, whose indicators primarily use geographic data. A further step is to use AI and EO in contexts where only small sample sizes are available or where states lack the capability to collect and analyse the data. Open-source GIS and data analysis techniques allow us to evaluate progress towards the SDGs and strengthen accountability (Efremova et al. 2019). The UN classifies indicators into three tiers according to two criteria.3 First, a generally accepted methodology exists (methodology criteria). Second, this methodology is widely adopted around the world and states generate sufficient data (adoption criteria). Tier 1 meet both methodology and adoption criteria. Tier 2 indicators do not meet either the methodology or adoption criteria, and tier 3 indicators fail to meet both. Therefore, the most significant contribution of AI and EO can be made regarding tier 2 and 3 indicators. We also note that we have found some tier 1 indicators that are insufficiently measuring the intended target (e.g. climate action targets 13.2 and 13b with Indicators 13.2.1 and 13.b.1). As a result, more SDGs could be identified for improving tier 1 indicators through a systematic AI and EO review. Tier 3 indicators would contribute most from the application of AI-based methods; therefore, the presented case study considers only SGDs with tier 3 indicators. In the next sections, we will discuss this case study in detail. However, first we have to dive deeper into how we can use AI and satellite data to tackle a few of the SDG targets and to present a top-down theoretical model of AI-EO SDG assessment using one of the targets in SDG 2 (zero hunger) as an example.
Table 1 Four main roles of AI as a part of evaluative infrastructure Type of data Geographic Socio-economic Organisational Institutional
Data analysis Constructing global calculative space Identifying needs and vulnerabilities Performance measurement Rating and standardisation
https://unstats.un.org/sdgs/iaeg-sdgs/tier-classification/
3
Role of AI Mapping Mediating Adjudicating Ranking
402
N. Efremova et al.
Table 2 A list of targets and indicators that can be assessed with AI, coupled with earth observation (EO) data SDGs, targets, and indicators 6. Clean water and 6.1.1* sanitation 6.3.1 6.3.2 6.6.1 9. Industries, innovation, and infrastructure 11. Sustainable cities and communities
15. Life on land
1. No poverty
2. Zero hunger
6. Clean water and sanitation 11. Sustainable cities and communities 13. Climate action
14. Life below water
9.1.1
Explanation Change in the extent of water-related ecosystems over time Proportion of wastewater safely treated Proportion of bodies of water with good ambient water quality Change in the extent of water-related ecosystems over time Proportion rural population living within 2 km of all-season road
11.3.1* Ratio of land consumption rate and population growth rate 11.7.1 Average proportion of the built surface of the cities corresponding to open spaces for the public use of all 15.1.1 Forest area as a proportion of total land area 15.2.1 Progress towards sustainable forest management 15.3.1* Proportion of land that is degraded over total land area 15.4.2 Mountain Green Cover Index 1.2.2* Proportion of men, women, and children of all ages living in poverty in all its dimensions according to national definitions 1.4.1 Proportion of population living in households with access to basic services 2.4.1* Proportion of agricultural area under productive and sustainable agriculture 2.5.1 Number of plant and animal genetic resources for food and agriculture secured in either medium- or long-term conservation facilities 6.5.2 Proportion of transboundary basin area with an operational arrangement for water cooperation 11.2.1 Proportion of population that has convenient access to public transport, by sex, age, and persons with disabilities 13.1.2 Number of countries that adopt and implement national disaster risk reduction strategies in line with the Sendai Framework 13.1.3 Proportion of local governments that adopt and implement local disaster risk reduction strategies in line with national strategies 13.2* Integrate climate change measures into national policies, strategies, and planning 13.b* Promote mechanisms for raising capacity for effective climate change-related planning and management in least developed countries 14.1.1 Index of coastal eutrophication and floating plastic debris density
Tier III III III III III
II II I III III III II
III III III
III II
II
II
I I
III
AI for Sustainable Agriculture and Rangeland Monitoring
403
1.2 AI-EO SDG Model: Zero Hunger Below, we propose a top-down approach that can be used to evaluate Indicator 2.4.1 “Proportion of agricultural area under productive and sustainable agriculture”. This task could be decomposed into several sub-tasks. We propose a top-down approach, where we first define high-level objectives, further decomposing them into smaller elements, each of which could be solved using one machine learning method each. In other words, we first want to obtain a large image of the land and classify it on different smaller regions, based on land-use similarity. After this, assign a sub-task to a smaller region of land, which can be used to provide actionable insights for this type of land. Finally, we combine the results of these sub-tasks and provide an overall managerial decision support for the whole region of interest, based on the combination of these individual recommendations (Fig. 1). Note that we only provide recommendations based on our observations, not substituting the decision-maker in this process. In the case of Indicator 2.4.1, a larger task will be finding all agricultural land in the region of interest and classifying it by the type of crop, growing in this region.
Fig. 1 A proposed framework for tackling SDG 2 (zero hunger) with AI and earth observation data (satellite imagery and ground measurements). Starting from Indicator 2.4.1, "Proportion of agricultural area under productive and sustainable agriculture", the framework comprises: satellite image segmentation (crop detection); detection of more efficient crops in terms of scarce resource consumption; estimation of the amount of nutrients in the soil; detection of soil moisture and salinity; crop yield prediction; and resource optimisation
The sub-tasks include detection of potentially more efficient crops in terms of scarce resource consumption (e.g. fresh water), estimation of the amount of nutrients in the soil, detection of soil moisture and salinity, and crop yield prediction. Finally, we look at the current usage of these resources (water, nutrients, and crop types) and decide whether the land is used sustainably or whether improvements can be made to the current land management style. The focus of this paper is on the final sub-task (resource optimisation); however, we show the importance of each of the steps needed to achieve this final goal.
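To make the decomposition concrete, the sketch below, a minimal illustration in Python rather than the authors' implementation, shows how the three stages (land-use segmentation, per-region sub-tasks, and combined recommendations) could be wired together; the function names, thresholds, and placeholder logic are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch of the top-down decomposition: segment the scene,
# run per-region sub-tasks, and combine the outputs into a recommendation.
# All function bodies are placeholders; a real system would plug in the
# segmentation and regression models discussed in Sect. 3.

def segment_land_use(scene: np.ndarray) -> np.ndarray:
    """Step 1: split the scene into land-use classes (dummy threshold here)."""
    return (scene.mean(axis=-1) > 0.5).astype(int)   # 1 = "agricultural", 0 = other

def run_subtasks(scene: np.ndarray, mask: np.ndarray) -> dict:
    """Step 2: per-region sub-tasks (soil moisture, yield, ...), here faked."""
    region = scene[mask == 1]
    if region.size == 0:
        return {"soil_moisture": 0.0, "predicted_yield": 0.0}
    return {
        "soil_moisture": float(region[:, 0].mean()),
        "predicted_yield": float(region[:, 1].mean()),
    }

def recommend(outputs: dict) -> str:
    """Step 3: combine sub-task outputs into a single managerial recommendation."""
    if outputs["soil_moisture"] < 0.2:
        return "improve irrigation efficiency before expanding cultivated area"
    return "current resource use appears sustainable; keep monitoring"

scene = np.random.rand(64, 64, 12)   # stand-in for a 12-band satellite patch
mask = segment_land_use(scene)
print(recommend(run_subtasks(scene, mask)))
```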
1.3 AI-EO SDG Model: Climate Change
The earth is experiencing widespread and rapid changes that are already affecting weather and climate across the globe. The scale of these changes is unprecedented, and they are linked to increasingly variable and extreme weather events, including heat waves, precipitation, drought, and storms. The human influence on these processes is unequivocal and driven by increases in greenhouse gas emissions, including CO2. Contemporary farming practices have a significant impact on climate change through soil erosion, contamination of groundwater supplies by excessive use of herbicides and fertilisers, carbon emissions, etc. (Houghton et al. 2012). It is estimated that agriculture, forestry, and other land-use activities account for 23% of total net anthropogenic GHG emissions, with forestry and land-use change (i.e. those emissions that do not relate directly to agriculture) accounting for 12.5% of global GHG emissions (IPCC Climate Change 2021; Houghton et al. 2012). Climate change, in turn, affects food production and the supply chain in multiple ways. It increases the likelihood of extreme weather events and reduces the predictability of weather, and non-optimal growing conditions for crops may become more likely. Farmers therefore need to adapt their practices to the changing environment to maintain the same level of crop yields. Additionally, supply chain actors need to know the location of farms, estimate the approximate yield of the farms in the region of interest, and increase reliance on local suppliers where possible to minimise the "climate costs" of mass food transportation. Finally, policymakers need access to the above-mentioned data to be able to address emerging problems as quickly as possible and to help growers and sellers act efficiently when unexpected climate events (such as floods, droughts, and fires) disrupt business as usual in both production and supply chain logistics. The proposed model can therefore be applied to the climate change target twice: first, to identify ways to minimise the negative effects of agricultural practices on climate change, and, second, to predict the negative effects of climate change on agriculture and other farming practices. The first application is analogous to the set of actions described in the previous section, since the goal is to minimise scarce resource consumption and maximise the produced output. The second, however, has a slightly different structure. We first identify the high-level regional changes that emerge in the region of
interest. For example, unpredictable changes in precipitation patterns can cause unusual floods and droughts. As a result, farmers cannot predict how much and when to irrigate, or how quickly the soil absorbs water when it rains. Precision water measurement and monitoring can reduce water waste by up to 18%. In urban monitoring, measures can be taken to prevent loss of and damage to property (Albert et al. 2017). Changes in temperature patterns can cause frost damage to agricultural crops. Changes in precipitation and temperature create favourable conditions for disease outbreaks throughout the year. All these changes can be monitored through time series analysis of EO data, and researchers can provide recommendations based on the detected anomalies in one or a few of these measurements. A particularly interesting application of AI and EO data is large-scale monitoring of carbon sequestered by plants (biotic carbon) and stored in the soil (soil organic carbon). Such monitoring can help farmers to assess the available carbon stock on the one hand and to improve current farming practices on the other. Assessment of carbon offset strategies requires inputs from multiple fields of science, including engineering, plant science, conservation, AI, and agriculture. Newly developed AI strategies can significantly improve the existing tools and can help to implement them on a global scale. The most natural way to mitigate carbon emissions is to estimate the natural uptake of CO2 by plants and soil. We can consider the following ways to sequester CO2 that can be monitored by machine learning tools: carbon sequestered in peatlands and forests, and the amount of carbon that could potentially be captured by afforestation of available regions, for example, land on which forests have been destroyed in the previous decade (Rolnick et al. 2019). A similar model, in a more complex form, can be applied to carbon stock assessment as well. Modelling (and pricing) carbon stored in forests requires us to assess how much is being sequestered or released across the planet. Most of a forest's carbon is stored in above-ground biomass, so tree species and heights are a good indicator of the carbon stock. The height of trees can be estimated accurately with synthetic aperture radar (SAR) imagery. Planting trees, also called afforestation, can be a means of sequestering CO2 over the long term. Up to 0.9 billion hectares of extra canopy cover could theoretically be added globally. However, care must be taken when planting trees to ensure a positive impact. Afforestation that comes at the expense of farmland could result in a net increase of GHG emissions. Moreover, planting trees without regard for local conditions and native species can reduce the climate impact of afforestation as well as negatively affect biodiversity. AI can help automate large-scale afforestation by locating appropriate planting sites, monitoring plant health, assessing weeds, and analysing trends. Soil organic carbon (SOC) is a valuable resource for mediating global climate change and securing food production. Despite an alarming rate of global plant diversity loss, uncertainties concerning the effects of plant diversity on SOC remain, because plant diversity not only stimulates litter inputs via increased productivity, thus enhancing SOC, but also stimulates microbial respiration, thus reducing SOC (Chen et al. 2019). Plant diversity can be assessed with the AI methods that we describe in detail in the following sections.
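As an illustration of the carbon stock reasoning above, the sketch below converts a canopy-height map (such as one derived from SAR) into a rough above-ground carbon estimate. The allometric coefficients are purely illustrative assumptions, and the 0.47 carbon fraction is a commonly used default rather than a value taken from this chapter.

```python
import numpy as np

# Sketch: convert a canopy-height map (e.g. derived from SAR) into a rough
# above-ground carbon estimate. The power-law coefficients are purely
# illustrative; real allometric models are species- and region-specific.
A, B = 0.05, 2.5           # assumed allometry: biomass (t/ha) ~ A * height**B
CARBON_FRACTION = 0.47     # commonly used default share of biomass that is carbon

def carbon_stock_tonnes(height_m: np.ndarray, pixel_area_ha: float) -> float:
    """Total above-ground carbon (tonnes) over a canopy-height raster."""
    biomass_per_ha = A * np.power(height_m, B)
    carbon_per_ha = CARBON_FRACTION * biomass_per_ha
    return float(np.sum(carbon_per_ha) * pixel_area_ha)

# Synthetic canopy heights (metres) standing in for a SAR-derived height map.
heights = np.clip(np.random.normal(15.0, 5.0, size=(100, 100)), 0.0, None)
print(f"Estimated carbon stock: {carbon_stock_tonnes(heights, pixel_area_ha=0.01):.0f} t")
```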
2 Background
2.1 Rangelands
Rangelands comprise approximately 70% of the global terrestrial area (Boone et al. 2018; Briske et al. 2015). Intact rangelands provide diverse ecosystem services as primary providers of carbon sequestration, hydrologic cycling, nutrient cycling, air purification, biodiversity, and cultural services (Holechek et al. 2020). Many of the ecosystem services provided by rangelands contribute to climate stability and are essential to human life (Brown and Thorpe 2008). The predominant use of rangeland is livestock grazing, essential to the livelihoods of 200 million Africans who rely on it for income from sales of milk, meat, and skins, and for protein consumption, draft power, and ritual and spiritual needs, among other uses (Boone et al. 2018; Hoffman and Vogel 2008). As rangelands are managed by ecological rather than agronomic means, they also provide forage for wildlife and are often dominated by native species (McCollum et al. 2017). The effects of climate change on tropical rangelands are likely to be negative, including sudden changes in climate and extreme weather events, variability in the livestock market, disease outbreaks, and unreliability of water sources (Herrero et al. 2016; Hoffman and Vogel 2008). Higher temperatures combined with drought will impair livestock production by negatively affecting animal physiological performance, increasing ectoparasite abundance, and reducing forage quality and quantity (Polley et al. 2017). Other human impacts, such as changes in land-use patterns, intensification of disturbances, and species introductions and movements, are likely to further challenge ecosystem integrity and functionality (Polley et al. 2017). Demand for livestock products is projected to increase as people's livelihoods improve (Niamir-Fuller et al. 2012) until the middle of this century, which will put increased pressure on livestock farmers to maximise stocking rates on rangelands (Boone et al. 2018). As drought becomes more common, feed subsidies will allow farmers to stock rangelands at an unsustainable rate, further degrading the rangeland and undercutting the linkage between economics and ecology (Holechek et al. 2020). Overgrazing and unsustainable farming of fertile patches of rangeland are likely to increase as the pressure from climate change rises (Eldridge et al. 2011). Eventually, however, climate change is likely to cause a decline in livestock of 7.5–9.6%, an economic loss of $9.7–12.6 billion, which will have a devastating effect on 550 million poor people (earning less than $1.25/day) who depend on livestock as one of their few or only assets, 58 million of whom rely on rangelands to support their livelihoods (Boone et al. 2018). The managers of livestock within rangelands are already utilising adaptive management and have the capability of adapting and changing the livestock industry as the climate grows more variable, a luxury many arable farmers are not privy to (Ash et al. 2012; McCollum et al. 2017). Mitigation measures include increasing
flexibility to take advantage of periods of favourable production by consistently adjusting stocking rates according to sustainable use of the rangelands, selecting breeds suited to the changing climate such as drought-resistant breeds, adapting pest management, implementing supplemental forage, securing additional water sources, preparing shelters for livestock where exposure to sun or extreme temperatures may increase, and even geographic relocation of cattle to a more suitable habitat, which may require securing land in different areas (Briske et al. 2015; Reeves et al. 2017). The strategic allocation of resources and the development of institutions and technology are critically necessary to efficiently prepare managers of livestock and rangeland for future climate variability by building a robust system for effective implementation of necessary adaptation strategies (Herrero et al. 2016). By implementing climate change mitigation strategies, livestock farmers have the capability of restoring the ecosystem services provided by rangelands by sustainably managing their herds and herd movements. Managed responsibly within a well-informed grazing plan, livestock can be utilised to rehabilitate degraded rangelands, enhancing the forage available to wildlife and revitalising carbon sequestration (Tyrell et al. 2017; Schuman et al. 2002). Livestock enclosures can be utilised to rehabilitate areas of severely degraded soil (Riginos et al. 2012). As rangelands experience increased pressure from lower rainfall and climate variability, successful rehabilitation of rangelands through responsible animal husbandry will be imperative (Popp et al. 2009). For many rangeland managers, current information on rangeland quality across vast landscapes is not readily available simply because of the size of the land. Oftentimes, herders use their traditional ecological knowledge to evaluate rangelands and determine where the herds should move next (Jamsranjav et al. 2019). Scientists have also developed field monitoring techniques to evaluate rangeland quality and inform herd movements (Bolo et al. 2019). Both of these techniques are effective but grow in complexity as the size of a managed landscape expands. Herders may not frequent all areas that their herds have access to, and unless the landscape is small, field monitoring may not be feasible, as it is labour-intensive and time-consuming to collect enough data to adequately inform decisions (Allred et al. 2021). As climate change is bound to demand more frequent management decisions, equipping rangeland managers with a full picture of rangeland quality across the landscape will enable them to proactively engage in sustainable rangeland management and maximise the benefits of herd movements. Combining satellite imagery with artificial intelligence to upscale existing field monitoring data has vast potential to promote the implementation of sustainable rangeland management and inform decision-making across entire landscapes of rangeland (Bestelmeyer et al. 2021; Jones et al. 2020). Already, rangelands in the Western United States have been evaluated using satellite imagery, and the information obtained is being used to inform decisions on a national level, especially in management decisions to restore degraded areas (Di Stéfano et al. 2020).
2.2 AI for Climate Change and Agriculture: Overview
Researchers in machine learning are helping to speed up progress in tackling the SDGs by working together with practitioners from various industries, from subsistence farmers to global tech giants, to implement state-of-the-art machine learning tools. As we have mentioned before, the two approaches to tackling climate change are mitigation and adaptation. Mitigation refers to reducing the causes of climate change, such as fossil fuel use and greenhouse gas emissions. Adaptation refers to minimising the consequences of climate change and dynamically adapting to shifts in weather and the environment overall. Integrated assessment models use climate science and socio-economic factors to understand the costs and benefits of different pathways and to find the lowest-cost one (Wedding et al. 2021a, b). Within the climate change mitigation framework, the following areas can be enhanced with ML tools: electricity systems, transportation, the industrial landscape, forestry and agriculture, and carbon dioxide tracking and removal. The adaptation framework includes climate prediction, reducing societal impacts, and solar geoengineering. Overarching frameworks refer to global policies, markets, education, and finance (Rolnick et al. 2019). Both mitigation and adaptation approaches find their applications in agri-tech and precision agriculture. On the mitigation side, multiple monitoring tools are relevant: water content monitoring, soil structure and soil carbon monitoring, crop detection, and crop disease monitoring. Agricultural practices are a major contributor to climate change. Large amounts of energy are consumed by chemical synthesis, irrigation, and farm machinery, while greenhouse gas emissions also arise from the decomposition of fertiliser and organic matter in soil. Industrial farming practices rely heavily on pesticides and fertilisers; phosphorus and nitrogen leaking into groundwater threaten human health and aquatic ecosystems; all of this leads to soil susceptibility to diseases and to droughts through loss of water-holding capacity.4 Irrigation is the largest freshwater use in the world, accounting for 70% of freshwater consumption, and monitoring techniques for soil moisture can be used to construct precision irrigation regimes that save water, making for a more sustainable world. Agricultural monitoring provides a timely and reliable way to assess the state of a field or farm and the surrounding territories and is used for gathering data and producing forecasts. Monitoring with satellite imagery and other remote sensing tools, such as drones, is becoming mainstream, since it provides rapid precision data across the entire globe. Monitoring can be performed with various tools, including ground observations, satellite, aeroplane and drone imagery, or sensor networks. These tools are widely used in weather monitoring, drought, hurricane and flooding prediction, and nutrient and water content detection in the soil.
4 https://www.forbes.com/sites/alexknapp/2018/11/13/trace-genomics-raises-13-million-to-give-cornand-soybean-farmers-insight-into-soil-health/?sh=3749b6de1f19
However, this process requires a lot of work: imagery often requires preprocessing to get it into a usable format, and beyond this it may need computer vision processing of satellite imagery (Demir et al. 2018), cloud removal (Singh and Komodakis 2018), and complex band calculations (Lees et al. 2020).
3 Methods
3.1 Overview of the Proposed Approach
The proposed approach utilises state-of-the-art artificial intelligence algorithms, combined with earth observation and historical data, to provide near real-time monitoring of rangelands, early warnings of negative events, and recommended actions to improve undesired situations. The method presented in this work shows that it is possible to perform automated monitoring of rangelands in an accurate and scalable way. This allows for rapid and frequent insights into the condition of areas, including their suitability for both grazing and nature conservation. Precision monitoring allows for the creation of adaptable grazing regimes which alter how long each area is grazed, or which area to move grazing to, depending on the condition of an area or the surrounding areas. This can in turn be used to help maximise the use of an area for both conservation and economic purposes. Satellite imagery has been used to monitor the health of vegetation since the inception of multispectral imagery, with vegetation indices such as the normalised difference vegetation index (NDVI) being widespread (Rouse et al. 1974). It allows for the monitoring of large areas without having to be physically present, drastically reducing labour and cost requirements. Machine learning methods have become popular tools for the analysis of satellite imagery and other remote sensing data. In particular, computer vision tasks such as classification from satellite imagery are a powerful tool to increase understanding of large areas (Albert et al. 2017; Iino et al. 2018). As such, these tools are ideally suited to the monitoring of community conservancies, giving the ability to understand the health of the entire conservancy and pinpoint areas which are less healthy and overgrazed, or to predict the yield of grass for cattle, allowing for sustainable harvest. The proposed methodology can be integrated within a web platform to provide weekly visualisation of over 18 essential biomonitoring parameters, such as overgrazing, bare soil, leaf area, grass biomass estimates, and others. It represents a novel framework for addressing climate change-related problems in the agri-food sector that draws on recent advances in machine learning (ML) and earth observation (EO) data and the usage of freely available satellite imagery (European Space Agency Sentinel-2 satellite). We provide guidelines for managers on how to effectively adjust how they plan, site, forecast, innovate, and develop products and services within the agri-food sector using EO data and ML tools. Besides the theoretical framework, this method contributes to the practice of adaptive rangeland management.
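Since the approach builds on vegetation indices such as NDVI (Rouse et al. 1974), a minimal sketch of the index calculation is given below; the random arrays stand in for real reflectance bands (for Sentinel-2, band 4 is red and band 8 is near-infrared).

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalised difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)   # eps avoids division by zero

# Random reflectances as a stand-in for a real Sentinel-2 scene
# (band 4 = red, band 8 = near-infrared).
red_band = np.random.rand(256, 256)
nir_band = np.random.rand(256, 256)
print("mean NDVI:", float(ndvi(nir_band, red_band).mean()))
```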
We show that strategic resource allocation, including cattle movement within the rangelands to adjust grass biomass, contributes to climate change mitigation measures on land, prevents soil erosion, and balances damage to the land from wildlife. The case study considers a practical tool, including novel ML techniques, to monitor 18 strategic parameters over large territories. The pilot was carried out at Enonkishu Conservancy (4000 acres) in the Maasai Mara ecosystem of Kenya and deployed with the support of the European Space Agency. The results of the pilot project showed a significant increase in the capacity to monitor rangeland territories, an opportunity to make more efficient management decisions, and an overall improvement of agricultural practices.
3.2 Data
This study was performed over the Enonkishu Conservancy in the Maasai Mara, Kenya. The Mara Serengeti ecosystem is in a vulnerable state, with the threat of human encroachment and associated activities such as extensive overgrazing and firewood and charcoal production. As the Mara conservancies provide a habitat for most of the biodiversity in the Mara region, it is imperative that areas of severe degradation are rehabilitated to support the biodiversity of wildlife and to minimise environmental impact due to water runoff on bare soil. Since 2013, Enonkishu (1705 hectares) and the Mara Training Centre have been conducting intensive monitoring of the vegetation to drive rangeland improvements such as regenerative grazing by livestock. While adjacent rangelands experience several hundred livestock fatalities annually, Enonkishu's regenerative grazing strategies have eliminated livestock fatalities. However, this monitoring has been very labour-intensive and as a result is difficult to scale to the entire Maasai Mara Serengeti region (250,000 hectares). Enonkishu is a research community conservancy with a structured grazing regime to determine sustainable levels of cattle grazing balanced with conservation needs. The area is monitored by a team of scientists, rangers, and volunteers who periodically perform transects of areas, recording several biomonitoring parameters in five 1 m quadrats for each sampling site (Weaver 1918). This has produced a dataset recorded since 2014, with on average one recording period per quarter, which was clipped to the date range for which satellite imagery is available. For each block, a total of 18 parameters were monitored for 6 years. Table 3 shows the most essential parameters (Figs. 2 and 3).
Table 3 Biomonitoring parameters (parameter: description; range)
Biomass: a measure of the total amount of harvestable grass; continuous distribution
Plant density: the percentage of a sample that is plant matter; 0–5 quantiles
Plant height: the height of the plant relative to the maximum possible height; 0–5 quantiles
Leaf area: the area of the sample which is leaves; 0–5 quantiles
Plant young: the percentage of the area where the plants are young; 0–5 quantiles
Plant mature: the percentage coverage of mature plants; 0–5 quantiles
Overgrazing: whether an area is overgrazed or not; binary
Fig. 2 Map of Enonkishu conservancy. The pins show the location of data collection sites in the conservancy
3.3 Model Architecture
To tackle the problem of biomonitoring parameter estimation, we use the top-down approach proposed in the introduction, as depicted in Fig. 1. This approach first identifies high-level properties of the region that we want to analyse, such as the type of land cover, and then uses them as input to predict measurements (in our case, biomonitoring parameters) for each land cover type. Finally, we provide actionable recommendations to farmers on resource optimisation, in this case cattle movement across the rangeland territory and prevention of further land degradation from overgrazing.
Fig. 3 The photos show the examples of the data, collected from quadrats
Fig. 4 Habitat classification with semantic segmentation
3.3.1 Habitat Classification
The first step is a high-level division of the region of interest into sub-types. To detect different types of land cover, we built a semantic segmentation model for habitat classification. In AI, a semantic segmentation model is a computer vision tool that detects which pixels of an image belong to the same class. We downloaded Sentinel-2 12-band satellite imagery over the conservancy from the start of available imagery (December 2016) until July 2020. These images were then filtered to remove cloudy images, leaving a dataset of 123 images. Atmospheric correction was then applied to produce images ready for data processing. The images taken during the flooding period were masked out. To fill in the gaps in the time series, linear interpolation was used between the preceding and subsequent time slices. From the two patches, randomly sampled sub-patches of 64 × 64 pixels, including all 12 bands, were used as the model input. Figure 4 below shows a few examples of such classification.
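The following is a minimal sketch of the preprocessing steps just described: cloud filtering, gap-filling of the time series by linear interpolation, and random sampling of 64 × 64 patches. The cloud threshold, helper names, and synthetic arrays are assumptions for illustration, not the exact pipeline used in the study.

```python
import numpy as np

def filter_cloudy(images: np.ndarray, cloud_fraction: np.ndarray, max_cloud: float = 0.2):
    """Keep only scenes whose estimated cloud fraction is below a threshold."""
    keep = cloud_fraction < max_cloud
    return images[keep], keep

def fill_gaps_linear(series: np.ndarray, valid: np.ndarray) -> np.ndarray:
    """Linearly interpolate the masked time steps, pixel by pixel (unoptimised)."""
    t = np.arange(series.shape[0])
    filled = series.copy()
    flat = filled.reshape(series.shape[0], -1)
    for j in range(flat.shape[1]):
        flat[~valid, j] = np.interp(t[~valid], t[valid], flat[valid, j])
    return filled

def sample_patches(image: np.ndarray, n: int = 8, size: int = 64) -> np.ndarray:
    """Randomly sample n sub-patches of size x size pixels, keeping all bands."""
    h, w, _ = image.shape
    rng = np.random.default_rng(0)
    ys = rng.integers(0, h - size, n)
    xs = rng.integers(0, w - size, n)
    return np.stack([image[y:y + size, x:x + size] for y, x in zip(ys, xs)])

# Synthetic stand-ins: 10 time steps of a 96 x 96 pixel, 12-band scene.
stack = np.random.rand(10, 96, 96, 12)
cloud_fraction = np.linspace(0.0, 0.9, 10)     # pretend the later scenes are cloudier
clear_scenes, kept = filter_cloudy(stack, cloud_fraction)
gap_filled = fill_gaps_linear(stack, kept)     # cloudy steps replaced by interpolation
patches = sample_patches(gap_filled[0])
print(clear_scenes.shape, patches.shape)       # (2, 96, 96, 12) (8, 64, 64, 12)
```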
We performed habitat classification using a U-Net model, trained on labelled habitat masks (Ronneberger et al. 2015). The labelled class masks were created using ground truth data and a geographic information system (GIS) expert, for a total of 11 labelled masks covering 6 classes (agriculture, bare soil, forest, grassland, shrubland, and water). For convenience, the data was split into two patches, with a separate mask for each patch. There were therefore two masks for each class, except for water, which was present in only one patch, resulting in a total of 11 masks.
3.3.2 Biomonitoring Parameter Estimation
For estimation of the sub-tasks, we used historical data collected on the ground and ESA Sentinel imagery downloaded for the same period (as described above). A series of 20 different vegetation and moisture indices were calculated from each image. For every sampling date in the biomonitoring dataset, the closest satellite image was determined, and the pixel values of all vegetation indices at the sampling locations were extracted. A random forest model (Breiman 2001) was constructed for each biomonitoring parameter, taking as input the three vegetation indices which best correlate with that parameter, the habitat type, and the season. The data was split into training and testing data on an 80–20 split. Once the models were trained and validated, the vegetation and habitat values for the entire conservation area were extracted from the satellite image and predicted on. The resulting prediction was then reshaped back into the shape of the conservancy, providing a predicted map across the entire area of interest. Overgrazing prediction was performed using the same dataset, but with a time series of four images used for each prediction. A deep learning model uses an LSTM autoencoder structure (Sutskever et al. 2014), where the input for each pixel is the 20 vegetation indices plus the season the image was taken in and the habitat type of the pixel; four of these, corresponding to the four most recent images, are stacked, and this time series is the input to the LSTM at each step. Because overgrazing is a rare event, it was a minority class in the binary classification model, making up ~15% of observations; to prevent overfitting to the majority class, synthetic data of the minority class was generated using the Synthetic Minority Oversampling Technique (SMOTE) within the training data to bring it up to a 0.5 ratio (Chawla et al. 2002). The model was then trained on this dataset until the binary cross-entropy error had converged and then tested on 20% of the data to determine accuracy.
3.3.3 AI Model Results
Overall, the performance of the AI models surpassed state-of-the-art model performance. On the land cover segmentation task, we achieved 79% accuracy (training loss, 0.32; validation loss, 0.1; test loss, 0.79), although the model showed some confusion between agricultural land and bare soil. One possible explanation is that for prolonged periods of time the crop is harvested on the agricultural area, leaving it bare. The biomass parameter could be predicted with very high accuracy: a training accuracy of 98% and a testing accuracy of 97% on biomass measures that can range from 1000 to 2500. In general, the areas of grassland and shrubland had higher biomass than forest areas, so for visualisation these were separated into two separate images (Fig. 5). The parameters which are classified into 0–5 quantiles had varying accuracy depending on the parameter; because these are discrete ordinal classes, the error was generally around 0.5–1. Each parameter had a different mean error and error distribution. Plant density estimation is shown in Fig. 6. Overgrazing is one of the most important parameters for conservancy management, as it allows managerial decisions to be made about moving cattle (mobile bomas) around the conservancy. The overgrazing model had a training accuracy of 98% and a validation accuracy of 92% (validating on 5% of the training data); on the testing dataset, there was an overall accuracy of 86%, with 84% accuracy on overgrazed and 92% accuracy on non-overgrazed pixels. Overgrazing was predicted across all grass and shrub areas, creating a binary image of overgrazed and non-overgrazed pixels (Fig. 7). The proposed approach achieved better performance than similar studies. Previous work has focused particularly on biomass estimation using imagery, as this is the primary feature of importance for determining sustainable grazing for livestock. Previous studies created regression models, including random forest regressions, to determine biomass and other plant health indicators. One study using high-resolution 0.5 m WorldView-2 imagery over rangelands in South Africa developed a random forest model explaining 84% of variation (Ramoelo et al. 2015), while another model using Sentinel-2 data on rangelands in Ethiopia achieved a similar accuracy of 0.87 (Meshesha et al. 2020). The model presented here outperforms both these studies using the same or lower resolution imagery. Such a good resulting accuracy indicates that the model can be applied for monitoring purposes in real-life applications.
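A minimal sketch of the biomonitoring estimation step described in Sect. 3.3.2 is given below, using synthetic data: a random forest regressor on the three best-correlated vegetation indices plus habitat and season, and SMOTE rebalancing of the rare overgrazing class. For brevity, the sketch trains a random forest classifier instead of the LSTM autoencoder used in the study, and all data and parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(42)

# Synthetic stand-in for the ground samples: 20 vegetation indices per sample,
# plus habitat type and season, with a continuous biomass target.
n = 500
indices = rng.random((n, 20))
habitat = rng.integers(0, 6, (n, 1))     # six habitat classes
season = rng.integers(0, 4, (n, 1))
biomass = 1000 + 1500 * indices[:, :3].mean(axis=1) + rng.normal(0, 50, n)

# As in the text, keep only the three indices that correlate best with the target.
corr = np.array([abs(np.corrcoef(indices[:, j], biomass)[0, 1]) for j in range(20)])
best3 = np.argsort(corr)[-3:]
X = np.hstack([indices[:, best3], habitat, season])

# Biomass regression with an 80-20 train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, biomass, test_size=0.2, random_state=0)
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("biomass R^2 on held-out data:", round(reg.score(X_te, y_te), 3))

# Overgrazing is a rare binary label (~15% positives); SMOTE generates synthetic
# minority samples in the training data up to a 0.5 minority/majority ratio.
overgrazed = (rng.random(n) < 0.15).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, overgrazed, test_size=0.2, random_state=0)
X_bal, y_bal = SMOTE(sampling_strategy=0.5, random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print("overgrazing accuracy on held-out data:", round(clf.score(X_te, y_te), 3))
```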
Fig. 5 Biomass estimation for grassland, shrublands, and forest land cover types
Fig. 6 Estimation of plant density across the rangelands. The colour coding shows the highest vegetation areas as blue and the lowest as red, respectively
Since these measurements are not critical, the resulting error can be neglected, and the results can be used for managerial decisions on the land. In this case, managerial decisions would be around managing the land, moving the cattle, etc.
3.3.4 Decision Support Model
We summarise a model that uses the above-mentioned parameters to predict the movement of the cattle in the region. We propose an AI method that uses the inputs from the algorithms described in the previous section to suggest the sequence of actions that maximises the usage of resources; in this case, the next allocation of mobile bomas (cattle) (Fig. 8). The inputs of the proposed model are the following (input: measurement and unit):
Habitat classification with the U-Net semantic segmentation model: 0 to 5 depending on the type of land surface (agriculture, bare soil, forest, grassland, shrubland, and water); we move the cattle between blocks that combine a few types of vegetation
Biomass (yield) prediction: continuous distribution
Plant density: 0–5 quantiles
Plant height: 0–5 quantiles
Leaf area: 0–5 quantiles
Plant young: 0–5 quantiles
Plant mature: 0–5 quantiles
Overgrazing: binary (0/1)
Soil moisture: continuous distribution
Anomaly detection: binary (0/1), whether there were anomalies or not
Fig. 7 Overgrazing prediction. Red areas indicate the places with highest degrees of overgrazing
The manager needs to decide whether to move cattle between the blocks and where to move them. Therefore, a time series of the input features in the current week should be considered for a meaningful prediction of cattle movement over the next few weeks. The resulting decision should be transparent to the decision-maker; we therefore need a tool that can provide an explanation of a recommended decision, so our choice of model included an explainability requirement. This problem can be treated as a classical resource allocation problem.
Fig. 8 Model implementation for efficient resource allocation
For this problem, a variety of models has been proposed in previous work, from dynamic programming (Kamien and Schwartz 1991; Boyabatlı et al. 2019) to graph neural networks (Cranmer et al. 2021). However, due to the restricted amount of data, we utilise a fuzzy logic approach to resource allocation (Badinelli 2012).
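The sketch below illustrates the kind of fuzzy rule that could map the monitored inputs to a move-the-cattle recommendation; the membership functions, rules, and thresholds are illustrative assumptions rather than the model of Badinelli (2012).

```python
import numpy as np

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def move_cattle_score(biomass_q: float, overgrazing: float, soil_moisture: float) -> float:
    """Fuzzy score in [0, 1]: how strongly the block suggests moving the herd.

    Rule 1: IF biomass is low AND overgrazing is high THEN move.
    Rule 2: IF soil moisture is low THEN move (land vulnerable to degradation).
    The rules are combined with a max (fuzzy OR)."""
    biomass_low = tri(biomass_q, -1.0, 0.0, 2.5)        # quantile scale 0-5
    moisture_low = tri(soil_moisture, -0.1, 0.0, 0.4)   # continuous 0-1 scale
    rule1 = min(biomass_low, overgrazing)               # fuzzy AND = min
    rule2 = moisture_low
    return max(rule1, rule2)                            # fuzzy OR = max

# Example: a sparse, heavily grazed, fairly dry block.
print(round(move_cattle_score(biomass_q=1.0, overgrazing=0.8, soil_moisture=0.15), 2))
```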
4 Discussion
This technology narrows the gap between theoretical research in the AI and space sectors and practical business applications in agriculture. We have described above a satellite imagery-based AI model, which was implemented in a browser-based application. The conservancy management was able to obtain new predictions for the whole conservancy on a weekly basis. With the pressing need to balance natural resources, wildlife conservation, and human livelihoods, better monitoring of non-urban areas is critical. With satellite imagery, crucial monitoring becomes possible on a scale that may help to prevent the further decline of nature. The methodologies presented here show that it is possible to perform automated monitoring of rangelands in an accurate and scalable way. This allows for rapid and frequent insights into the condition of areas, including their suitability for both grazing and nature conservation. Precision monitoring allows for the creation of adaptable grazing regimes which alter how long each area is grazed, or which area to move grazing to, depending on the condition of an area or the surrounding areas. This can in turn be used to help maximise the use of an area for both conservation and economic purposes. While this technology has been tested and applied on East African rangelands, the principles can be applied across numerous areas. Within Africa, desertification in the Sahel region on the edge of the Sahara Desert is expanding at an alarming rate every year. Overgrazing, initially by cattle and now by goats, has caused large-scale erosion of soil and desertification (Picardi and Seifert 1977). While actions are already being taken to counteract this (Kaptué et al. 2015), such as the Great Green Wall initiative (Picardi and Seifert 1977), there is still a large gap in monitoring, and tools like this may be able to monitor and advise current or future management plans. Beyond Africa, there is currently much attention on large-scale deforestation within the Amazon rainforest (Boëtsch et al. 2017; Shukla et al. 1990). Monitoring of the Amazon has used similar tools (Tucker and Townshend 2000; Brovelli et al. 2020; Werth and Avissar 2002), but applying them to monitor the condition of land post-deforestation could allow for the reuse of deforested areas with grazing schemes designed to prevent further deforestation. Many general pasture settings, such as meat or dairy cattle, could benefit from precision grazing regimes to maximise the health of the livestock animals while
minimising the damage to the land. Understanding the health of grazable land is a key component of sustainable agriculture.
4.1 Challenges and Economic Implications
One of the main bottlenecks in working with satellite imagery is the lack of data. One of the most popular satellite constellations, the European Space Agency (ESA) Sentinel-2, provides weekly high-resolution imagery worldwide. However, the imagery is accessible only from 2015, when the satellite was launched. At the same time, labelling earth observation imagery is a manual process that requires expert knowledge and is therefore very expensive. Both the lack of data and expensive labelling make it difficult to build AI systems for satellite imagery. Augmenting long-range datasets such as aerial and satellite imagery with manually collected high-resolution samples is a tedious task. Many state-of-the-art systems make use of publicly available materials and/or crowdsource data collection tasks. Both are unavailable in agricultural applications, where disturbances to the fields must be kept minimal. Climate change makes weather patterns unpredictable. Many plant communities are now experiencing rapid and significant changes in temperature, rainfall, and evaporation patterns, and a dramatic increase in the occurrence of extreme events. These changes in temperature and precipitation patterns make crops susceptible to disease (Burdon and Zhan 2020). Therefore, the amount of monitoring necessary to support farmers and to counteract this variability increases significantly. Currently, regular monitoring of large agricultural fields is performed manually or with the help of drone or satellite imagery, which is then assessed by an agricultural specialist. To automate these processes, we can use AI models to predict yield, monitor the spread of disease, understand overall vegetation health, predict crop maturity, and forecast harvest dates. However, to use deep learning models, we need tools and people to enrich the data so we can train, validate, and tune AI models. On the other hand, researchers have demonstrated that successful scaling up of computer vision-based models largely depends on data quantity and diversity (Abnar et al. 2021). In domains such as medicine and earth observation, where we collect data from multi-band sensors, data augmentation is an extremely difficult and resource-consuming task (Efremova and Erten 2021). Therefore, it is important to develop methods for cheap and efficient data collection together with AI/EO approaches.
4.2 Economic Outcomes
The project was implemented and launched in 2019 as a browser-based application. Overall, we observed the following outcomes of the project. First, pasture/rangeland customers reduced spending due to the economic downturn (COVID-19) and were
reluctant to use projects that incurred additional costs. At the same time, remote monitoring was essential to continue normal operations and decreased manual monitoring of the land by 50%, while access to ground data was also restricted. Despite these restrictions, the pretrained models continued to perform similarly well in the new agricultural season. Second, overgrazing in the next growing season was reduced by 10% by analysing the overgrazing patterns of the previous season. To estimate the long-term economic potential of this product, we need a further two to three seasons of observations. Finally, the proposed tool (specifically the grassland, shrubland, and forest biomass estimation) was considered useful for estimating the vegetation carbon stock over the conservancy and the larger Maasai Mara region, and we continue conversations with rangeland management on implementing such a capability in the region.
References Abnar, S., M. Dehghani, B. Neyshabur, and H. Sedghi. 2021. Exploring the Limits of Large-Scale Pre-training. arXiv preprint arXiv:2110.02095. Albert, A., J. Kaur, and M.C. Gonzalez. 2017. Using Convolutional Networks and Satellite Imagery to Identify Patterns in Urban Environments at a Large Scale. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1357–1366. Allred, B.W., M.K. Creutzburg, J.C. Carlson, C.J. Cole, C.M. Dovichin, M.C. Duniway, and B. Zhou. 2021. Guiding Principles for Using Satellite-Derived Maps in Rangeland Management. Rangelands. https://doi.org/10.1016/j.rala.2021.09.004. Ash, A., P. Thornton, C.R.S. Stokes, and C. Togtohyn. 2012. Is Proactive Adaptation to Climate Change Necessary in Grazed Rangelands? Rangeland Ecology & Management 65 (6): 563–568. Badinelli, R. 2012. Fuzzy Modelling of Service System Engagements. Service Science 4 (2): 135–146. Bestelmeyer, B.T., Spiegal, S., Winkler, R., James, D., Levi, M., and Williamson, J. 2021. Assessing sustainability goals using big data: collaborative adaptive management in the Malpai Borderlands. Rangeland Ecology & Management, 77, 17–29. Boëtsch, G., P. Duboz, A. Guissé, J.L. Peiry, D. Goffner, A. Niang, C. Diagne, L. Gueye, and P. Sarr. 2017. Climate Change and Desertification in Africa: The Great Green Wall. In Cop 23: Convention-Cadre Des Nations Unies Sur Les Changements Climatiques. Bolo, P.O., R. Sommer, J. Kihara, M. Kinyua, S. Nyawira, and A.M.O. Notenbaert. 2019. Rangeland Degradation: Causes, Consequences, Monitoring Techniques and Remedies. Frontiers in Environmental Science. https://doi.org/10.3389/fenvs.2022.960345. Boone, R.B., R.T. Conant, J. Sircely, P.K. Thornton, and M. Herrero. 2018. Climate Change Impacts on Selected Global Rangeland Ecosystem Services. Global Change Biology 24 (3): 1382–1393. Boyabatlı, O., J. Nasiry, and Y. Zhou. 2019. Crop Planning in Sustainable Agriculture: Dynamic Farmland Allocation in the Presence of Crop Rotation Benefits. Management Science 65 (5): 2060–2076. Breiman, L. 2001. Random Forests. Machine Learning 45 (1): 5–32. Briske, D.D., L.A. Joyce, H.W. Polley, J.R. Brown, K. Wolter, J.A. Morgan, et al. 2015. Climate- Change Adaptation on Rangelands: Linking Regional Exposure with Diverse Adaptive Capacity. Frontiers in Ecology and the Environment 13 (5): 249–256.
Brovelli, M.A., Y. Sun, and V. Yordanov. 2020. Monitoring Forest Change in the Amazon Using Multi-Temporal Remote Sensing Data and Machine Learning Classification on Google Earth Engine. ISPRS International Journal of Geo-Information 9 (10): 580. Brown, J.R., and J. Thorpe. 2008. Climate Change and Rangelands: Responding Rationally to Uncertainty. Rangelands 30 (3): 3–6. Burdon, J.J., and J. Zhan. 2020. Climate Change and Disease in Plant Communities. PLoS Biology 18 (11): e3000949. Burke, M., and D. Lobell. 2017. Satellite-Based Agricultural Yield and Poverty Measures. US Agency for International Development, $1.8 Million, 2017–2020. Chawla, N.V., K.W. Bowyer, L.O. Hall, and W.P. Kegelmeyer. 2002. SMOTE: Synthetic Minority Over-Sampling Technique. Journal of Artificial Intelligence Research 16: 321–357. Chen, L., Liu, L., Qin, S. et al. 2019. Regulation of priming effect by soil organic matter stability over a broad geographic scale. Nat Commun 10, 5112. https://doi.org/10.1038/s41467-019-13119-z Cranmer, M., P. Melchior, and B. Nord. 2021. Unsupervised Resource Allocation with Graph Neural Networks. NeurIPS 2020. DANE. 2016. Use of Satellite Images to Calculate Statistics on Land Cover and Land Use. The Group on Earth Observations. ———. 2017a. Applying Earth Observation Data to Monitor SDGs in Colombia: Towards Integration of National Statistics and Earth Observations for SDG Monitoring in Colombia. IAEG-SDGs Working Group on Geospatial Information: Draft Summary Report. ———. 2017b. Progress and Stride in the Integration of Statistical and Geospatial Information for Sustainable Cities. The Group on Earth Observations. Demir, I., K. Koperski, D. Lindenbaum, G. Pang, J. Huang, S. Basu, F. Hughes, D. Tuia, and R. Raskar. 2018. Deepglobe 2018: A Challenge to Parse the Earth Through Satellite Images. CVPR: 2018 abs/1805.06561. http://arxiv.org/abs/1805.06561. arXiv:1805.06561. Di Stéfano, S., T. Fletcher, V. Jansen, C. Jones, and J.W. Karl. 2020. Rangeland Ecology & Management Highlights. Rangelands 42 (5): 174–177. Efremova, N., and E. Erten. 2021. Biophysical Parameter Estimation Using Earth Observation Data in a Multi-Sensor Data Fusion Approach: CycleGAN. In IEEE International Geoscience and Remote Sensing Symposium. Efremova, N., D. West, and D. Zausaev. 2019. AI-Based Evaluation of the SDGs: The Case of Crop Detection with Earth Observation Data. In AI for Social Good Workshop, ICLR 2019. Eldridge, D.J., R.S. Greene, and C. Dean. 2011. Climate Change Impacts on Soil Processes in Rangelands. In Soil Health and Climate Change, 237–255. Berlin/Heidelberg: Springer. Herrero, M., J. Addison, C. Bedelian, E. Carabine, P. Havlík, B. Henderson, et al. 2016. Climate Change and Pastoralism: Impacts, Consequences and Adaptation. Revue Scientifique et Technique 35: 417–433. Hoffman, T., and C. Vogel. 2008. Climate Change Impacts on African Rangelands. Rangelands 30 (3): 12–17. Holechek, J.L., H.M. Geli, A.F. Cibils, and M.N. Sawalhah. 2020. Climate Change, Rangelands, and Sustainability of Ranching in the Western United States. Sustainability 12 (12): 4942. Houghton, R.A., J.I. House, J. Pongratz, et al. 2012. Carbon Emissions from Land Use and Land- Cover Change. Biogeosciences 9 (12): 5125–5142. https://doi.org/10.5194/bg-9-5125-2012. Iino, S., R. Ito, K. Doi, T. Imaizumi, and S. Hikosaka. 2018. CNN-Based Generation of High- Accuracy Urban Distribution Maps Utilising SAR Satellite Imagery for Short-Term Change Monitoring. International Journal of Image and Data Fusion 9 (4): 302–318. 
IPCC Climate Change. 2021. 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge/ New York: Cambridge University Press. Jamsranjav, C., M.E. Fernández-Giménez, R.S. Reid, and B. Adya. 2019. Opportunities to Integrate Herders’ Indicators into Formal Rangeland Monitoring: An Example from Mongolia. Ecological Applications 29 (5): e01899.
Jerven, M. 2019. Benefits and Costs of the Data for Development Targets for the Post-2015 Development Agenda. Data for Development Assessment Paper Working Paper. Jones, M.O., Naugle, D. E., Twidwell, D., Uden, D.R., Maestas, J.D., and Allred, B.W. 2020. Beyond inventories: emergence of a new era in rangeland monitoring. Rangeland Ecology & Management, 73(5), 577–583. Kamien, M.I., and N.L. Schwartz. 1991. Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management. 2nd ed, 261. New York: Elsevier. ISBN 978-0-444-01609-6. Kaptué, A.T., L. Prihodko, and N.P. Hanan. 2015. On Regreening and Degradation in Sahelian Watersheds. Proceedings of the National Academy of Sciences 112 (39): 12133–12138. Kornberger, M., D. Pflueger, and J. Mouritsen. 2017. Evaluative Infrastructures: Accounting for Platform Organization. Accounting, Organizations and Society 60: 79–95. Lees, T., G. Tseng, S. Dadson, A. Hernandez, C.G. Atzberger, and S. Reece. 2020. A Machine Learning Pipeline to Predict Vegetation Health, ICLR Workshop on Tackling Climate Change with ML. arXiv:2003.10823. McCollum, D.W., J.A. Tanaka, J.A. Morgan, J.E. Mitchell, W.E. Fox, K.A. Maczko, et al. 2017. Climate Change Effects on Rangelands and Rangeland Management: Affirming the Need for Monitoring. Ecosystem Health and Sustainability 3 (3): e01264. Meshesha, Derege Tsegaye, Muhyadin Mohammed Ahmed, Dahir Yosuf Abdi, and Nigussie Haregeweyn. 2020. Prediction of Grass Biomass from Satellite Imagery in Somali Regional State, Eastern Ethiopia. Heliyon 6 (10): 5272. Miller, P., and M. Power. 2013. Accounting, Organizing, and Economizing: Connecting Accounting Research and Organization Theory. The Academy of Management Annals 7 (1): 557–605. Niamir-Fuller, M., C. Kerven, R. Reid, and E. Milner-Gulland. 2012. Co-existence of Wildlife and Pastoralism on Extensive Rangelands: Competition or Compatibility? Pastoralism Research Policy and Practice 2 (1). https://doi.org/10.1186/2041-7136-2-8. Picardi, A.C., and W.W. Seifert. 1977. A Tragedy of the Commons in the Sahel. Ekistics 43 (258): 297–304. Polley, H.W., D.W. Bailey, R.S. Nowak, and M. Stafford-Smith. 2017. Ecological Consequences of Climate Change on Rangelands. In Rangeland Systems, 229–260. Cham: Springer. Popp, A., N. Blaum, and F. Jeltsch. 2009. Ecohydrological Feedback Mechanisms in Arid Rangelands: Simulating the Impacts of Topography and Land Use. Basic and Applied Ecology 10 (4): 319–329. Ramoelo, Abel, M.A. Cho, R. Mathieu, S. Madonsela, R. van de Kerchove, Z. Kaszta, and E. Wolff. 2015. Monitoring Grass Nutrients and Biomass as Indicators of Rangeland Quality and Quantity Using Random Forest Modelling and WorldView-2 Data. International Journal of Applied Earth Observation and Geoinformation 43: 43–54. Reeves, M.C., K.E. Bagne, and J. Tanaka. 2017. Potential Climate Change Impacts on Four Biophysical Indicators of Cattle Production from Western US Rangelands. Rangeland Ecology & Management 70 (5): 529–539. Riginos, C., L.M. Porensky, K.E. Veblen, W.O. Odadi, R.L. Sensenig, D. Kimuyu, et al. 2012. Lessons on the Relationship Between Livestock Husbandry and Biodiversity from the Kenya Long-Term Exclosure Experiment (KLEE). Pastoralism: Research, Policy and Practice 2 (1): 1–22. Rolnick, D., P.L. Donti, L.H. Kaack, K. Kochanski, A. Lacoste, K. Sankaran, and Y. Bengio. 2019. Tackling Climate Change with Machine Learning. ACM Computing Surveys 55 (2): 1–96. Ronneberger, O., P. Fischer, and T. Brox. 2015. 
U-net: Convolutional Networks for Biomedical Image Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241. Cham: Springer. Rouse, J.W., R.H. Haas, J.A. Scheel, and D.W. Deering. 1974. Monitoring Vegetation Systems in the Great Plains with ERTS. In Proceedings, 3rd Earth Resource Technology Satellite (ERTS) Symposium, vol. 1, 48–62.
Schuman, G.E., H.H. Janzen, and J.E. Herrick. 2002. Soil Carbon Dynamics and Potential Carbon Sequestration by Rangelands. Environmental Pollution 116 (3): 391–396. Shukla, J., C. Nobre, and P. Sellers. 1990. Amazon Deforestation and Climate Change. Science 247 (4948): 1322–1325. Singh, P., and N. Komodakis. 2018. Cloud-gan: Cloud Removal for Sentinel-2 Imagery Using a Cyclic Consistent Generative Adversarial Networks, 1772–1775. https://doi.org/10.1109/ IGARSS.2018.8519033. Sutskever, I., O. Vinyals, and Q.V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems, 3104–3112. Tucker, C.J., and J.R. Townshend. 2000. Strategies for Monitoring Tropical Deforestation Using Satellite Data. International Journal of Remote Sensing 21 (6–7): 1461–1471. Tyrell, P., S. Russel, and D. Western. 2017. Seasonal Movements of Wildlife and Livestock in a Heterogeneous Pastoral Landscape: Implications for Coexistence and Community Based Conservation. Global Ecology and Conservation 12: 59–72. Weaver, J.E. 1918. The Quadrat Method in Teaching Ecology. The Plant World 21 (11): 267–283. Wedding, L.M., M. Moritsch, G. Verutes, K. Arkema, E. Hartge, J. Reiblich, J. Douglass, S. Taylor, and A.L. Strong. 2021a. Incorporating Blue Carbon Sequestration Benefits into Sub-national Climate Policies. Global Environmental Change 69: 102206. Wedding, L., M. Moritsch, G. Verutes, K. Arkema, E. Hartge, J. Reiblich, and A. Strong. 2021b. Incorporating Blue Carbon Sequestration Benefits into Sub-national Climate Policies. Global Environmental Change 2021: 102206. Werth, D., and R. Avissar. 2002. The Local and Global Effects of Amazon Deforestation. Journal of Geophysical Research-Atmospheres 107 (D20): LBA-55.
Artificial Neural Networks Predict Sustainable Development Goals Index Seyed-Hadi Mirghaderi
Abstract The Sustainable Development Goals Index is an important index for measuring movement toward the sustainable development goals. However, many indicators are needed to compute the index. This chapter aims to show operationally that artificial intelligence techniques may help tackle the problem of the high number of indicators. The chapter uses a combination of two well-known techniques: artificial neural networks and genetic algorithms. In total, 288 indicators for 127 countries were extracted from 7 global reports, and the collinear and ineffective ones were removed, leaving 90 indicators. A combination of genetic algorithms and artificial neural networks was used to find the best subset of the remaining indicators that provides a simple system for predicting the Sustainable Development Goals Index. The results revealed that an artificial neural network with just four nodes and the indicators "Deaths from infectious diseases," "ICT use," "Expenditure on education," and "Assessment in reading, mathematics, and science" can predict the index with an accuracy rate of 97%. This chapter also validates the role of innovation in meeting the Sustainable Development Goals (SDGs) and uncovers the insignificant role of environmental indicators in the Sustainable Development Goals Index.
Keywords Sustainable Development Goals Index · Artificial neural network · Genetic algorithm · Feature selection · Global reports
S.-H. Mirghaderi (*) Department of Management, School of Economics, Management, and Social Sciences, Shiraz University, Shiraz, Iran e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_23
1 Introduction
Sustainable development (SD) refers to intergenerational equity and aims to optimize consumption while supporting the needs of future generations (Keeble 1988). SD has three pillars, environmental, social, and economic, which are interconnected (Brusseau 2019). SD has gradually received tremendous attention from academics, politicians, business people, and economists (Omri 2020) due to the revealed urgency of some global environmental issues (Elliott 2012), which led to the international consensus on 17 Sustainable Development Goals (SDGs) for a better future. The agreement on the SDGs was approved by all 193 members of the United Nations (Sachs et al. 2017) and provides a basis for systematic and coordinated actions to shape a sustainable future in the global village (Costanza et al. 2016). Global goal setting for tackling world challenges in their environmental, social, and economic aspects is the underlying rationale for the SDGs (Leal Filho 2020). Despite this excellent rationale, progress toward the SDGs is a problematic issue (Xu et al. 2020) that needs to be addressed. Although the UN Statistical Commission has proposed the Sustainable Development Goals Index (SDGI), including 230 indicators for assessing development toward the SDGs (Schmidt-Traub et al. 2017), there are many SDGI measurement problems, such as the lack of systematic methods (Xu et al. 2020), lack of valid data (Schmidt-Traub et al. 2017), complicated interrelationships among SDGs (Costanza et al. 2016), and the neglect of uncertainty in the SDGs (Ruiz-Morales et al. 2021). Therefore, proposing a simple alternative method for predicting the SDGI is valuable for practitioners and academics. To simplify SDGI prediction, we need a small number of suitable indicators selected from a pool of indicators (Hák et al. 2016) presented in global reports. Global reports consist of indicators and indices which aim to pave the way for sustainable development (Shaker 2018). Although there are some indexes for sustainability, it is hard to draw a clear big picture of sustainability through them (Iddrisu and Bhattacharyya 2015). Furthermore, there is no single index that is widely adopted by scientists and politicians (Strezov et al. 2017). However, there is a wide range of indicators with collected data in global reports, which attracts researchers to reuse them to create sustainability measurement systems; examples of this approach include Iddrisu and Bhattacharyya (2015), Strezov et al. (2017), and Shaker (2018). Creating an SD measurement system using this approach requires addressing a specific problem: selecting a list of suitable indicators. The indicators must contribute to producing an efficient and uncomplicated SD measurement or prediction system. The selection of indicators (or variables) is a well-known optimization problem in the artificial intelligence (AI) field (Alweshah et al. 2020), which encompasses a wide range of proposed methods (George 2000), from statistical techniques (Borah et al. 2014) to heuristic search algorithms (Gnana et al. 2016) to neural networks (Chakraborty 1999). Also, prediction can be performed using several AI techniques
(Collins and Moons 2019), such as artificial neural networks (ANNs) and the genetic algorithm (GA). ANNs are a well-known AI technique inspired by the human brain (Okwu and Tartibu 2021), and GA is a metaheuristic algorithm inspired by the biological evolution of creatures (Mirjalili 2019). ANNs and GA appear useful for finding suitable indicators to create a system for predicting SDGI values. In other words, the problem of too many indicators and a hard-to-calculate SDGI may be tackled by combining ANNs and GA. The remainder of this chapter is organized as follows. Sections 2, 3, and 4 provide a brief review of the SDGI, ANNs, and GA, respectively. Section 5 presents the research method, and Sect. 6 provides the results. Finally, the conclusion is presented in Sect. 7.
2 Sustainable Development Goals Index (SDGI)
In September 2000, 147 developing countries agreed on the Millennium Development Goals (MDGs) to prove their commitment to tackling global challenges such as hunger, poverty, disease, homelessness, and exclusion while enhancing environmental sustainability, gender equality, and education (Sachs and McArthur 2005). Based on the agreement, they set eight goals for the period between 2000 and 2015: (1) eradicate extreme poverty and hunger; (2) achieve universal primary education; (3) promote gender equality and empower women; (4) reduce child mortality; (5) improve maternal health; (6) combat HIV/AIDS, malaria, and other diseases; (7) ensure environmental sustainability; and (8) develop a global partnership for development (Kroll 2015). When the MDGs expired, in September 2015, all UN members agreed on 17 goals for the period 2015–2030 (Kroll 2015): (1) no poverty; (2) zero hunger; (3) good health and well-being; (4) quality education; (5) gender equality; (6) clean water and sanitation; (7) affordable and clean energy; (8) decent work and economic growth; (9) industry, innovation, and infrastructure; (10) reduced inequality; (11) sustainable cities and communities; (12) responsible consumption and production; (13) climate action; (14) life below water; (15) life on land; (16) peace, justice, and strong institutions; and (17) partnerships to achieve the goals (UN 2021). The SDGs are broader and more complex than the MDGs. They are interrelated (Costanza et al. 2016) and cover the environmental, social, and economic aspects of SD (Allen et al. 2019). As Berglund and Gericke (2016) stated, SD is a complicated concept that is not measurable unless it is broken down into specific global indicators. As Fig. 1 shows, the SDGI has four layers. To measure the SDGI, 169 targets and 232 indicators were developed in 2019 (Barbier and Burgess 2019), but the number of indicators was decreased to 115 in 2020 (Sachs et al. 2020). Although the targets and indicators help monitor the status quo of countries (Alaimo et al. 2021), there are some criticisms regarding the large number of indicators, the interrelationships between goals, missing indicator values, etc.
Fig. 1 Pyramid of SDGI: Goals, Targets, Indicators, Observations. (Source: Reyers et al. 2017)
In recent years, researchers have tried to address these criticisms and propose modifications to the SDGI. For example, Xu et al. (2020) proposed a measurement system for quantifying China's progress on the SDGs; the system encompasses 119 indicators divided among the 17 SDGs. Horan (2020) introduced a new version of the SDGI based on the interrelations between targets, arguing that it helps communicate with different stakeholders and supports an integrated approach to implementing the SDGs. Ruiz-Morales et al. (2021) proposed a new way of aggregating the value of each SDG using the ordered weighted average (OWA) and prioritized OWA operators to capture the uncertainty of the SDGs. Bali Swain and Yang-Wallentin (2020) quantified and prioritized the SDGs and their relations to SD to suggest how countries can improve their SDGI by focusing on different aspects of SD.
3 Artificial Neural Networks (ANNs)
ANNs are a significant part of artificial intelligence (Wu and Feng 2018) and have attracted much attention since the 1980s (Wu and Feng 2018). The idea of ANNs was inspired by the biology of the human nervous system, which consists of a network of neurons, a neural network. This network is an interconnected web of an enormous number of neurons that process collected data in parallel (Mishra and Srivastava 2014) to solve a specific problem (Abiodun et al. 2018), especially when the network is as dense as a human brain. In the brain, chemical reactions produce signals that play an essential role in controlling brain activities and creating a basis for learning (Russell and Norvig 2021). According to one hypothesis, the learning process occurs at the connection points between two neurons when the connection intensity changes (Wu and Feng 2018). Scientific attempts to model the operation of the nervous system with mathematical formulations resulted in ANNs (Sivanandam and Deepa 2006). Although ANNs try to imitate brain function, they come nowhere near capturing the brain's complexity. There are, however, two significant similarities between the brain and ANNs: both are constructed from highly interconnected simple computational elements (neurons), and the network's function is determined by the connections between neurons (Hagan et al. 2016). In ANNs, each connection between neurons is denoted by a number named a weight
Fig. 2 Simple Neuron in ANNs. (Source: Aggarwal 2018)
(Wang 2003). The weight scales each input to a neuron and affects the function inside the neuron (Fig. 2) (Aggarwal 2018). The weights are dynamically adjusted based on the processing of specific inputs and the difference between the actual and desired output (Floridi 2002). This weight-updating process is the essence of learning (Ding et al. 2013); it can uncover patterns in data and predict outputs, often better than many statistical tools (Paliwal and Kumar 2009). Because of the capability of ANNs to solve problems such as clustering, pattern recognition, and prediction in nonlinear and complex systems, their application has expanded into many disciplines, such as engineering, medicine, agriculture, mining, business, finance, arts, and technology (Abiodun et al. 2018). In general, ANNs have succeeded in providing high-accuracy results for problems in many disciplines (Gue et al. 2020). Like other disciplines, sustainability has also taken advantage of ANNs. For example, Antanasijević et al. (2013) developed a model for predicting PM10 emissions at the national level. Gue et al. (2020) performed a critical review of the use of ANNs in contributing to SD; the study revealed that SDGs 6, 7, 11, and 12 have made the most use of ANNs, for both modeling and prediction. Emmanuel et al. (2020) proposed the design of a neural network-based system for predicting the first six SDGs in less developed countries using patterns in big data.
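To make the weight mechanism concrete, the following minimal sketch (illustrative only; the learning rate, activation choice, and toy data are assumptions) implements a single neuron of the kind shown in Fig. 2: inputs are scaled by weights, summed with a bias, passed through a sigmoid, and the weights are adjusted in proportion to the error between the actual and desired output.

```python
import numpy as np

def neuron(x, w, b):
    """Weighted sum of inputs passed through a sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# Toy example: one neuron learning to map an input vector to a target value.
rng = np.random.default_rng(0)
x, target = np.array([0.2, 0.7, 0.1]), 0.9
w, b, lr = rng.normal(size=3), 0.0, 0.5

for _ in range(100):
    y = neuron(x, w, b)
    error = target - y               # difference between desired and actual output
    grad = error * y * (1 - y)       # error scaled by the sigmoid derivative
    w += lr * grad * x               # adjust each weight in proportion to its input
    b += lr * grad

print(round(neuron(x, w, b), 3))     # should move close to 0.9 after training
```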
4 Genetic Algorithm (GA)
The GA was introduced by John Holland in the 1960s as an optimization algorithm inspired by evolution in nature (Moriarity 2021). Evolution, as Charles Darwin (1859) discovered, is based on the "survival of the fittest"; creatures adapted to their environment survive more often than others. The fittest creatures have a higher chance of living and reproducing the next generation (Badar 2021), while less fit ones have a lower chance. Survival of the best is the principle of the evolutionary process (Sivanandam and Deepa 2008). As Kramer (2017) stated, evolution is a fruitful optimization process that can be observed in creatures, which utilize evolution-based strategies to produce near-optimal solutions to complicated problems (Moriarity 2021).
Fig. 3 GA procedure. (Source: Badar 2021)
The GA uses a simulated evolution process to find near-optimal solutions (Badar 2021) iteratively, through three biologically inspired operators named selection, crossover, and mutation (Katoch et al. 2021). Selection refers to choosing a certain number of current solutions for producing the next generation. Crossover means creating new solutions by combining existing solutions. Mutation is used to generate a different solution by manipulating a current solution. The selection operator can be implemented in several ways, e.g., elite replacement (copying the best solution to the next generation as it is) and roulette wheel selection (selecting solutions with probabilities related to the fitness function, so a better solution has a higher chance of being selected) (Badar 2021). One technique for implementing crossover is the random respectful crossover, which preserves the similarity of current solutions and randomly selects different points to create new solutions (Umbarkar and Sheth 2015). Mutation techniques try to explore the search space and increase the diversity of solutions (Moriarity 2021); they are implemented using methods such as randomly selecting a solution and changing a random point in it. The GA procedure is presented in Fig. 3. The GA is a flexible and attractive metaheuristic search algorithm with many applications (Kramer 2017). Due to this capability, it is the most implemented and researched metaheuristic, with a vast number of published variants (Badar 2021). Nowadays, the GA is part of many applications in the artificial intelligence field (Moriarity 2021) that aim to create methods that mimic, and even outperform, human intelligence (Kramer 2017).
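To make the three operators concrete, the following toy sketch (illustrative only, not tied to this chapter's data) evolves bit strings with roulette wheel selection, one-point crossover, point mutation, and elite replacement:

```python
import random

random.seed(1)
GENES, POP, GENERATIONS = 20, 30, 50

def fitness(sol):              # toy objective: number of ones in the bit string
    return sum(sol)

def roulette(pop, fits):       # selection probability proportional to fitness
    return random.choices(pop, weights=fits, k=1)[0]

def crossover(a, b):           # one-point crossover combining two parents
    cut = random.randint(1, GENES - 1)
    return a[:cut] + b[cut:]

def mutate(sol, rate=0.05):    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in sol]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    fits = [fitness(s) for s in population]
    elite = max(population, key=fitness)        # elite replacement: keep the best as-is
    children = [mutate(crossover(roulette(population, fits),
                                 roulette(population, fits)))
                for _ in range(POP - 1)]
    population = [elite] + children

print(fitness(max(population, key=fitness)))    # close to the optimum of 20
```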
5 Method
This chapter aims to create a simple model for predicting the SDGI based on ANNs. To this end, a reverse pyramid method was used, following six steps:
• Step 1: data gathering from the seven related global reports
• Step 2: data cleaning
• Step 3: handling missing values
• Step 4: handling collinear indicators
• Step 5: removing ineffective indicators
• Step 6: finding the best combination of indicators
The research activities were conducted by following these steps; the details of each step are presented in the following subsections.
• Step 1: data gathering
Official and open-source reports are needed to create a pool of indicators, and the best sources of indicators and their values are global reports. Table 1 lists the reports used to form the required indicator pool. The underlying logic for selecting the reports is their relationship to the triple bottom line of SD. Each report is expected to reflect at least one of the sustainable development pillars; for example, the EPI is related to the environmental pillar, HDI, PF, and SPI are more related to the social pillar, and EF and DB refer to the economic pillar. It is assumed that the GII can be related to all pillars. Owing to the research process, the results cannot be negatively affected even if these assumptions are not correct. The way also remains open for further research based on other or additional reports.
• Step 2: data cleaning
The reports generally provide information based on a hierarchical structure of variables, compacting operational indicators to create high-level ones. Based on the goal of this research, the operational indicators were collected from each report.
Table 1 Selected reports for data extraction (report: source)
Environmental Performance Index (EPI): https://epi.yale.edu/downloads/epi2020report20210112.pdf
Human Development Index (HDI): http://hdr.undp.org/en/2020-report
Personal Freedom (PF): www.cato.org/human-freedom-index/2020
Social Progress Index (SPI): www.socialprogress.org/index/global/results
Economic Freedom (EF): www.heritage.org/index/download
Doing Business (DB): www.doingbusiness.org/en/reports/global-reports/doing-business-2020
Global Innovation Index (GII): www.globalinnovationindex.org/analysis-indicator
In sum, 288 indicators were extracted from the reports; Table 2 shows the number of indicators extracted from each one. One operational indicator in the GII reflects the overall result of the EPI; to keep the indicators more homogeneous, this indicator was removed from the list. Also, only 127 countries were covered by all of the mentioned reports; therefore, only their information was extracted from the 2020 editions of the reports and organized into a database.
• Step 3: handling missing values
Approximately 1 percent of the database was not filled because of missing information in the reports; in other words, the database contained missing values. Using the global closest fit approach, the missing values of a country were replaced with the values of the most similar country, where similarity is measured by the Manhattan distance criterion:

$d_{ij} = \sum_{k \in S} \left| c_{ik} - c_{jk} \right|$
where i and j denote two countries, S represents the set of indicators that are non-missing in both countries i and j, and c_k denotes the kth indicator. All missing values were filled in using this method. Finding the most similar country for a country with a missing value was a repetitive process: after each missing value was filled, the most similar country for the next missing value was found based on the sum of the Manhattan distances between that country and all other countries. The country with the minimum sum of distances was taken as the most similar one, and the missing value was filled with that country's indicator value.
• Step 4: removing collinear indicators
The variance inflation factor (VIF) is a measure for finding collinear variables. Based on Algorithm 1, the indicators with higher VIF are removed iteratively, step by step. The remaining indicators have low VIF values and are therefore not collinear.

Algorithm 1: Removing Collinear Indicators
1: Input data of 288 indicators
2: Calculate the VIF of each indicator
3: While max(VIF) ≥ 5
4:   Remove the vector of the indicator with maximum VIF
5:   Recalculate the VIF of each indicator
6: End
7: Show remaining indicators
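A minimal sketch of Algorithm 1's iterative filtering is shown here (illustrative only, not the author's code); it assumes the 288 indicators are columns of a pandas DataFrame and relies on statsmodels' variance_inflation_factor, which implements the VIF formula defined next.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def remove_collinear(df: pd.DataFrame, threshold: float = 5.0) -> pd.DataFrame:
    """Iteratively drop the indicator with the largest VIF until max(VIF) < threshold."""
    data = df.copy()
    while True:
        vifs = pd.Series(
            [variance_inflation_factor(data.values, i) for i in range(data.shape[1])],
            index=data.columns,
        )
        if vifs.max() < threshold:
            return data
        data = data.drop(columns=[vifs.idxmax()])   # remove the most collinear indicator

# indicators = pd.read_csv("indicators_2020.csv")   # hypothetical 127 x 288 table
# kept = remove_collinear(indicators)               # expected to retain ~153 columns
```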
VIF is computed using the following formula:

$\mathrm{VIF}_i = \dfrac{1}{1 - R_i^2}$
Table 2 Number of operational indicators extracted from each report
Environmental Performance Index (EPI): 32
Human Development Index (HDI): 4
Personal Freedom (PF): 34
Social Progress Index (SPI): 50
Economic Freedom (EF): 42
Doing Business (DB): 47
Global Innovation Index (GII): 79
Total: 288
where i denotes a selected indicator and $R_i^2$ represents the coefficient of determination for indicator i. A higher VIF value indicates stronger collinearity. As Larose (2015) acknowledges, a VIF of 5 or more indicates moderate collinearity. Therefore, to avoid collinearity, we can remove the indicators with a VIF greater than 5, as described in Algorithm 1. Applying the Algorithm identified 135 collinear indicators, so the total number of remaining indicators decreased from 288 to 153.
• Step 5: removing ineffective indicators
Some indicators are not effective for predicting the SDGI. Therefore, only indicators that can play an essential role in predicting the SDGI, by improving the performance of the ANNs, should be used as input variables. Finding the best subset of indicators in this research is an instance of a well-known problem in the literature named "feature selection" or "variable selection." There are several methods for producing solutions to the variable selection problem, but De et al. (1997) propose an ANN-based method that uses the feature quality index (FQI) as a criterion for ranking variables. The underlying logic of the method is attractive and straightforward: if a variable is not essential, removing it must not harm the result of the network. In other words, if the presence of a variable does not result in better performance, the variable is ineffective and must be removed. Algorithm 2 was designed based on this logic. It compares the mean
square error (MSE) of an ANN's output when a specific variable is present with the MSE when its values are replaced by a vector of zeros. To remove all ineffective variables, Algorithm 2 was run repeatedly, with the input indicators of each run being those remaining from the previous run. Figure 4 shows the results of ten runs of the Algorithm. In total, 63 ineffective indicators were found, so the number of final indicators decreased from 153 to 90.

Algorithm 2: Pseudocode of Removing Ineffective Indicators
1: Final_List = Ø
2: For i = 1 to 300
3:   List = {all remaining indicators}
4:   While List keeps changing do:
5:     Randomly partition the indicators into sub-sets of 20 indicators each
6:     For each sub-set
7:       Run ANN and save MSE
8:       For j = 1 to 20
9:         Put a vector of zeros in place of indicator j in the sub-set
10:        Run ANN and save MSE_without_j
11:        If MSE_without_j ≤ MSE
12:          Remove indicator j from List
13:        End
14:      End
15:    End
16:  End
17:  Add List to Final_List and make it unique
18: End
19: Remove duplicates from Final_List

Fig. 4 Reduction of indicators using Algorithm 2
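An illustrative reconstruction of the core test in Algorithm 2 is sketched below (not the author's implementation; the scikit-learn regressor and network size are assumptions): the network is scored with each indicator present and with its column replaced by zeros, and indicators whose removal does not increase the MSE are flagged as ineffective.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def ineffective_indicators(X: np.ndarray, y: np.ndarray, seed: int = 0) -> list:
    """Return column indices whose zeroing-out does not worsen the MSE."""
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
    model.fit(X, y)
    baseline = mean_squared_error(y, model.predict(X))
    flagged = []
    for j in range(X.shape[1]):
        X_zero = X.copy()
        X_zero[:, j] = 0.0                     # replace indicator j with a vector of zeros
        mse_without_j = mean_squared_error(y, model.predict(X_zero))
        if mse_without_j <= baseline:          # no loss without the indicator -> ineffective
            flagged.append(j)
    return flagged

# X: 127 countries x remaining indicators, y: SDGI values (hypothetical arrays)
# print(ineffective_indicators(X, y))
```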
• Step 6: finding the best combination of indicators
Although 90 indicators are effective for predicting the SDGI, a simple prediction system must have a small number of input variables while still being able to predict the target values with a reasonable error. Therefore, it is necessary to select a subset of indicators to serve as the inputs of the ANNs. A simple ANN design is expected to have a limited number of nodes; in this research, the limit is set to 20, that is, the number of nodes in the ANN is equal to or less than 20. Testing ANNs with 1 to 20 nodes can help decide on the best number of nodes. This implies that combinations of 1 to 20 indicators chosen from the 90 indicators must be tested, and the total number of such combinations is more than 7 × 10^19. This number is huge, and testing all combinations is an energy- and time-consuming activity, while a good local solution may meet the need. Therefore, instead of testing all combinations, a genetic algorithm (GA) is used to find a reasonable solution. The GA is embedded in a repetitive ANN algorithm; Algorithm 3 shows this approach in more detail.

Algorithm 3: Pseudocode of Combination of ANN and GA
1: Input data of 90 indicators of 127 countries
2: Set parameters of GA such as number of generations, selection rate, crossover rate, and mutation rate
3: For N = 1 to 20 //N denotes the number of nodes in the ANN//
4:   Generate a population of set-indicators (each set-indicator consists of N indicators)
5:   For i = 1 to number of generations
6:     For k = 1 to number of population
7:       For r = 1 to 11
8:         Run ANN with N nodes using the kth set-indicator in the population as input
9:         Save RMSE, MAPD, and CorrelCoeff of each ANN in Performance(r)
10:      End
11:      P(k) = median(Performance)
12:    End
13:    Sort the population by RMSE in P and save the Best set-indicator
14:    Apply selection operator to form a part of new_population
15:    Apply crossover operator to form another part of new_population
16:    Apply mutation operator to form the final part of new_population
17:    population = new_population
18:  End
19:  Show and save the Best set-indicator and related Performance for Node = N
20: End
Fig. 5 Convergence plot of GA
Fig. 6 Performance of ANN with different number of nodes
The GA used in this research encompasses 200 generations with 50 solutions in each generation. The elite replacement, crossover, and mutation rates are set to 0.1, 0.5, and 0.4, respectively. The fitness function is the root mean square error (RMSE) of the related ANN. To ensure the robustness of the algorithm's output, the ANN was run 11 times, and the median of the RMSEs was taken as the value of the fitness
function. The selection operator was the roulette wheel, and the crossover method was the random respectful technique. For mutation, three approaches were designed: (1) random selection from unused indicators in a selected solution, (2) random selection from indicators that have not yet appeared in the current solutions, and (3) randomly replacing an indicator in the current selection with a new one. Figure 5 shows the convergence plot of the GA for an ANN with four nodes (indicators); for simplicity, the plot is limited to 50 iterations. The result of running Algorithm 3 is shown in Fig. 6. The figure reveals that with only four nodes, the correlation between the predicted and real SDGI is more than 0.95, and on average, there is less than 3% error in predicting the SDGI of each country.
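To make the fitness evaluation concrete, here is an illustrative sketch (not the authors' code) of scoring one candidate indicator subset by the median RMSE over 11 ANN runs; the use of scikit-learn's MLPRegressor and the hidden-layer size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def subset_fitness(X, y, subset, n_nodes, n_runs=11):
    """Median RMSE over several ANN runs for one candidate indicator subset."""
    rmses = []
    for r in range(n_runs):
        ann = MLPRegressor(hidden_layer_sizes=(n_nodes,), max_iter=2000, random_state=r)
        ann.fit(X[:, subset], y)
        rmses.append(np.sqrt(mean_squared_error(y, ann.predict(X[:, subset]))))
    return float(np.median(rmses))              # lower is better; used as the GA fitness

# Example: score a 4-indicator candidate against the SDGI target (hypothetical data).
# print(subset_fitness(X, y, subset=[3, 17, 42, 80], n_nodes=4))
```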
6 Results
The results revealed that among the 288 indicators extracted from the selected global reports, just 90 indicators are helpful for predicting the SDGI using ANNs. Although more indicators provide better predictions, to preserve simplicity, an ANN with four nodes in one hidden layer can predict the SDGI with high accuracy. In this ANN, each node is related to one indicator. The most suitable indicators for predicting the SDGI are "Deaths from infectious diseases," "ICT use," "Expenditure on education," and "Assessment in reading, mathematics, and science." Using these indicators, the ANN can forecast the SDGI with a mean absolute percentage deviation (MAPD) of 2.9126%, an RMSE of 2.4763, and a correlation of 0.9592 between the predicted and actual SDGI values. These results show that the designed ANN is a successful predictor of the SDGI. Other combinations of the indicators are also able to predict the SDGI; Table 3 presents some of these combinations. Although a higher number of nodes produces better performance, the complexity of the ANN also increases as more nodes are added. Table 3 shows that many indicators belong to the Global Innovation Index report, which implies the role of innovation in facilitating movement toward the SDGs and increasing countries' SDGI values. Another striking fact in the table is the scarcity of indicators from the EPI, which reports environmental status. If the SDGI can be predicted without indicators from the environmental aspect, this may point to a bias in the SDGI. The bias may arise from insufficient attention to environmental goals in calculating the SDGI or from environmental issues being undermined in favor of social or economic ones. This is an interesting topic for further research.
Table 3 Input(s) and performance of ANN (indicator sources in parentheses)
1 node: ICT use (GII). RMSE 4.006, MAPD 4.5694, correlation 0.8860
2 nodes: Deaths from infectious diseases (SPI); GERD performed by business enterprise (GII). RMSE 3.2868, MAPD 3.8320, correlation 0.9253
3 nodes: Deaths from infectious diseases (SPI); ICT use (GII); Assessment in reading, mathematics, and science (GII). RMSE 2.7884, MAPD 3.2222, correlation 0.9469
4 nodes: Deaths from infectious diseases (SPI); ICT use (GII); Expenditure on education (GII); Assessment in reading, mathematics, and science (GII). RMSE 2.4761, MAPD 2.9126, correlation 0.9592
5 nodes: Deaths from infectious diseases (SPI); ICT use (GII); Expenditure on education (GII); Assessment in reading, mathematics, and science (GII); Judicial independence (EF). RMSE 2.3176, MAPD 2.6337, correlation 0.9638
6 nodes: Deaths from infectious diseases (SPI); ICT use (GII); Expenditure on education (GII); Assessment in reading, mathematics, and science (GII); Venture capital deals (GII); Utility model applications by origin (GII). RMSE 2.1393, MAPD 2.5092, correlation 0.9689
7 nodes: Deaths from infectious diseases (SPI); ICT use (GII); Expenditure on education (GII); Assessment in reading, mathematics, and science (GII); Venture capital deals (GII); SNM.new (EPI); Hiring and firing regulations (EF). RMSE 2.1757, MAPD 2.4433, correlation 0.9680
8 nodes: Deaths from infectious diseases (SPI); ICT use (GII); Child stunting (SPI); Expenditure on education (GII); Assessment in reading, mathematics, and science (GII); Patent applications by origin (GII); ISO 14001 environmental certificates (GII); ICT services imports (GII). RMSE 2.1170, MAPD 2.2160, correlation 0.9699
9 nodes: Deaths from infectious diseases (SPI); ICT use (GII); Child stunting (SPI); Expenditure on education (GII); Assessment in reading, mathematics, and science (GII); Government investment (EF); Women's Movement (PF); Political and operational stability (GII); FGT.new (EPI). RMSE 2.0407, MAPD 2.1835, correlation 0.9729
10 nodes: Deaths from infectious diseases (SPI); ICT use (GII); Expenditure on education (GII); Assessment in reading, mathematics, and science (GII); Women with advanced education (SPI); ISO 9001 quality certificates (GII); ICT services imports (GII); Access to foreign newspapers (PF); Paying taxes-time (hours) (DB); Employment in knowledge-intensive services (GII). RMSE 1.9536, MAPD 2.0590, correlation 0.9756
7 Conclusions
This chapter explored seven global indexes: the Environmental Performance Index (EPI), Doing Business (DB), Global Innovation Index (GII), Economic Freedom (EF), Personal Freedom (PF), Social Progress Index (SPI), and Human Development Index (HDI). The indexes provide 288 operational indicators covering the social, economic, and environmental aspects of 127 countries. The collinear and ineffective indicators were removed in two separate steps. From the 90 remaining indicators, artificial neural networks (ANNs) could yield outstanding results using just a combination of four indicators, namely "Deaths from infectious diseases," "ICT use," "Expenditure on education," and "Assessment in reading, mathematics, and science." The designed ANN creates a simple model for predicting the Sustainable Development Goals Index (SDGI) and avoids the complicated computation of many indicators. This research also uncovered two facts about the SDGI. First, GII indicators play a prominent role in predicting the SDGI. This finding supports the role of innovation in meeting the SDGs and suggests searching for solutions to sustainable development problems through innovation. Second, the role of environmental indicators in calculating the SDGI is negligible. Because the SDGI can be predicted while ignoring environmental indicators, either the SDGI does not rely on environmental indicators, or
the role of the other aspects is stronger than that of the environmental aspect. Clarifying this bias in the SDGI needs more research. This research also opens the door to using other global reports and indicators to develop alternative prediction systems for the SDGI to measure progress toward the SDGs.
References Abiodun, Oludare Isaac, Aman Jantan, Abiodun Esther Omolara, Kemi Victoria Dada, Nachaat AbdElatif Mohamed, and Humaira Arshad. 2018. State-of-the-Art in Artificial Neural Network Applications: A Survey. Heliyon 4 (11): e00938. https://doi.org/10.1016/j.heliyon.2018.e00938. Aggarwal, Charu C. 2018. Neural Networks and Deep Learning. Springer 10: 978–973. Alaimo, Leonardo Salvatore, Alberto Arcagni, Marco Fattore, and Filomena Maggino. 2021. Synthesis of Multi-indicator System Over Time: A Poset-based Approach. Social Indicators Research 157 (1): 77–99. https://doi.org/10.1007/s11205-020-02398-5. Allen, Cameron, Graciela Metternicht, and Thomas Wiedmann. 2019. Prioritising SDG Targets: Assessing Baselines, Gaps and Interlinkages. Sustainability Science 14 (2): 421–438. https:// doi.org/10.1007/s11625-018-0596-8. Alweshah, Mohammed, Saleh Al Khalaileh, Brij B. Gupta, Ammar Almomani, Abdelaziz I. Hammouri, and Mohammed Azmi Al-Betar. 2020. The Monarch Butterfly Optimization Algorithm for Solving Feature Selection Problems. Neural Computing and Applications. https://doi.org/10.1007/s00521-020-05210-0. Antanasijević, Davor Z., Viktor V. Pocajt, Dragan S. Povrenović, Mirjana Đ. Ristić, and Aleksandra A. Perić-Grujić. 2013. PM10 Emission Forecasting Using Artificial Neural Networks and Genetic Algorithm Input Variable Optimization. Science of the Total Environment 443: 511–519. https://doi.org/10.1016/j.scitotenv.2012.10.110. Badar, Altaf QH. 2021. Evolutionary Optimization Algorithms. Bali Swain, R., and F. Yang-Wallentin. 2020. Achieving Sustainable Development Goals: Predicaments and Strategies. International Journal of Sustainable Development & World Ecology 27 (2): 96–106. https://doi.org/10.1080/13504509.2019.1692316. Barbier, Edward B., and Joanne C. Burgess. 2019. Sustainable Development Goal Indicators: Analyzing Trade-Offs and Complementarities. World Development 122: 295–305. https://doi. org/10.1016/j.worlddev.2019.05.026. Berglund, Teresa, and Niklas Gericke. 2016. Separated and Integrated Perspectives on Environmental, Economic, and Social Dimensions – An Investigation of Student Views on Sustainable Development. Environmental Education Research 22 (8): 1115–1138. https://doi. org/10.1080/13504622.2015.1063589. Borah, Pallabi, Hasin A. Ahmed, and Dhruba K. Bhattacharyya. 2014. A Statistical Feature Selection Technique. Network Modeling Analysis in Health Informatics and Bioinformatics 3 (1): 55. https://doi.org/10.1007/s13721-014-0055-0. Brusseau, M.L. 2019. Chapter 32 – Sustainable Development and Other Solutions to Pollution and Global Change. In Environmental and Pollution Science, ed. Mark L. Brusseau, Ian L. Pepper, and Charles P. Gerba, 3rd ed., 585–603. Academic Press. Chakraborty, Basabi. 1999. Feature Selection by Artificial Neural Network for Pattern Classification. In Pattern Recognition in Soft Computing Paradigm, 95–109. Collins, Gary S., and Karel G.M. Moons. 2019. Reporting of Artificial Intelligence Prediction Models. The Lancet 393 (10181): 1577–1579. https://doi.org/10.1016/S0140-6736(19)30037-6. Costanza, Robert, Lew Daly, Lorenzo Fioramonti, Enrico Giovannini, Ida Kubiszewski, Lars Fogh Mortensen, Kate E. Pickett, Kristin Vala Ragnarsdottir, Roberto De Vogli, and Richard Wilkinson. 2016. Modelling and Measuring Sustainable Wellbeing in Connection With the UN Sustainable Development Goals. Ecological Economics 130: 350–355. https://doi. org/10.1016/j.ecolecon.2016.07.009.
De, Rajat K., Nikhil R. Pal, and Sankar K. Pal. 1997. Feature Analysis: Neural Network and Fuzzy Set Theoretic Approaches. Pattern Recognition 30 (10): 1579–1590. https://doi.org/10.1016/ S0031-3203(96)00190-2. Ding, Shifei, Hui Li, Su Chunyang, Yu Junzhao, and Fengxiang Jin. 2013. Evolutionary Artificial Neural Networks: A Review. Artificial Intelligence Review 39 (3): 251–260. https://doi. org/10.1007/s10462-011-9270-6. Elliott, Jennifer. 2012. An Introduction to Sustainable Development. Routledge. Emmanuel, Okewu, M. Ananya, Sanjay Misra, and Murat Koyuncu. 2020. A Deep Neural Network-Based Advisory Framework for Attainment of Sustainable Development Goals 1-6. Sustainability 12 (24): 10524. https://doi.org/10.3390/su122410524. Floridi, Luciano. 2002. Philosophy and computing: An introduction. Routledge. George, Edward I. 2000. The Variable Selection Problem. Journal of the American Statistical Association 95 (452): 1304–1308. https://doi.org/10.1080/01621459.2000.10474336. Gnana, D. Asir, S. Appavu Antony, Alias Balamurugan, E. Jebamalar, and Leavline. 2016. Literature Review on Feature Selection Methods for High-Dimensional Data. International Journal of Computer Applications 136 (1): 9–17. Gue, Ivan Henderson V., Aristotle T. Ubando, Ming-Lang Tseng, and Raymond R. Tan. 2020. Artificial Neural Networks for Sustainable Development: A Critical Review. Clean Technologies and Environmental Policy 22 (7): 1449–1465. https://doi.org/10.1007/s10098-020-01883-2. Hagan, M.T., H. Demuth, M. Beale, and O. De Jesus. 2016. Neural Network Design. 2nd ed. Lexington. Hák, Tomáš, Svatava Janoušková, and Bedřich Moldan. 2016. Sustainable Development Goals: A Need for Relevant Indicators. Ecological Indicators 60: 565–573. https://doi.org/10.1016/j. ecolind.2015.08.003. Horan, David. 2020. National Baselines for Integrated Implementation of an Environmental Sustainable Development Goal Assessed in a New Integrated SDG Index. Sustainability 12 (17): 6955. https://doi.org/10.3390/su12176955. Iddrisu, Insah, and Subhes C. Bhattacharyya. 2015. Sustainable Energy Development Index: A Multi-Dimensional Indicator For Measuring Sustainable Energy Development. Renewable and Sustainable Energy Reviews 50: 513–530. https://doi.org/10.1016/j.rser.2015.05.032. Katoch, Sourabh, Sumit Singh Chauhan, and Vijay Kumar. 2021. A Review on Genetic Algorithm: Past, Present, and Future. Multimedia Tools and Applications 80 (5): 8091–8126. https://doi. org/10.1007/s11042-020-10139-6. Keeble, Brian R. 1988. The Brundtland Report: ‘Our Common Future’. Medicine and War 4 (1): 17–25. https://doi.org/10.1080/07488008808408783. Kramer, Oliver. 2017. Genetic Algorithms. In Genetic Algorithm Essentials, 11–19. Springer. Kroll, Christian. 2015. Sustainable Development Goals: Are the rich countries ready? Bertelsmann Stiftung Gütersloh, Germany. Larose, Daniel T. 2015. Data Mining and Predictive Analytics. Wiley. Leal Filho, Walter. 2020. Viewpoint: Accelerating the Implementation of the SDGs. International Journal of Sustainability in Higher Education 21 (3): 507–511. https://doi.org/10.1108/ IJSHE-01-2020-0011. Mirjalili, Seyedali. 2019. Genetic Algorithm. In Evolutionary Algorithms and Neural Networks: Theory and Applications, 43–55. Cham: Springer. Mishra, M., and M. Srivastava. 2014. A View of Artificial Neural Network. 2014 International Conference on Advances in Engineering & Technology Research (ICAETR – 2014), 1–2 Aug. 2014. Moriarity, Sean. 2021. Genetic Algorithms in Elixir: Solve Problems Using Evolution. 
Pragmatic Bookshelf. Okwu, Modestus O., and Lagouge K. Tartibu. 2021. Artificial Neural Network. In Metaheuristic Optimization: Nature-Inspired Algorithms Swarm and Computational Intelligence, Theory and Applications, 133–145. Cham: Springer International Publishing.
Omri, Anis. 2020. Technological Innovation and Sustainable Development : Does the Stage of Development Matter? Environmental Impact Assessment Review 83: 106398. https://doi. org/10.1016/j.eiar.2020.106398. Paliwal, Mukta, and Usha A. Kumar. 2009. Neural Networks and Statistical Techniques: A Review of Applications. Expert Systems with Applications 36 (1): 2–17. https://doi.org/10.1016/j. eswa.2007.10.005. Reyers, Belinda, Mark Stafford-Smith, Karl-Heinz Erb, Robert J. Scholes, and Odirilwe Selomane. 2017. Essential Variables help to focus Sustainable Development Goals monitoring. Current Opinion in Environmental Sustainability 26-27: 97–105. https://doi.org/10.1016/j. cosust.2017.05.003. Ruiz-Morales, Betzabe, Irma Cristina Espitia-Moreno, Victor G. Alfaro-Garcia, and Ernesto Leon-Castro. 2021. Sustainable Development Goals Analysis with Ordered Weighted Average Operators. Sustainability 13 (9): 5240. https://doi.org/10.3390/su13095240. Russell, Stuart, and Peter Norvig. 2021. Artificial Intelligence: A Modern Approach. 4th ed. Pearson. Sachs, J.D., and J.W. McArthur. 2005. The Millennium Project: A Plan For Meeting the Millennium Development Goals. The Lancet 365 (9456): 347–353. https://doi.org/10.1016/ S0140-6736(05)17791-5. Sachs, Jeffrey D, Guido Schmidt-Traub, Christian Kroll, Guillaume Lafortune, Grayson Fuller, and Finn Woelm. 2020. Sustainable Development Report 2020. Sachs, Jeffrey, Guido Schmidt-Traub, Christian Kroll, David Durand-Delacre, and Katerina Teksoz. 2017. SDG Index and Dashboards Report 2017. New York: Bertelsmann Stiftung and Sustainable Development Solutions Network (SDSN). Schmidt-Traub, Guido, Christian Kroll, Katerina Teksoz, David Durand-Delacre, and Jeffrey D. Sachs. 2017. National baselines for the Sustainable Development Goals assessed in the SDG Index and Dashboards. Nature Geoscience 10 (8): 547–555. https://doi.org/10.1038/ngeo2985. Shaker, Richard Ross. 2018. A Mega-Index for the Americas and Its Underlying Sustainable Development Correlations. Ecological Indicators 89: 466–479. https://doi.org/10.1016/j. ecolind.2018.01.050. Sivanandam, S.N., and S.N. Deepa. 2008. Genetic Algorithms. In Introduction to Genetic Algorithms, 15–37. Berlin, Heidelberg: Springer Berlin Heidelberg. ———. 2006. Introduction to Neural Networks Using Matlab 6.0. Tata McGraw-Hill Education. Strezov, Vladimir, Annette Evans, and Tim J. Evans. 2017. Assessment of the Economic, Social and Environmental Dimensions of the Indicators for Sustainable Development. Sustainable Development 25 (3): 242–253. https://doi.org/10.1002/sd.1649. Umbarkar, Anant J., and Pranali D. Sheth. 2015. Crossover Operators in Genetic Algorithms: A Review. ICTACT Journal on Soft Computing 6 (1). UN. 2021. Envision2030: 17 Goals to Transform the World for Persons with Disabilities. Accessed 13 Oct 2021. https://www.un.org/development/desa/disabilities/envision2030.html. Wu, Yu-chen, and Jun-wen Feng. 2018. Development and Application of Artificial Neural Network. Wireless Personal Communications 102 (2): 1645–1656. https://doi.org/10.1007/ s11277-017-5224-x. Xu, Zhenci, Sophia N. Chau, Xiuzhi Chen, Jian Zhang, Yingjie Li, Thomas Dietz, Jinyan Wang, Julie A. Winkler, Fan Fan, Baorong Huang, Shuxin Li, Wu Shaohua, Anna Herzberger, Ying Tang, Dequ Hong, Yunkai Li, and Jianguo Liu. 2020. Assessing Progress Towards Sustainable Development Over Space and Time. Nature 577 (7788): 74–78. https://doi.org/10.1038/ s41586-019-1846-3.
Sailing the Data Sea to Advance Research on the Sustainable Development Goals Andy Spezzatti, Elham Kheradmand, Kartik Gupta, Marie Peras, and Roxaneh Zaminpeyma
Abstract The Sustainable Development Goals (SDGs) are the framework adopted by the global community to encourage taking actions on the multiple challenges facing the world today to ensure environmental protection, health and well-being, and economic prosperity. This framework provides a detailed list of indicators that are interconnected and cover a holistic view on sustainable development. The goals were defined by the United Nations General Assembly in 2015 and expected to be achieved by 2030. Since the release of this agenda, the research community has begun to intensify work in these areas, yet these efforts seem to be relatively limited. This is especially true about the employment of data and artificial intelligence (AI), which are not widely engaged in SDG-related topics. The AI-based research on SDGs and further developments depends heavily on the availability and accessibility of related real-world data collected by the community. However, there is no central, structured, and holistic database of datasets and metadata associated with the SDGs, which prevents large-scale collaboration on these topics. In this paper, we present the SDG Data Catalog, a global open-source database indexing SDG-related datasets, associated metadata, and research networks. We describe the construction of this catalog, which relies on state-of-the-art natural language processing models with human supervision. The catalog breaks down data silos and helps sustainability researchers navigate the data sea to initiate effective collaborations.

Keywords Natural language processing · Open data · Sustainable development · Natural language understanding · Named entity recognition

A. Spezzatti (*) AI for Good Foundation, Geneva, Switzerland e-mail: [email protected]
E. Kheradmand University of Montreal, Montreal, Canada e-mail: [email protected]
K. Gupta University of Western Ontario, London, Canada e-mail: [email protected]
M. Peras AgroParisTech, Paris, France e-mail: [email protected]
R. Zaminpeyma McGill University, Montreal, Canada e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_24
1 Introduction
In the summer of 2021, the Intergovernmental Panel on Climate Change (IPCC) released its latest report on climate change (Masson-Delmotte et al. 2021). It concludes with a high degree of confidence that human influences are causally connected to recent warming observations and climate variability. It also provides a good description of what we can expect over the next few decades if current trends continue: rising sea levels, more frequent floods and droughts, and decreasing Arctic ice. The report demonstrates the need to change the way our society operates and the urgency of curbing emissions and mitigating the negative outcomes that are currently projected. It also poses a great challenge to the world, as many of these changes may not be reversible and may occur sooner than expected; hence, there is an urgent need for global collaboration on sustainable development. Meanwhile, the adoption of the Sustainable Development Goals (SDGs) by the United Nations in 2015 caused significant growth in the number of scientific publications on sustainable development. As a result, more than 31% of research outputs on this topic over the period 2000 to 2017 were published after 2015 (Bautista-Puig et al. 2021). The United Nations' 17 SDGs are implemented and organized into different categories to make progress toward global sustainable plans, policies, and futures.1 All SDG categories tackle world issues such as ending poverty, ensuring gender equality, and combating climate change, and they all work in conjunction with the other SDGs. Artificial intelligence (AI) has been a growing research field over the last decades, impacting and shaping an increasing number of industries, and is actively used by the global research community. According to the Artificial Intelligence Index Report 2021 (Zhang et al. 2021), the number of AI publications grew 34.5% from 2019 to 2020 (vs 19.6% the year before) (Fig. 1). The trend is visible in every region of the world, and in 2020 the COVID-19 crisis demonstrated how the AI research community can be mobilized around an important topic. Despite these trends, the AI community contributes only marginally to sustainable development research, and of the scientific papers from several decades referenced in the IPCC report, very few used AI infrastructure.
1 The 17 GOALS | Sustainable Development. Retrieved October 27, 2021, from https://sdgs.un.org/goals
Fig. 1 Percent of ArXiv publications related to artificial intelligence, between 2004 and 2018. (Zhang et al. 2021)
Using a consensus-based expert elicitation process, a group of researchers evaluated the impact AI can have on all 17 global goals (Vinuesa et al. 2020). Their conclusion shows that for a majority of the 169 targets (79%), AI may act as an enabler, while a smaller number of them (35%) may also experience a negative impact from the development of AI. For some targets, AI can be both beneficial and detrimental, as is the case for Target 1.1,2 for which AI will help better identify places of poverty but, at the same time, may automate some low-skilled jobs and increase existing inequalities. Although AI cannot solve all problems and poses certain risks and challenges, if accompanied by a set of common principles and regulations, it could substantially aid in achieving the SDGs and transform our capacity to counteract negative patterns that may become irreversible without prompt action. Nowadays, a major barrier to cross-industry collaboration is the lack of easy access to many datasets that are essential for solving society's current challenges. In the research community, making data publicly available is appreciated, yet it is not common practice. The availability of data not only helps other researchers create new AI models on the datasets but also gives them a reference for comparing models on the same datasets against the literature. Researchers may have incentives not to disclose their datasets, for two main reasons: the data may be a strategic advantage for them, or they may fear having the quality of their work questioned. However, even if datasets are not accessible, simply knowing that they exist would help researchers initiate collaborations with the owners of those datasets. Some publicly available datasets tend to be generic or only samples that
are not representative of the actual research fields. In general, most published AI-related research in major conferences uses common datasets, such as ImageNet,3 CONLL 2003 (Sang and De Meulder 2003), or Wikibooks.4 Therefore, it is crucial to bring more awareness to the AI community about datasets, whether they are publicly available or not, especially in the sustainability domain. Our work builds on the open data movement, whose goal is to make data visible, accessible, and usable.5 Open data will help to unlock the value of the enormous amount of information collected and stored around the world. As information becomes increasingly dispersed and voluminous, it is tedious for researchers to identify relevant data. The 2020 Open Data Report shows that progress is being made, with researchers more aware of the FAIR principles (Findability, Accessibility, Interoperability, and Reuse) and more willing to make data sharing a priority today than they were 3 years ago (Khodiyar et al. 2021). Preliminary data showed that reuse of research data is increasing from pre-COVID-19 levels. The goal of this work is to create a system that automatically identifies, collects, and describes datasets that are relevant to the global goals, to do so at scale, and to support researchers in accessing this information. While there are a few platforms aggregating sustainability-related datasets, like the Humanitarian Data Exchange,6 their coverage remains limited, with certain goals missing, and the important context and usability assessment is often also missing. There are also other, broader platforms that index datasets not limited to sustainability domains. This is the case of the Papers with Code7 platform, which references more than 5 K datasets that have been used in AI research papers, as well as Google Dataset Search. The first covers only several goals but misses many important topics related to the SDGs, such as poverty or hunger, which do not produce any results. The second platform is more comprehensive, but the datasets are not clearly linked to the global goals and targets, and the impact and influence of the referenced datasets are often unclear. In order to obtain comprehensive coverage of datasets used in sustainability research and to retrieve metadata and contextual information, we decided to extract information directly from published raw research papers, based on the assumption that, in order to be published, a paper must provide a sufficient level of dataset description. The SDG Data Catalog is an open, extensible, global database containing dataset names, metadata, and research networks related to the SDGs (Hodson and Spezzatti 2021). The catalog will be open-sourced, to be accessible by the research community in order to encourage collaboration across domains and disciplines. The catalog indexes datasets that are directly mentioned in research publications. In order to structure and classify the knowledge gathered about the datasets, we use metadata information from the research articles and datasets. Some of this metadata
3 https://www.image-net.org/
4 https://en.wikibooks.org/
5 The State of Open Data 2020, Digital Science Report.
6 The Humanitarian Data Exchange, https://data.humdata.org/
7 Papers with Code, https://paperswithcode.com/
includes the impact of the publication, research topics, methods used, and information about the results and conclusions of the published work using the data. In this way, we can inform researchers about trends in dataset usage and the current major research questions being addressed using a particular dataset. With the SDG Data Catalog, we strive to provide a tool that helps bridge the gap between SDG experts and the rest of the research community. AI, and more specifically natural language processing (NLP), facilitates automating the extraction and detection of certain information from large volumes of text. In particular, NLP has a major role in identifying SDGs in text data. For instance, NLP has been utilized to predict whether the business and activities of companies are aligned with the SDGs (Amel-Zadesh et al. 2021); the alignment was identified by analyzing the companies' corporate sustainability reports. In other research, an SDG Social Index was developed by applying NLP to social media text data (Lee and Kim 2021); this index reflects global opinions toward the SDGs. In this paper, we propose an NLP-based methodology to detect dataset names and information in research papers and link them with each specific SDG. In a previous work (Hodson and Spezzatti 2021), the key elements of the pipeline were described, as well as the data acquisition strategy, and early results for the named entity recognition (NER) model were presented. We used NER to identify several entities related to the datasets: names, owner, description, attributes, and samples. We obtained varying performance on these entities, with an 80% F1 score and 72% recall for dataset name identification. In this work, we further develop the SDG Data Catalog and link the datasets with the SDGs. We present new parsing and selection strategies, and show how fine-tuning a pre-trained Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al. 2018) model further improves the performance of the NER model, achieving 91% recall and an 82% F1 score. We also train a bidirectional long/short-term memory (Bi-LSTM) with Conditional Random Field (CRF) model (Huang et al. 2015) with Global Vectors for Word Representation (GLOVE) (Pennington et al. 2014) embeddings that outperforms BERT on precision, with an 88% score. Moreover, we develop a binary classifier to predict the existence of dataset names. Finally, we show preliminary results on dataset classification by SDGs, using a few-shot learning strategy with the OpenAI Generative Pre-trained Transformer 3 (GPT-3) model (Brown et al. 2020), with varying performance across the global goals. These preliminary results demonstrate that, with more data, the model can achieve good performance on paper categorization.
2 Information Extraction Extracting mentions of datasets in research articles is a challenging task because of the variability in the format used by authors. Datasets are not always explicitly cited; sometimes only a short description is included, and when they are, there is no single convention for doing so. For example, the Modified National Institute of
Standards and Technology hand-written digit dataset, a well-known dataset in the computer vision field, is frequently referred to by the acronym MNIST. Similarly, the Global Hunger Index, a dataset published each year by the International Food Policy Research Institute, is frequently referred to by its acronym, for example, the 2022 GHI dataset for this year. Therefore, it is important to make a link between these entities. In our experiments, we observed that several types of datasets can be identified across papers. First, there are datasets with a clearly identified name, which means that they are referenced in the paper, usually in a data section. These are typically large-scale datasets that are publicized and can be found in multiple papers (e.g., the 2021 GHI). On the other hand, there are datasets that are specifically created for the purpose of the experiment described in the paper, typically in the form of a survey or scientific measurements. This last category of datasets is the most difficult to identify. Here, we focus on detecting the first category by leveraging AI. Identifying the second category of datasets could be done by applying NER to dataset descriptions, but this methodology is not explored in this paper. The frequency of dataset mentions varies by research area and thus by SDG domain. Some areas rely heavily on quantitative outcomes, such as "decent work and economic growth" (SDG 8) and "good health and well-being" (SDG 3), and it was easier to extract training data from these papers. For other goals, such as "life on land" (SDG 15), "life below water" (SDG 14), or "peace, justice, and strong institutions" (SDG 16), data was much sparser, and we had to specifically extract additional articles in these areas to rebalance our models. A large span of information extraction (IE) research has been published over the last couple of years. Rule-based methodologies have been widely used (Iwai et al. 1989) for metadata extraction. Various machine learning systems have also been proposed for data mining and association mining, including Conditional Random Fields (Shuxin et al. 2013), support vector machines (Kern et al. 2012), and deep learning models (Marinai 2009). Our work leverages several machine learning strategies at different steps of the pipeline in order to extract information and structure it, as shown in Fig. 2.
Fig. 2 SDG data catalog pipeline. The red boxes represent a step where machine learning models are implemented
For the NER task, three model architectures dominate recent research: convolutional neural networks (CNNs), long/short-term memory (LSTM) models, and transformers. The last two have achieved state-of-the-art results on the most common NER datasets, for example, CONLL 2003 and OntoNotes v5. These are the ones we use in our NER experiments.
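As a generic illustration of the transformer approach (using the Hugging Face transformers library; the base checkpoint and the BIO label set shown here are assumptions, not the catalog's exact configuration), a pre-trained BERT encoder can be loaded for token classification and later fine-tuned on the annotated paragraphs:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-DATASET", "I-DATASET"]            # assumed BIO tagging scheme
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels)
)

text = "We evaluate our approach on the 2021 Global Hunger Index dataset."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits                    # shape: (1, seq_len, num_labels)
pred = logits.argmax(-1)[0]

# Before fine-tuning the predictions are essentially random; training the new
# classification head on annotated paragraphs adapts it to dataset-name tagging.
for tok, p in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), pred):
    print(tok, labels[p])
```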
3 Data
For the purpose of this work, we use a corpus of 10 K papers. The papers were collected from online sources using a web scraper that is scalable, lightweight, and copyright-aware. The scraper leverages the data on authors and paper titles from the Open Academic Graph project (Sinha et al. 2015) and uses a web search endpoint to identify instances of PDF files corresponding to a query for the title and authors. The resulting list was reviewed to ensure that it contained only self-published versions of academic works, falling under standard copyright protections and not behind paywalled academic aggregators. In order to have a balanced dataset large enough for classification by SDGs, we extracted an additional sample of 2000 papers. Indeed, the original 10 K papers were unevenly distributed across the SDGs, and a few goals were not present at all; we needed additional data for about half of the SDGs. Our approach was to use another web scraper that specifically targets certain SDGs by searching open-access repositories of PDF papers. We started with the ArXiv portal, which provides access to 2 M scholarly articles. We were eventually limited by the scope of the portal, which specializes in Physics, Mathematics, Computer Science, Finance, Economics, and Electrical Engineering; a few SDG topics, such as gender, justice, poverty, or hunger, were not covered. Therefore, we also used two other portals: CORE, a global aggregation of open-access research literature (Knoth et al. 2012), and the Education Resources Information Center (ERIC) portal. The first aggregates more than 200 M papers on diverse topics and provides an API to extract full-text PDFs automatically using keyword search; however, we were limited by the number of API calls the platform allows per day. The ERIC portal provides access to 1.5 M publications focused on education research. Using keyword searches in these portals, the top 100 valid PDF results of each search are downloaded and saved to our second corpus. The success rate of the extraction process was 83% for papers that had a downloadable PDF version. After combining the two corpora (12 K papers), the resulting distribution across the goals was improved but still uneven, with a couple of goals still only marginally represented.
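As an illustration of the targeted scraping strategy, the sketch below queries the public arXiv API with feedparser; the keywords are illustrative, and the other portals (CORE, ERIC) would need their own API clients.

```python
import urllib.parse
import feedparser

def arxiv_search(keywords: str, max_results: int = 100) -> list:
    """Return (title, PDF link) pairs for an arXiv keyword query."""
    query = urllib.parse.quote(f'all:"{keywords}"')
    url = (f"http://export.arxiv.org/api/query?search_query={query}"
           f"&start=0&max_results={max_results}")
    feed = feedparser.parse(url)
    results = []
    for entry in feed.entries:
        pdf = next((l.href for l in entry.links if l.type == "application/pdf"), None)
        if pdf:
            results.append((entry.title, pdf))
    return results

# Example query targeting an under-represented goal (illustrative keywords).
# for title, pdf in arxiv_search("gender equality machine learning"):
#     print(title, pdf)
```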
4 Methodology
In the development of the SDG Data Catalog, we first extract paper metadata and body text and split the text into paragraphs. After parsing the papers, in order to link SDGs with dataset names, we define three tasks: (1) binary classification to predict whether a dataset is mentioned, (2) NER to detect the names of datasets, and (3) text classification to predict the SDGs. The binary classifier is trained to identify the existence of dataset mentions in paragraphs and to generate a smaller set of candidate paragraphs for the NER annotations. Using NER models, we aim to extract the names of datasets from the paragraphs. We annotate the generated candidates manually, adding annotations incrementally using an active learning strategy. We identify the SDGs by training models on annotated text data; for this task, we annotate each paper's abstract with the relevant SDGs and train a multi-label text categorization model. In Fig. 2, we present a pipeline visualizing the steps of the SDG Data Catalog development. The training dataset used for binary classification is unbalanced, with negative examples significantly outnumbering the positive ones; in the original data, less than 10% of the paragraphs contain a mentioned dataset. To get around this problem, we add an under-sampling layer that rebalances our dataset. After under-sampling, the resulting proportion of paragraphs with no mention is 56.8%.
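The under-sampling layer can be sketched as follows; this is an illustrative reconstruction with pandas, where the column names and the sampling ratio (chosen so that roughly 57% of the retained paragraphs are negatives) are assumptions rather than the authors' code.

```python
import pandas as pd

def undersample(df: pd.DataFrame, label_col: str = "has_dataset",
                ratio: float = 1.3, seed: int = 42) -> pd.DataFrame:
    """Keep all positive paragraphs and a random subset of the negatives."""
    pos = df[df[label_col] == 1]
    neg = df[df[label_col] == 0].sample(n=int(ratio * len(pos)), random_state=seed)
    return pd.concat([pos, neg]).sample(frac=1, random_state=seed)  # shuffle rows

# paragraphs = pd.read_csv("paragraphs.csv")        # hypothetical labelled paragraphs
# balanced = undersample(paragraphs)                # roughly 56-57% negatives remain
```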
4.1 Parsing Papers

We used the CERMINE Java library (Tkaczyk et al. 2015) to process the full texts and parse the references from the PDF files. CERMINE uses support vector machine classifiers to divide papers into zones and then further classify these zones into various metadata classes. This machine learning-based solution offers great flexibility with respect to paper layouts and extracts a wide variety of metadata, including DOI, affiliation, and year of publication, not always found in other systems such as PDFX (https://pypi.org/project/pdfx/) and GROBID (https://github.com/kermitt2/grobid) (Lopez 2009). These metadata will be critical for designing the catalog and connecting entities in a knowledge graph. Overall, we found that the quality of the extracted text is superior with CERMINE compared to other off-the-shelf Python packages. Using CERMINE, we were able to process 93% of the papers from our initial database. From the resulting XML documents, we extract metadata, including authors, abstracts, titles, affiliations, and DOIs. The body text is broken down into paragraphs. Sentence granularity was considered too short to provide the full context for the model to correctly and efficiently identify dataset references, and section granularity was considered too long; the paragraph is a good compromise to obtain
sequences with sufficient contextual information. Then, to reduce noise in the data and to make the annotation effort more targeted, we select candidate paragraphs using a binary classification model.
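A minimal sketch of this post-processing step is shown below, assuming CERMINE has already produced one JATS-style XML file per paper; the element paths are plausible defaults for such output rather than a definitive mapping, and may need adjusting to the CERMINE version used.

import xml.etree.ElementTree as ET

def extract_metadata_and_paragraphs(xml_path):
    """Pull basic metadata and body paragraphs out of a parsed-paper XML file."""
    root = ET.parse(xml_path).getroot()

    def text_of(xpath):
        node = root.find(xpath)
        return "".join(node.itertext()).strip() if node is not None else None

    metadata = {
        "title": text_of(".//article-title"),
        "abstract": text_of(".//abstract"),
        "doi": text_of(".//article-id[@pub-id-type='doi']"),
        "authors": [
            "".join(n.itertext()).strip()
            for n in root.findall(".//contrib[@contrib-type='author']/string-name")
        ],
    }

    paragraphs = []
    body = root.find(".//body")
    if body is not None:
        for p in body.findall(".//p"):
            text = "".join(p.itertext()).strip()
            if text:
                paragraphs.append(text)
    return metadata, paragraphs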
4.2 Binary Text Classification

The binary classification step is added to generate a relevant set of candidate paragraphs and make the NER annotation process more efficient. In a previous work (Hodson and Spezzatti 2021), we used the inclusion of the word "data" in paragraphs as a proxy for candidates and focused our annotation effort only on these paragraphs. While this method already reduced the number of candidates substantially, we found that mentions of datasets were present in less than 20% of the candidates, requiring a considerable amount of annotation time to obtain a sufficient number of dataset names. To circumvent this problem, we trained a binary classification model to generate the list of candidates.

The binary classification task is a multistep process that involves creating a labelled dataset, data cleaning, and model development. First, paragraphs from the extracted PDFs are arranged in table rows, with each row containing one paragraph. A list of known dataset names is used to label paragraphs with either a 1 or a 0 based on the presence of dataset names from the list. Results for this labelling task are summarized in Table 3. The dataset is then processed to remove stop words from the Natural Language Toolkit (NLTK) corpus (Loper and Bird 2002), numbers, non-ASCII characters, and symbols.

Vectorizing the text is a practical NLP technique that makes the text understandable for machines. For the binary classification, we use TF-IDF, which stands for term frequency-inverse document frequency (Ramos 2003). TF-IDF weighs each word by both its frequency and its importance: it is the product of term frequency (which measures how often the word appears in a document) and inverse document frequency (which gives more weight to words that are rare across the document collection). After vectorizing each paragraph, we trained and compared machine learning algorithms, namely support vector machine (SVM), logistic regression (LR), naive Bayes (NB), and random forest (RF). The performance of these models using the TF-IDF vectorizer is presented in Table 4. We also tried XLNet, a generalized autoregressive pre-training method for language understanding that achieves state-of-the-art results on text categorization (Yang et al. 2019). Data were randomly allocated to training, validation, and test sets, with split parameters varying across models: for the XLNet model, we used 70% for training, 15% for validation, and 15% for the test set, whereas for the other models we allocated 85% of the data for training and 15% for testing. In both cases, all models are tested on 15% of the data.
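The classical baselines can be reproduced with a short scikit-learn sketch, assuming the cleaned paragraphs and their binary labels are available as Python lists; the text cleaning is reduced to a simple regex here, and a linear SVM stands in for the SVM variant used.

import re
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

def clean(text):
    """Drop numbers, symbols, and non-ASCII characters, then normalize whitespace."""
    text = re.sub(r"[^A-Za-z\s]", " ", text)
    return re.sub(r"\s+", " ", text).lower().strip()

def compare_models(paragraphs, labels):
    texts = [clean(p) for p in paragraphs]
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.15, random_state=0, stratify=labels)
    vectorizer = TfidfVectorizer(stop_words="english")
    X_train_vec = vectorizer.fit_transform(X_train)
    X_test_vec = vectorizer.transform(X_test)
    models = {
        "TF-IDF/SVM": LinearSVC(),
        "TF-IDF/LR": LogisticRegression(max_iter=1000),
        "TF-IDF/NB": MultinomialNB(),
        "TF-IDF/RF": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train_vec, y_train)
        print(name)
        print(classification_report(y_test, model.predict(X_test_vec)))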
4.3 Named Entity Recognition (NER)

From the candidate paragraphs, we manually annotated the dataset names in an initial set of 1 K paragraphs using the Prodigy annotation tool (Montani and Honnibal 2018: Prodigy, a new annotation tool for radically efficient machine teaching). Prodigy is annotation software powered by active learning, designed to make data annotation more efficient and convenient; it supports different types of machine learning problems, such as classification and named entity recognition. We train an NER model on this set, based on a CNN architecture from the spaCy NLP library (Honnibal and Montani 2017: spaCy 2, natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing). We then used an uncertainty sampling with beam search workflow to correct the entities about which the model is most uncertain. This strategy quickly improves model performance by guiding annotation toward the most informative examples in the dataset. In this process, we progressively improved the F1 score of our model and created a 5 K annotated training set focused on dataset name identification.

After building a robust training set of annotations, we compared two alternative approaches for the NER task: a Bi-LSTM with a Conditional Random Field (CRF) layer and fine-tuning a pre-trained BERT model. Both approaches are inspired by the recent literature on NER and the performance of these models on standard datasets. In addition to the dataset names, a few other entities corresponding to datasets' metadata are also annotated: description, owner, samples, attributes, methods, and results. The prediction of these entities is not in the scope of this paper.

4.3.1 Bidirectional LSTM with CRF

LSTM networks were introduced to get around the long-term dependency problem encountered with recurrent neural networks (RNNs) (Hochreiter and Schmidhuber 1996). With gates that either add or forget information, an LSTM can leverage information from words located several sentences away in the text. A Bidirectional LSTM is composed of two LSTMs, one reading the input in the forward direction and the other in the backward direction, which increases the contextual information available to the model compared to a simple LSTM. The CRF layer is used to jointly decode the labels in each sequence. Our NER problem is framed as a sequence tagging task, using the BILUO scheme for tag representation (Table 1), in which entities are tagged with the semantic category preceded by one of the defined prefixes. The 5 K set of annotations is split into an 85% training set and a 15% test set on which the results are reported. We train our model using 300-dimensional word-embedding features trained on the CommonCrawl dataset with Global Vectors for Word Representation (GLOVE) (Pennington et al. 2014). We used two of the embeddings: one with a 400 K vocabulary size and the other with a 1.9 M vocabulary size. We use two
recurrent network layers of 100 dimensions and optimize using the Adam optimizer (Kingma and Ba 2014). The two models were trained for four epochs with a 0.05 learning rate.

4.3.2 BERT

Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al. 2018) is a transformer model trained in a self-supervised way on a large corpus of English text from Wikipedia and BookCorpus. The base model has 110 M parameters and the large one 340 M; in this paper we use the BERT base model. The large size of the training corpus and the complexity of the model allow it to learn an inner representation of the language and make it an excellent base model for fine-tuning on many different natural language processing tasks, such as NER. Since the publication of the first BERT paper, different versions of BERT have appeared at the top of model rankings for their state-of-the-art performance. The same dataset is used as for the Bi-LSTM CRF model. We used the BERT base uncased pre-trained model, fine-tuned for five epochs with a 5e-5 learning rate. Using BERT and a transfer learning methodology had several advantages in our case. First, the model benefits from pre-training and already understands the English language before the fine-tuning step, which enables it to learn from far fewer data points. In addition, BERT is built with transformers, which are non-sequential and use self-attention, making it very effective at exploiting long dependencies across multiple sentences, as required for identifying dataset names.
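A minimal sketch of the fine-tuning setup with the Hugging Face transformers library is given below, assuming the annotations have already been converted to word-level BILUO tags; batching, evaluation, and the full five-epoch schedule are omitted, and the label-alignment choice (sub-word pieces inherit their word's tag) is one common convention rather than necessarily the one used in the study.

import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

TAGS = ["O", "B-DATASET_NAME", "I-DATASET_NAME", "L-DATASET_NAME", "U-DATASET_NAME"]
tag2id = {t: i for i, t in enumerate(TAGS)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(TAGS))
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def encode(words, word_tags):
    """Tokenize one pre-split sentence and align word-level tags with sub-word tokens."""
    enc = tokenizer(words, is_split_into_words=True, truncation=True,
                    return_tensors="pt")
    labels = []
    for word_idx in enc.word_ids(batch_index=0):
        # special tokens get -100 and are ignored by the loss;
        # sub-word pieces inherit the tag of the word they belong to
        labels.append(-100 if word_idx is None else tag2id[word_tags[word_idx]])
    enc["labels"] = torch.tensor([labels])
    return enc

def train_step(words, word_tags):
    model.train()
    batch = encode(words, word_tags)
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()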
4.4 Classification by SDGs

We classified research papers by the main topics they cover, using a text categorization model with 18 classes: the 17 global goals and a label specifying that no SDG is among the main topics of the publication. Since we only wanted to identify main topics, our approach was to use only the article's abstract to make the prediction. The abstract generally covers why the research was initiated and what methodologies and results are developed in the paper, and we assumed that this provides enough contextual information to identify when one or more of the global goals are studied and discussed later in the paper.

Table 1 BILUO scheme used for the NER on dataset names

Tag               Description
B-DATASET_NAME    The first token of a multi-token entity
I-DATASET_NAME    An inner token of a multi-token entity
L-DATASET_NAME    The final token of a multi-token entity
U-DATASET_NAME    A single-token entity
O                 A non-entity token
A set of 2000 abstracts was manually annotated using the Prodigy software. To make the annotation effort more efficient, a list of keywords was defined for each SDG and used as a proxy to suggest abstracts to annotate. For example, for SDG 1 (end poverty), keywords such as "unemployment," "disparities," "microfinance," and "poverty" were used. A few abstracts were ambiguous or unclear as to whether the paper actually covered a given goal. There were also some general science abstracts that could be misinterpreted as SDG 3 (health) but were not actually discussing this SDG in their contents. These examples were rejected, and we only used abstracts that were clear about the goals covered.

From the manual annotations, we trained a multi-label supervised text categorization model. Multi-label text categorization organizes text documents into multiple non-mutually exclusive classes. Our training set was unevenly distributed across SDGs and composed of 1700 examples (300 being left in the test set), and a third of the goals had too little data to produce any result. Data availability is a barrier that can prevent NLP models from performing well, but recent developments in NLP have shown that this limitation can be alleviated using a technique known as few-shot learning (FSL). In FSL, a small sample of training data is provided to a model that must make reliable predictions from limited information; FSL usually works well with large pre-trained language models. We used the OpenAI GPT-3 (Generative Pre-trained Transformer) model (Brown et al. 2020), a third-generation, autoregressive language model that leverages deep learning to generate human-like text. GPT-3 has 175 billion parameters, which makes it one of the largest language models available today. This computationally expensive approach makes the model versatile and suited to a wide range of NLP tasks, including text categorization, and it is especially useful when training data is scarce and most other machine learning methods fail to perform well.

To use the pre-trained GPT-3 model, we leverage the API that OpenAI released in 2020. Following GPT-2, which was initially not publicly released to prevent potential harm from misuse of its powerful architecture, this API was released so that potentially harmful uses of the model, such as spamming or harassment, could be controlled. The API provides access to four different versions of the model: "ada," "babbage," "curie," and "davinci." "Ada" is the fastest and cheapest model and can perform well on simple classification tasks; "babbage" and "curie" can handle more nuanced tasks and perform better on more complicated ones; and "davinci" is the largest (175 billion parameters), most powerful, longest to train, and most expensive of the available models. In this study, we used the "ada" version, as we try to validate the ability of the model to learn with a limited amount of available data. In the future, some of the more powerful models can be tested.
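The few-shot framing can be illustrated with a small sketch: labelled abstracts are placed in the prompt, and the model completes the label for a new abstract. The prompt format and the example abstracts are our own illustration, and the call follows the completion-style interface of the openai Python client from around the time of the study (it has since changed), so the exact parameter names should be treated as indicative.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical few-shot examples; real prompts would use annotated abstracts.
FEW_SHOT_EXAMPLES = [
    ("We evaluate microfinance programs and their effect on household poverty.", "SDG1"),
    ("This paper models urban air pollution exposure of school children.", "SDG3"),
    ("We analyse classroom interventions that improve numeracy outcomes.", "SDG4"),
]

def build_prompt(abstract):
    lines = ["Classify the abstract with one label among SDG1-SDG17 or NONE.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines += [f"Abstract: {text}", f"Label: {label}", ""]
    lines += [f"Abstract: {abstract}", "Label:"]
    return "\n".join(lines)

def classify(abstract):
    response = openai.Completion.create(
        engine="ada",          # smallest and cheapest engine, as in the study
        prompt=build_prompt(abstract),
        max_tokens=4,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()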
5 Results and Discussion

5.1 Parsing Papers

We parsed 12 K PDFs using CERMINE. The success rate is summarized in Table 2. The variability of the layouts of the papers included in our corpus explains the limitations of the extraction methodology.
5.2 Binary Text Classification

As shown in Table 3, the dataset used for training and testing the binary classification is highly imbalanced in favor of the "0" class, i.e., paragraphs in which no dataset name from the label list is present. This was detrimental to binary classifier performance. We used random under-sampling to remove majority-class ("0") examples until there was greater parity between the "0" and "1" classes. Several machine learning models with different word representations and one deep learning model, XLNet, were compared for the binary text classification. The results are presented in Table 4. XLNet shows the best performance, with 95% recall on non-interesting paragraphs and 92% precision on the interesting ones; the TF-IDF with RF shows comparable performance. For this step, we are most interested in high precision in identifying paragraphs with dataset mentions, which makes the two models mentioned above good candidate generation models, capable of discarding the majority of useless paragraphs (95% for XLNet) without losing many interesting ones. XLNet is therefore used to generate the candidates for the NER annotations.
Table 2 Success rate in the extraction of metadata from the 10 K corpus of papers

Metadata field   Success rate
Authors          0.93
Abstract         0.91
Titles           0.96
Affiliations     0.80
Table 3 Dataset imbalance and random under-sampling. Positive class examples refer to paragraphs where the dataset label name is present and labelled as a "1" for binary classification

                        Total paragraphs   Positive class examples ("1")   Ratio of positive class/total paragraphs
Original dataset        31,248             2281                            7.3%
Random under-sampling   5281               2281                            43.2%
Table 4 Binary text classifier with different types of models. We tested support vector machines, logistic regression, naive Bayes, and random forest in combination with TF-IDF. XLNet results are also shown

              Recall         Precision      F1 score       Accuracy
              0      1       0      1       0      1
TF-IDF/SVM    0.92   0.75    0.84   0.86    0.88   0.80    0.85
TF-IDF/LR     0.93   0.67    0.80   0.86    0.86   0.75    0.82
TF-IDF/NB     0.93   0.66    0.80   0.87    0.86   0.75    0.82
TF-IDF/RF     0.94   0.73    0.83   0.89    0.88   0.80    0.85
XLNet         0.95   0.75    0.83   0.92    0.88   0.79    0.86
Table 5 Comparison of precision, recall, and F1 score results on the test set for the three NER models trained

Model                            Precision   Recall   F1
GLOVE 400 K voc + Bi-LSTM CRF    0.78        0.60     0.68
GLOVE 1.9 M voc + Bi-LSTM CRF    0.88        0.74     0.80
BERT base uncased                0.75        0.91     0.82
5.3 Named Entity Recognition

We compared our results on dataset-name NER across the different models used. The metrics used to evaluate performance are precision, recall, and F1 score. For this work, recall is particularly important, as the goal is to retrieve as many dataset mentions as possible, knowing that we can always clean the extracted list afterward. The main results of our experiments are presented in Table 5. Using a word embedding with a larger vocabulary size improves the recall score by 14 points and the F1 score by 12 points. We trained our models on 8 CPUs. The BERT architecture took longer to train, requiring 350% more time on the same dataset with the same number of epochs and batch size compared to the LSTM with the GLOVE 1.9 M embedding. With the Bi-LSTM we obtain an 80% F1 score with good precision but a limited recall below 75%. The BERT base uncased fine-tuned model, while showing a lower precision score than the Bi-LSTM CRF, outperformed it on F1 and, more importantly, on recall, which is our metric of interest. With 82% F1 and 91% recall, the results demonstrate the capability of the model to retrieve dataset name information effectively. Further improvement of the model is still limited by the size of the training set, and additional annotations will be needed to improve these results and validate their generalization. For this, we plan to focus our efforts on improving the active learning strategy and on annotating a more diverse dataset across SDG areas.
Fig. 3 Distribution of the proportion of SDG labels in our training set

Table 6 Comparison of precision, recall, and F1 score results on the test set for the text categorization model

SDG     Recall   Precision   F1
SDG1    0.64     0.86        0.74
SDG2    0.36     0.57        0.44
SDG3    0.67     0.43        0.58
SDG4    0.13     0.20        0.16
SDG5    0.71     0.63        0.67
SDG7    0.50     0.25        0.33
SDG8    0.50     0.40        0.44
SDG10   0.42     0.56        0.48
SDG12   0.13     0.25        0.16
SDG13   0.56     0.42        0.48
SDG14   0.14     0.50        0.22
SDG15   0.46     0.55        0.50
SDG16   0.50     0.50        0.50
5.4 Classification by SDGs

The proportion of annotations varies by goal, the most frequently annotated goals being SDG 1, SDG 3, SDG 8, and SDG 10 (Fig. 3). The results of the text categorization model on 13 of the 17 classes are shown in Table 6. The four goals not included here did not have enough training examples for the model to learn. The performance varies greatly across the goals, with better
performance on those with more training data available, SDG 1 and SDG 3. The model understandably struggled to learn well for more than half of the SDGs, because the variability of the subjects and contexts related to these goals was not covered by the limited data available. The weighted average F1 score across the 13 goals is 0.47. The interesting results here are for SDG 1, the SDG for which we had the most annotated data: even though recall is still somewhat low, the precision and F1 obtained are good and can probably be further improved. What these results demonstrate is that by collecting more data, probably targeting at least 500 training samples for each goal, we should be able to achieve good performance on the paper categorization task. We may still observe variability due to the ambiguity and complexity of the context to learn behind each goal, as can be noted across goals with similar amounts of data. For example, SDG 4, 5, 8, and 10 had a comparable training size, but SDG 5 outperformed the others, while SDG 4 underperformed. There may be several explanations for this. The first is the quality of the annotated data: to have a robust model, we need sufficient diversity and coverage of relevant topics for each goal. For example, for SDG 4, relevant topics include literacy, numeracy, scholarships, teaching, and access to education facilities, and several hundred or several thousand papers may be necessary to cover them. Another explanation for this variability can be the noise included in the data. A substantial number of these papers covered more than one goal at the same time, making it harder to distinguish the individual contextual elements, especially when two goals frequently appear together, like SDG 1 with SDG 10 or SDG 4 with SDG 11.
5.5 Discussion

A few elements of the SDG data catalog still need to be developed. An entity linking model will be created on top of the NER in order to disambiguate the dataset names identified by the NER and link them to a unique identifier within a knowledge base. A user interface will also be created and will be freely and openly accessible online; we plan to have it deployed and available in 2022. The retrieved datasets are validated for their quality and relevance to current research. They are evaluated for update frequency, accessibility, ownership, and completeness. Some of these evaluations, like accessibility and frequency, still need to be done manually at this time, but we plan to automate some of them in the future.

In a recent work, researchers have shown that large AI models with millions of parameters trained on huge datasets can emit more than 600,000 pounds of carbon dioxide equivalent (Strubell et al. 2019). Aware of this issue, we tried to limit, in certain ways, the environmental impact of our work. First, using active learning to annotate the data allowed the models to learn from a smaller dataset, which was collected to maximize the information that each new data point adds. Eventually, we trained our models on a few tens of thousands of examples. Once the catalog is in production,
we plan to monitor and measure the carbon emissions and sustainability impact of maintaining the platform and models using existing cloud solutions. As described in the SDG classification section, the availability of data varies by goal; therefore, not all goals will have the same number of datasets retrieved. While this may create a bias toward goals with more available data and existing literature, we believe this effect would be limited, as the catalog is intended to bring visibility to existing resources while recognizing where elements may be missing. Not all goals will ultimately benefit from data science and AI equally, as some goals may require more qualitative research, policy engagement, or low-tech solutions.
6 Conclusion and Future Work

Recent statistics have shown that the research community is generating more and more publications and data each year, with more than 2.5 M new publications each year according to the World Bank (https://www.worldbank.org). Researchers can easily be overwhelmed by this volume of information, which makes it even more difficult to navigate the data sea and identify what is important. At the same time, work on sustainable development is becoming more urgent, as the new IPCC report suggests, and we need to make it easier for people to access relevant data resources. In this paper, we described the methodology of a system in active development, the SDG Data Catalog. This system will support the research community in working on and advancing the 17 SDGs. We presented the different steps of our pipeline to extract dataset names. We first showed how a binary classifier was able to efficiently extract candidate paragraphs, comparing different machine learning methodologies: the XLNet model and the RF model with TF-IDF representation showed the best performance and a good ability to filter out useless paragraphs without missing too many relevant ones. From these paragraphs, we used NER to extract dataset name entities. Two state-of-the-art deep learning models showed good performance on this task: a Bi-LSTM with CRF network and a BERT model fine-tuned on the task. While the results demonstrate that we can identify a substantial proportion of the datasets, we can still further improve the models and validate the generalization of the results to more papers. For this, adding more information with more annotations will be critical, and improving the active learning strategy is an important element in making this efficient.

The main contributions of this paper are the establishment of a new baseline recall score and F1 score for the dataset name identification task using BERT. We also established new methodologies for extracting relevant research articles, parsing them, and extracting candidate paragraphs from them using a binary classifier. Finally, we also presented results on categorizing articles according to the SDGs,
which will help organize datasets into knowledge networks, and showed that with additional annotated data, good results can be obtained.

It has been 6 years since the implementation of the SDGs started, and we are 9 years away from the target date, yet much of the data is either not available, not findable, or out of date. In order to accelerate the development of sustainable solutions, there is an urgent need for a unified, structured catalog of available data. This will not only accelerate the development of AI solutions for the SDGs but also highlight the data gaps that hinder work toward certain indicators and goals. As a recent study demonstrated (Allen et al. 2021), a wide range of datasets relevant to the SDGs has already been identified that could help monitor 15 of the goals and 69 of the indicators. This does not include all the datasets that are used for publications and that remain hidden in the huge volume of published papers. The actual potential of the existing data is therefore even higher and could be enhanced in the years to come once gaps are identified. Shedding light on these datasets holistically is the goal of the SDG Data Catalog. Once the data are identified, important work must also be done to understand the quality, validity, and impact of the datasets. Biases inherent in the datasets, such as in some healthcare data that discriminate against certain ethnicities, must also be identified. With this additional information, researchers and decision-makers can be informed more effectively about the important datasets to use. SDGs can also be monitored more efficiently across the world by tracking local indexes and metrics that inform about progress.
References

Allen, Cameron, Maggie Smith, Maryam Rabiee, and Hayden Dahmm. 2021. A Review of Scientific Advancements in Datasets Derived from Big Data for Monitoring the Sustainable Development Goals. Sustainability Science 16 (5): 1701–1716.
Amel-Zadeh, Amir, Mike Chen, George Mussalli, and Michael Weinberg. 2021. NLP for SDGs: Measuring Corporate Alignment with the Sustainable Development Goals. Available at SSRN 3874442.
Bautista-Puig, Núria, Ana Marta Aleixo, Susana Leal, Ulisses Azeiteiro, and Rodrigo Costas. 2021. Unveiling the Research Landscape of Sustainable Development Goals and Their Inclusion in Higher Education Institutions and Research Centers: Major Trends in 2000–2017. Frontiers in Sustainability: 12.
Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. Language Models Are Few-Shot Learners. Advances in Neural Information Processing Systems 33: 1877–1901.
Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805.
Fuso Nerini, Francesco. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature Communications 11 (1): 1–10.
Hochreiter, Sepp, and Jürgen Schmidhuber. 1996. LSTM Can Solve Hard Long Time Lag Problems. Advances in Neural Information Processing Systems 9.
Hodson, James, and Andy Spezzatti. 2021. Hidden in Plain Sight: Building a Global Sustainable Development Data Catalogue. In ICT Analysis and Applications, 803–811. Singapore: Springer.
Huang, Zhiheng, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF Models for Sequence Tagging. arXiv preprint arXiv:1508.01991.
Iwai, Isamu, Miwako Doi, Koji Yamaguchi, Mika Fukui, and Yoichi Takebayashi. 1989. A Document Layout System Using Automatic Document Architecture Extraction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 369–374.
Kern, Roman, Kris Jack, Maya Hristakeva, and Michael Granitzer. 2012. TeamBeam: Meta-Data Extraction from Scientific Literature. D-Lib Magazine 18 (7): 1.
Khodiyar, Varsha, Heidi Laine, David O'Brien, Raul Rodriguez-Esteban, Yasemin Türkyilmaz-van der Velden, Grace Baynes, Matthew Brack, et al. 2021. Research Data: The Future of FAIR White Paper. Figshare. https://doi.org/10.6084/m9.figshare.14393552.v1.
Kingma, Diederik P., and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.
Knoth, Petr, and Zdenek Zdrahal. 2012. CORE: Three Access Levels to Underpin Open Access. D-Lib Magazine 18 (11/12): 1–13.
Lee, Raejung, and Jinho Kim. 2021. Developing a Social Index for Measuring the Public Opinion Regarding the Attainment of Sustainable Development Goals. Social Indicators Research 156 (1): 201–221.
Loper, Edward, and Steven Bird. 2002. NLTK: The Natural Language Toolkit. arXiv preprint cs/0205028.
Lopez, Patrice. 2009. GROBID: Combining Automatic Bibliographic Data Recognition and Term Extraction for Scholarship Publications. In International Conference on Theory and Practice of Digital Libraries, 473–474. Berlin, Heidelberg: Springer.
Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou. 2021. Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. IPCC. Cambridge University Press.
Marinai, Simone. 2009. Metadata Extraction from PDF Papers for Digital Library Ingest. In 2009 10th International Conference on Document Analysis and Recognition, 251–255. IEEE.
Pennington, Jeffrey, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1532–1543.
Ramos, Juan. 2003. Using TF-IDF to Determine Word Relevance in Document Queries. Proceedings of the First Instructional Conference on Machine Learning 242 (1): 29–48.
Sang, Erik F., and Fien De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. arXiv preprint cs/0306050.
Shuxin, Zhu, Xie Zhonghong, and Chen Yuehong. 2013. Information Extraction from Research Papers Based on Conditional Random Field Model. TELKOMNIKA Indonesian Journal of Electrical Engineering 11 (3): 1213–1220.
Sinha, Arnab, Zhihong Shen, Song Yang, Hao Ma, Darrin Eide, Bo-June Hsu, and Kuansan Wang. 2015. An Overview of Microsoft Academic Service (MAS) and Applications. In Proceedings of the 24th International Conference on World Wide Web, 243–246.
Strubell, Emma, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. arXiv preprint arXiv:1906.02243.
Tkaczyk, Dominika, Paweł Szostek, Mateusz Fedoryszak, Piotr Jan Dendek, and Łukasz Bolikowski. 2015. CERMINE: Automatic Extraction of Structured Metadata from Scientific Literature. International Journal on Document Analysis and Recognition (IJDAR) 18 (4): 317–335.
Vinuesa, Ricardo, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature Communications 11: 233.
Yang, Zhilin, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. Advances in Neural Information Processing Systems 32.
Zhang, Daniel, Saurabh Mishra, Erik Brynjolfsson, John Etchemendy, Deep Ganguli, Barbara Grosz, Terah Lyons, et al. 2021. The AI Index 2021 Annual Report. arXiv preprint arXiv:2103.06312.
An Empirical Analysis of AI Contributions to Sustainable Cities (SDG 11)

Shivam Gupta and Auriol Degbelo
Abstract Artificial intelligence (AI) presents opportunities to develop tools and techniques for addressing some of the major global challenges and deliver solutions with significant social and economic impacts. The application of AI has far-reaching implications for the 17 Sustainable Development Goals (SDGs) in general and sustainable urban development in particular. However, existing attempts to understand and use the opportunities offered by AI for SDG 11 have been explored sparsely, and the shortage of empirical evidence about the practical application of AI remains. In this chapter, we analyze the contribution of AI to support the progress of SDG 11 (Sustainable Cities and Communities). We address the knowledge gap by empirically analyzing the AI systems (N = 29) from the AI×SDG database and the Community Research and Development Information Service (CORDIS) database. Our analysis revealed that AI systems have indeed contributed to advancing sustainable cities in several ways (e.g., waste management, air quality monitoring, disaster response management, transportation management), but many projects are still working for citizens and not with them. This snapshot of AI's impact on SDG 11 is inherently partial yet useful to advance our understanding as we move towards more mature systems and research on the impact of AI systems for the social good.

Keywords Artificial intelligence · Sustainable cities · AI for SDGs · Environment · Citizen participation · SDG 11
S. Gupta (*) Bonn Alliance for Sustainability Research, University of Bonn, Bonn, Germany Detecon International GmbH, Berlin, Germany e-mail: [email protected] A. Degbelo Institute of Geoinformatics, University of Münster, Münster, Germany e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Mazzi, L. Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series 152, https://doi.org/10.1007/978-3-031-21147-8_25
1 Introduction

Artificial intelligence (AI) has the potential to mitigate several issues facing cities, such as road safety, waste management, air pollution, and disaster risk reduction (Gupta et al. 2021). Examples of recent AI systems for improved well-being in cities include a tool for semiautomatic digitization of sketch maps to support the inclusion of indigenous communities through the documentation of their land rights (Degbelo et al. 2021; Chipofya et al. 2020), a system for traffic monitoring based on wireless signals (Gupta et al. 2018a), approaches for efficient waste management (Barns 2019), air quality modelling (Gupta et al. 2018b), and urban health monitoring systems (Allam and Jones 2020). Nonetheless, there is a lack of systematically observed knowledge and multidisciplinary perspective, with limited coherence about the characteristics of AI contributions to sustainable cities (Zheng et al. 2020). Furthermore, as Israilidis et al. (2021) argued, the current research landscape is mainly focused on technical issues, leaving aside social impacts, participation capabilities, and knowledge-sharing aspects of multi-stakeholder and citizen-inclusive development. Thus, AI implementations remain poorly understood. To address this gap, this chapter looks into AI systems that contribute to advancing sustainable cities in several ways, serving Sustainable Development Goal (SDG) 11 proposed by the United Nations (UN) within the 2030 Agenda. The question asked is: what are the AI4SG contributions to more sustainable cities in the digital age? AI4SG is defined in line with Cowls et al. (2021a) as the development of AI systems that enable socially preferable or environmentally sustainable developments. We look into the nature of the contribution of AI systems to more sustainable cities (what solution is proposed, to whom, and where) and the SDG indicators covered (which indicators are covered, which are still underrepresented). The analysis also covers the six citizen-centric challenges for smarter cities brought forth in Degbelo et al. (2016): the engagement of citizens, the improvement of citizens' data literacy, the pairing of quantitative and qualitative data to unlock new insight about city phenomena, the development of open standards, the development of personal services, and the development of persuasive interfaces, all of which can be supportive of inclusive progress towards SDG 11.
2 Related Work

Cities are complex structures, growing worldwide at a fast pace (Batty 2009). Commuter movement, capital flows, resources, and commodities lead to the emergence of city regions (Axinte et al. 2019). Due to their increasing population size, density, and location, cities are also prone to adverse effects such as soil, air, and water pollution and the impacts of climate change, which also affect surrounding rural areas. Prompt action is required in the form of new and innovative infrastructures and services for
addressing the increasing demands coupled with environmental and climate change impacts (Solecki et al. 2018). Urban areas have become increasingly digitalized over the last few decades due to significant advancements in digital technologies (Ismagilova et al. 2019). Cities are considered drivers of change and innovation (Fitjar and Rodríguez-Pose 2020). Several innovative approaches are being developed to gather detailed insights and opportunities for the planning and management of cities (Sharda et al. 2021; Rogers et al. 2020). Notions such as the smart city touch upon several dimensions or application domains where technological infrastructure, system integration, and data analysis can help optimize resources in cities (Ismagilova et al. 2019). At the same time, cities are also trying to reconfigure themselves for a sustainable future, with the aim of improving the quality of life for all citizens (Barlacchi et al. 2015; Bibri 2021). The importance of cities is well recognized by the internationally agreed 2030 Agenda for Sustainable Development and the Paris Agreement to reduce the impact of climate change (Aust 2019). In fact, two-thirds of all Sustainable Development Goals (SDGs) can only be achieved in and with the help of cities (Acuto 2016). Emphasizing the opportunities offered by digital technologies at a city scale can significantly contribute towards the progress of sustainable development in line with the 2030 Agenda.
2.1 AI for SDG 11

Artificial intelligence (AI) and machine learning approaches are emerging as critical components of a smart and sustainable future, optimizing services and addressing several social, environmental, and economic aspects of cities (Allam and Dhunny 2019). Thus, they could support progress towards SDG 11 (i.e., "make cities and human settlements inclusive, safe, resilient and sustainable"). AI is fostering further advancements in technologies such as the Internet of Things (IoT), blockchain, robotics, precision health, and quantum computing (Firouzi et al. 2021; Dinh and Thai 2018; Rajan and Saffiotti 2017; Tajunisa et al. 2021; Dai 2019), helping make sense of large quantities of data by utilizing the innovation ecosystems that exist mainly in cities (Rabah 2018). AI is instrumental in advancing the digitization processes in several cities (Sougkakis et al. 2020; Villagra et al. 2020; Majumdar et al. 2021), transforming them into more inclusive and sustainable environments. Advancements in Earth Observation (EO) technologies empowered with AI are supporting various aspects of cities (Kuffer et al. 2020, 2021). From monitoring land use and pollutants in cities to supporting efficient energy and resource consumption (Yatoo et al. 2020; Shahid et al. 2021; Șerban and Lytras 2020), AI provides opportunities to address complex social inequalities and environmental interrelationships. Therefore, AI could be considered a crucial tool for addressing a wide array of challenges for future sustainable cities. Given the complexity and challenges of rapid urbanization, exploring the
wide range of potential solutions across several domains may be desirable, as is evident from the work mentioned above. The possibilities offered by AI can only be utilized to their full potential for the SDGs if ethical, social, and environmental values are uniformly met (Hilbert 2016; Gupta et al. 2021). The targets within the SDGs are intertwined as a unified framework in the form of 17 goals, forming an "indivisible whole" (Nilsson et al. 2016). The goals and the targets are interlinked and depend on each other, but views on how they are linked are still evolving (Nilsson et al. 2016; Vinuesa et al. 2020). Also, the capacity for integrating and intersecting intelligence from diverse domains for AI applications is growing. AI applications have the potential to make a significant contribution when several complex aspects are well integrated into the system for more inclusive action (Allam and Dhunny 2019). There is also a significant gap between cities that have and have not made sufficient progress in the digitization sphere, creating a social divide and increasing inequalities (Reddick et al. 2020; Chase 2020). The introduction of AI also risks amplifying some social and ethical challenges, such as unfair bias, discrimination, or opacity in decision-making (Galaz et al. 2021). AI systems also require large amounts of energy and cause greenhouse gas (GHG) emissions (Taddeo et al. 2021; van Wynsberghe 2021). This highlights that the application of AI and associated technologies, if not used mindfully, could also hurt social and economic aspects, along with impacts on climate, biodiversity, and ecosystems around the world (van Wynsberghe 2021). Therefore, it is crucial to apply AI carefully, ensuring that efforts to harness the advantages of this technology outweigh its associated negative impacts.
2.2 Citizen-Centric Approach for SDG 11

The aim of SDG 11 includes encouraging the development of cities and communities in a more inclusive, safe, resilient, and sustainable manner by making urbanization more inclusive for stakeholders, reducing the adverse effects of natural disasters, and furthering local-to-global policies for sustainable development. SDG 11 addresses the urban level with 10 targets and 15 indicators developed by the United Nations (2015). Implementation pathways are not comprehensively understood, as they require coordinating the efforts of various stakeholders, embracing flexible and adaptive processes to accommodate changing circumstances, and allocating resources to address uncertain future threats, especially in the context of resilience (Croese et al. 2020). Limited evidence exists that genuine sustainability is integrated when approaches are techno-centric, suggesting knowledge-based development to address the existing complexities (Yigitcanlar et al. 2019). AI could support the progress of SDG 11 through new solutions that enhance the food, health, transport, water, and energy services available to the population. However, to date, less attention has been paid to the involvement of citizens in the process (Martens 2019), which has enormous potential to contribute to SDG progress through localization (Li et al. 2018).
AI systems enabling citizen participation can enhance action towards sustainability through the collection of timely, high-resolution data, which could strengthen the knowledge base required for SDG progress (Fritz et al. 2019). Social and cultural information dictates the context in which AI is implemented. Citizen participation provides the public with the opportunity to support policy development, leading to trust-building, credibility, and ultimately inclusiveness in taking action towards the SDGs. The SDGs require actions that can transform existing practices across sectors. Fraisl et al. (2020) demonstrate that citizen participation "could contribute" to 76 SDG indicators (33%), covering 60% of the indicators of SDG 11. It is crucial to integrate citizen-centric pathways to balance technological, social, and environmental factors (Kirwan and Zhiyong 2020). Experiential, traditional, or local knowledge from citizens could be a valuable source for addressing concerns related to disasters (Munsaka and Dube 2018), urban planning (Antweiler 2019), and environmental monitoring, along with climate change mitigation (Makondo and Thomas 2018; Magni 2017). Participating citizens could act as relevant agents of change, mobilizing civil society for targets and indicators concerning sustainable consumption (Micheletti et al. 2014), air quality monitoring (Gupta et al. 2018b), disaster risk mitigation (Ferri et al. 2020), and sustainable and inclusive urbanization (Newman et al. 2020). Multi-stakeholder participation and citizen-centered knowledge hubs could be instrumental for sustainable cities (Saner et al. 2020).
2.3 Existing Gaps

AI is not the sole solution for developing sustainable cities, but as illustrated above, efforts to use AI for sustainable cities are increasing rapidly. These could help address complex challenges faced by humanity in social, environmental, and economic domains (Vinuesa et al. 2020). If utilized carefully, outcomes supportive of sustainable development can be harnessed at a grand scale. Therefore, it is essential to learn about the impact of AI as a tool for global good in a more systematic manner. Understanding this impact requires understanding the factors that determine the advantage of using AI in a particular context as part of sociotechnical systems (Cowls et al. 2021b). The SDGs here may provide a useful framework. Nevertheless, the SDGs are sometimes considered ambitious and wide-ranging (Pekmezovic 2019); this ambitious and wide-ranging nature also inspires and stimulates action for sustainable development (Walker et al. 2019). Several systematic approaches were undertaken in the recent past to gather evidence of the use of AI for the SDGs worldwide, resulting in datasets and knowledge bases organized in different forms and presenting a distinct picture of the impact AI has on the SDGs (Vinuesa et al. 2020; Tomašev et al. 2020; Cowls et al. 2021b; Palomares et al. 2021). However, it is imperative to note that these studies reflect on the impact of AI for the SDGs at a high level and often include evidence from experimental closed systems. A deeper analysis is required to understand the role of different actors, practitioners, impacts, social implications, and the contribution of AI to specific sub-goals of
the SDGs, which is scarcely discussed. Additionally, understanding discrepancies between the relevance of AI at the goal level and deeper conflicts among SDG targets and indicators is essential to realize how key stakeholders could carefully use AI for sustainable development. Overall, a flawed understanding of the complexities of implementing AI systems for the SDGs, governance hurdles, a lack of knowledge about the influence of AI on suitable targets and indicators, and unclear roles and responsibilities among stakeholders lead to uncoordinated exercises, limiting the full potential of technological innovation for sustainable development. Additionally, civic awareness, citizen engagement, ownership, and citizen-centric approaches must be enhanced for inclusive action (Guan et al. 2019; Rubio-Mozos et al. 2019; Thinyane 2018). The remainder of the chapter intends to inform discussions on both AI for SDG 11 and deeper citizen engagement through a systematic analysis of the contributions of past and ongoing projects.
3 Method

We critically analyzed existing projects from the AIxSDGs and CORDIS databases to synthesize progress and learn about current gaps. The data collection and analysis were done in four steps.

Step 1: Data Collection AIxSDGs. We retrieved all projects from the Oxford Initiative on AIxSDGs that are related to SDG 11. Contrary to the AIxSDGs initiative, which did the mapping at the goal level, the mapping in this work was done at the indicator level.

Step 2: Data Collection CORDIS. We retrieved all projects from the CORDIS database that are related to the theme of the chapter. There are 12 possible types of contributions to search for in the database: "Projects," "Results Packs," "Research*EU Magazines," "Results in Brief," "News," "Events," "Interviews," "Report Summaries," "Project Deliverables," "Project Publications," "Exploitable Results," and "Programs." We decided to focus on "Project Deliverables," "Project Publications," and "Exploitable Results," since we are interested in concrete outcomes. Focusing on these types of contributions is also consistent with the data obtained from AIxSDGs (step 1), because the projects obtained from step 1 share the common feature that they have been successfully implemented on the ground for at least 6 months and have no measured negative impact. Besides, the CORDIS Web application enables the search of results by application domain and offers 11 application domains: "Industrial Technologies," "Fundamental Research," "Transport and Mobility," "Health," "Society," "Security," "Climate Change and Environment," "Energy," "Space," "Digital Economy," and "Food and Natural Resources." The two authors went through the 10 targets of SDG 11 and mapped them to the 11 themes of the CORDIS platform. The results of the mapping were 11.1 => (NA), 11.2 => (transport and mobility), 11.3 => (society), 11.4 => (NA),
11.5 => (climate change and environment), 11.6 => (climate change and environment), 11.7 => (society), 11.a => (NA), 11.b => (NA), and 11.c => (NA). As a result, we searched for the project deliverables, project publications, and exploitable results related to the themes "Transport and Mobility," "Climate Change and Environment," and "Society." "Artificial intelligence" or an AI system can be defined in many ways, as shown in Samoili et al. (2020). Hence, the search for AI-related work in the CORDIS database was done using a variety of keywords. We used two sources for these keywords: keywords pointing at the subdomains of AI suggested by the Joint Research Centre (Samoili et al. 2020) and keywords from the AI Glossary by Hutson (2017). The search strings used were:

• JRC subdomains search string: "Knowledge representation" or "Automated reasoning" or "Common sense reasoning" or "Planning" or "Scheduling" or "Searching" or "Optimisation" or "Computer vision" or "Audio processing" or "Multi-agent systems" or "Robotics" or "Automation" or "Connected vehicles" or "Automated vehicles" or "AI Services" or "AI Ethics" or "Philosophy AI."
• AI glossary search string: "Algorithm" or "Backpropagation" or "Black Box" or "Deep Learning" or "Expert System" or "Generative Adversarial Networks" or "Machine Learning" or "Natural Language Processing" or "Neural Network" or "Neuromorphic Chip" or "Perceptron" or "Reinforcement Learning" or "Strong AI" or "Supervised Learning" or "Tensorflow" or "Transfer Learning" or "Turing Test."

The search, performed on October 3, 2021, returned 333 results.

Step 3: Filtering. The results obtained from the CORDIS database were filtered to keep only the projects that have developed AI systems. At this stage, some outcomes (N = 320) from the previous step were excluded, and N = 13 results were included in the final analysis. Sixteen projects were identified from the AIxSDGs database. At the end of this step, 29 projects remained (see Table 1), which were included in the final analysis.

Step 4: Coding. For each project selected (steps 1 and 3), we coded the nature of the contribution (what solution is proposed, to whom, and where), the SDG indicators covered (the indicators to which the proposed AI system is relevant), and the citizen-centric challenges to which the AI system is relevant. The coding was done deductively and went through many iterations (i.e., the categories were defined a priori based on the existing scientific and grey literature, and we remained open to extending the original list during the coding if some categories were missed). The types of system coded include, for instance, the autonomous vehicle (i.e., self-driving cars, autonomous drones). As for the beneficiary, we used a relatively coarse categorization based on who pays for the product or system: companies/businesses, government/public sector, and citizens. Prototypes developed during research projects, unless they have a dedicated citizen focus, fell under the category of government/public sector.
Table 1 Overview of the projects and their contributions. Legend: G/PS (government/public sector), C/B (company/business), AI4SDG (data from the AI4SDG database), CORDIS-G (data from the CORDIS database, obtained after the search using the keywords from the AI Glossary), CORDIS-J (data from the CORDIS database, obtained after the search using the keywords from the JRC)

Project name | Type of system | Key beneficiary | Target | Social impact | Dataset
IRBin | Robot | C/B | 11.6 | Efficient municipal waste management | AI4SDG
Prometea | Software application | G/PS | 11.b | Resource efficiency (substantial time savings) in the judicial system | AI4SDG
Brightics AI | Software application | C/B | 11.5 | Enhanced risk analysis (natural disasters, weather, social issues) | AI4SDG
National Fine Dust Forecast Project | Software application, analysis model | G/PS, citizens | 11.6 | Improved citizens' protection against air pollutants | AI4SDG
Ennet Eye | Software application | C/B | 11.b | Enhanced building energy management | AI4SDG
AIxAI | Software application | C/B | 11.b | Efficient resource distribution (transportation, energy) | AI4SDG
UNIST Heatwave Research | Analysis model | G/PS | 11.3 | Better informed human settlement planning | AI4SDG
FiveAI | Autonomous vehicle | G/PS, citizens | 11.2 | Enhanced public transportation infrastructure | AI4SDG
Optibus | Software application | C/B | 11.2 | Optimized transit in cities | AI4SDG
Seneka | Robot | G/PS | 11.5 | Faster disaster rescue operations | AI4SDG
Breeze | Software application | citizens, G/PS | 11.6 | Better informed air quality monitoring | AI4SDG
Qucit | Software application | citizens, G/PS | 11.7 | Improved resource finding (parking spaces, bikes) | AI4SDG
RUBSEE | Robot | C/B | 11.6 | Improved waste treatment | AI4SDG
AMP Robotics | Robot | C/B | 11.6 | More efficient recycling (plastic, metals) | AI4SDG
DiDi Smart Transportation Brain | Software application | G/PS | 11.2 | Enhanced transportation services | AI4SDG
Dynamic and Robust Wildfire Risk Prediction System | Analysis model | G/PS | 11.5 | Enhanced risk analysis (wildfire) | AI4SDG
MEDACTION 4 | Analysis model, software application | G/PS | 11.3 | Desertification management strategies | CORDIS-G
CLEOPATRA | – | G/PS | 11.5 | Enhanced oil pollution monitoring | CORDIS-G
REVAMP | – | G/PS | 11.5 | Better informed coastal disaster emergency management | CORDIS-G
DAYWATER | Software application | G/PS | 11.5 | Improved urban storm water monitoring | CORDIS-G
ENVISNOW | Analysis model, algorithm | G/PS | 11.5 | Enhanced modelling of snowmelt | CORDIS-G
FLOODMAN | Analysis model | G/PS | 11.5 | Improved monitoring of water bodies | CORDIS-G
Cybermove | Autonomous vehicle | G/PS | 11.2 | Enhanced public transportation infrastructure | CORDIS-J
geoland | Analysis model | G/PS | 11.3 | Geoinformation services for land monitoring | CORDIS-J
SITAR | Software application | G/PS | 11.6 | Improved monitoring of toxic waste | CORDIS-J
CAMELS | Analysis model | G/PS | 11.3 | Estimation of terrestrial carbon sink | CORDIS-J
MEGAFIRES | Analysis model | G/PS | 11.5 | Improved risk estimation (wildfire) | CORDIS-J
ECOSIM | Analysis model | G/PS | 11.6 | Improved air quality forecasting | CORDIS-J
SPHERE | Analysis model, software application | G/PS | 11.5 | Enhanced flood risk estimation | CORDIS-J
Deciding on the cities where the solution was deployed proved to be a challenge because of the varying levels of granularity at which the projects were documented. At times, the location where the solution was deployed was not reported at all. At other times, the solution was deployed in several cities (and, again, data on the exact deployment locations were unavailable or only sparsely available). For this reason, we had to resort to some simple rules: (1) include the city when it is explicitly mentioned in
the project description or some supplementary material (e.g., a demo video) on the Web; (2) when the system has been deployed in many cities (e.g., the Optibus project),1 use the city of the company's headquarters as the location for the project; and (3) for research projects documented in the CORDIS database, which often did not report deployment sites or had used several sites for cross-validation (as is typically the case for European projects), use the headquarters' location of the project's coordinating institution, consistent with the use of company headquarters in rule (2). A sketch of these rules is given at the end of this section.

The list of SDG indicators was taken from the UNDESA SDG Indicators Metadata repository (United Nations Department of Economic and Social Affairs (UNDESA) 2015; UNDESA Statistics Division 2021). Finally, the definition of citizen-centric challenges was taken from Degbelo et al. (2016): deep participation (i.e., working with citizens, not only for them), the data-literate citizenry (i.e., promotion of data literacy skills and the fostering of digital inclusion), pairing quantitative and qualitative data (i.e., the combination of quantitative data with volunteered geographic information from users, which is typically qualitative), open standards (i.e., data available as open data, along with the development or promotion of open standards for data collection, analysis, storage, and sharing), personal services (i.e., services adaptive to the abilities, expertise, and needs of individual users), and persuasive interfaces (i.e., interfaces that raise awareness about, stimulate, or encourage change towards more sustainable behaviors). The results of the coding are presented next.
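To make the three location-assignment rules concrete, the following minimal sketch (ours, for illustration only; the record fields and helper name are hypothetical and not part of the original coding toolchain) expresses them as a simple decision procedure:

```python
# Minimal sketch of the location-assignment rules (1)-(3) described above.
# The record fields ("city_mentioned", "company_hq", ...) are hypothetical and
# only illustrate the decision logic, not the toolchain actually used.
from typing import Optional


def assign_location(project: dict) -> Optional[str]:
    """Return a location for a project record, following rules (1)-(3)."""
    # Rule (1): use the city if it is explicitly mentioned in the project
    # description or in supplementary material (e.g., a demo video).
    if project.get("city_mentioned"):
        return project["city_mentioned"]
    # Rule (2): systems deployed in many cities are mapped to the city of the
    # company's headquarters.
    if project.get("deployed_in_many_cities") and project.get("company_hq"):
        return project["company_hq"]
    # Rule (3): CORDIS research projects without a clear deployment site are
    # mapped to the headquarters of the coordinating institution.
    if project.get("source") in {"CORDIS-G", "CORDIS-J"}:
        return project.get("coordinator_hq")
    return None  # location remains unknown


# Example: a record like the Optibus project would fall under rule (2).
print(assign_location({"deployed_in_many_cities": True, "company_hq": "Example City"}))
```

In the sketch, rule (1) takes precedence over rules (2) and (3), and a record matching none of the rules is simply left without a location.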
4 Results

We now report on the outcomes of the coding process. The reporting presents some descriptive statistics about the geographic distribution of the projects examined, their key beneficiaries, the types of systems developed, the targets and indicators for which they are relevant, and the citizen-centric challenges they connect to. Interpreting the data and deriving some speculative implications is done in Sect. 5. To facilitate readability, a project name is left in CAPITALS when the original project acronym was provided in capitals; otherwise, the project name is italicized.
1. https://www.aiforsdgs.org/all-projects/optibus
4.1 Geographic Distribution of AI Projects

Figure 1 shows the geographic distribution of the different projects. A safe interpretation of this map (given the different possible interpretations of location, see the discussion in Sect. 3) is that it gives an idea of where past and ongoing AI-powered projects related to SDG 11 have been initiated.
4.2 Key Beneficiaries of AI Projects

The majority of the projects (76%) in the datasets were targeted at the government or the public sector. Examples of this type of project include the Prometea project, which led to substantial time savings in the Argentinian judicial system; projects that try to enhance the public transportation infrastructure through the use of autonomous cars (e.g., Cybermove, FiveAI); and several projects that attempt to address the problem of environmental monitoring from different angles (e.g., ENVISNOW for monitoring snowmelt, CAMELS for monitoring the terrestrial carbon sink, and SPHERE for flood risk estimation). 24% of the projects targeted improvements for companies/businesses. Examples include projects that attempt to address the issue of efficient energy management in cities (e.g., Ennet Eye for building energy management, AIxAI for efficient transportation/energy resource distribution) and projects that attempt to improve waste treatment and management (e.g., IRBin, RUBSEE). Overall, only a few projects (10%) can be said to address the needs of civil society: Qucit has developed tools to facilitate the finding of parking spaces and bikes in cities; the National Fine Dust Forecast Project provided applications to inform citizens about the concentration of air pollutants, thereby helping them better protect themselves against these pollutants; and the Breeze project strives to provide better information about air quality through its platform.
4.3 Types of Systems

Robots (14%) are one type of AI contribution to more sustainable cities. They have been deployed to facilitate waste management (as done, for instance, in the RUBSEE, AMP Robotics, and IRBin projects) or to facilitate rescue operations during disaster management (e.g., the Seneka project). Other contributions are made in the form of software applications, for instance, to facilitate data analysis through an analytics platform (see the Brightics AI project) or to speed up work in the judicial domain (e.g., the Prometea project). Software application contributions were the most frequent in the dataset (52%). Another type of contribution (38%) comes in the form of analysis models (e.g., to predict heatwaves, see the UNIST Heatwave Research project). Two projects (Cybermove and FiveAI) are concerned with self-driving cars, and one
Fig. 1 Geographic distribution of the places where projects in the datasets have been initiated (top: overview; middle: zoom on central European countries; bottom: zoom on Asian countries)
project (i.e., the ENVISNOW project) proposed an algorithm to retrieve snow depth by using artificial neural networks and multifrequency radiometric data from satellites.
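The chapter does not detail the ENVISNOW network itself; purely as an illustration of this class of retrieval, a generic feed-forward regressor mapping multifrequency brightness temperatures to snow depth might look as follows (synthetic data and hypothetical feature choices, not the ENVISNOW model):

```python
# Illustrative only: a generic neural-network retrieval of snow depth from
# multifrequency radiometric inputs (synthetic data; not the ENVISNOW model).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical brightness temperatures (K) at three microwave frequencies.
X = rng.uniform(200.0, 280.0, size=(500, 3))
# Synthetic snow depth (cm), loosely tied to a spectral difference, plus noise.
y = 2.0 * (X[:, 0] - X[:, 2]) + rng.normal(0.0, 5.0, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```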
4.4 Targets Served by AI Projects

Target 11.5 appears most frequently (34%) in the dataset as a result of several projects dealing with disaster mitigation and management as a use case, e.g., the Seneka project mentioned above, the Dynamic and Robust Wildfire Risk Prediction System (predicting wildfire risk from weather data, see Salehi et al. (2016)), the MEGAFIRES project (fire monitoring with remote sensing images), the FLOODMAN project (flood monitoring), and the CLEOPATRA project (oil and marine pollution). A share of projects (21%) is concerned with shaping more environmentally friendly cities (SDG Target 11.6) through improved waste management/treatment (e.g., IRBin, AMP Robotics, SITAR) or providing "better" information regarding the quality of the air (e.g., the National Fine Dust Forecast Project or Breeze). Our sample had an equal share of projects dedicated to Target 11.2 (sustainable transport systems, 14%) and Target 11.3 (sustainable human settlement planning, 14%). Past/ongoing AI systems relevant to Target 11.2 have been introduced (or are being explored) to optimize transit in cities (e.g., the Optibus project), to expand the existing transportation infrastructure through the use of autonomous vehicles (e.g., Cybermove, FiveAI), or to provide services that facilitate the management of traffic flow (e.g., DiDi Smart Transportation Brain). The four projects relevant to Target 11.3 in our sample contributed management strategies for desertification (e.g., MEDACTION 4), tools to inform improved human settlement planning (e.g., the UNIST Heatwave Research project), and tools/models that could be used for improved urban planning (e.g., geoland proposed the Observatory Spatial Planning to "put[…] urban growth on the map," and CAMELS proposed models for terrestrial carbon sinks). The remaining three projects contribute to more efficient resource management (i.e., Target 11.b for Ennet Eye and AIxAI) and the unlocking of new possibilities to access urban spaces (i.e., Target 11.7 for Qucit). The connection of the AI projects to the SDG 11 targets is shown in Fig. 2.
4.5 Indicators Supported by AI Projects

The relevance of the AI projects to the SDG 11 indicators is shown in Fig. 3. A glimpse at the figure shows that the data is skewed towards Indicator 11.5.2 (Target 11.5) and has an almost equal share of items for Indicators 11.6.1 and 11.6.2 (Target 11.6). More interesting here is that not all projects connected to a target could be assigned to an indicator. This issue may point to the need to expand the list of indicators to cover (important) aspects of sustainable cities that
Fig. 2 Projects in the dataset and their relevance to the SDG 11 targets
are not covered currently. To inform future work along those lines, we report on why these projects fit a target but do not fit an indicator.

• Ennet Eye (Target 11.b): the proposed system detects "problems such as the unnecessary use of electricity and presents the economic burden and possible solutions to the problems in order to improve energy efficiency".2 As such, it is a useful solution towards sustainable resource usage, but neither of the two Indicators 11.b.1 (number of countries that adopt national disaster risk reduction strategies) and 11.b.2 (proportion of local governments that adopt and implement local disaster risk reduction in line with national disaster risk reduction strategies) would have done justice to that aspect of sustainable electricity usage in the city. This would rather have fallen under SDG 7. Target 7.3 reads: "By 2030, double the global rate of improvement in energy efficiency," and Indicator 7.3.1 reads: "Energy intensity measured in terms of primary energy and GDP." A question this raises is whether some indicators directly relevant to energy efficiency are needed for monitoring progress on sustainable cities.
• AIxAI (Target 11.b): the project allows "real-time area management for efficient resource distribution".3 Examples of "resources" mentioned in the project description include air conditioning, operation of elevators and escalators, and cleaning and personnel costs. The argument stated above regarding indicators related to improved energy efficiency in cities applies.
• Prometea (Target 11.b): the project achieved substantial time-saving gains through the introduction of digitization/AI in the judicial system.4

2. https://www.aiforsdgs.org/all-projects/ennet-eye-powered-energylink
3. https://www.aiforsdgs.org/all-projects/aixai-area-information-x-artificial-intelligence
4. https://medium.com/astec/prometea-artificial-intelligence-in-the-judicial-system-of-argentina4dfbde079c4
Fig. 3 Projects in the dataset and their relevance to the SDG 11 indicators
This is an indirect contribution to the mitigation of climate change. A non-digital process implies that much is done in person, with several rounds of in-person travelling to obtain or provide information (e.g., a missing document for the process). A digital process reduces the cost of accessing and exchanging information and reduces the need for travelling/commuting, to the benefit of the climate. There is currently no indicator covering the (positive/negative) impact of digitization on climate change (e.g., CO2 emissions).
• UNIST Heatwave Research for National Heat Wave Policy (Target 11.3): the project uses artificial intelligence to investigate "detailed thermal characteristics of urban areas".5 Neither Indicator 11.3.1 (ratio of land consumption rate to population growth rate) nor Indicator 11.3.2 (proportion of cities with a direct participation structure of civil society in urban planning) would have reflected the value of the contribution. This suggests that there are more aspects to sustainable urban planning than land consumption/population growth rate and citizen participation alone.
• geoland (Target 11.3): the project provides several geoinformation services for land monitoring.6 The same argument as just above regarding the aspects of sustainable urban planning applies.
• CAMELS (Target 11.3): the project focused on monitoring the terrestrial carbon sink and its causes.7 The argument above regarding the aspects of sustainable urban planning also applies.
5. https://www.aiforsdgs.org/all-projects/unist-heatwave-research-national-heat-wave-policy
6. https://cordis.europa.eu/project/id/502871/results
7. https://cordis.europa.eu/article/id/85263-estimating-europes-carbon-dioxide-fluxes
4.6 Contribution to Citizen-Centric Challenges

Three projects had contributions relevant to the citizen-centric challenges. Breeze provides services in the area of air quality sensors, air quality data, and air quality analytics. It offers citizens the opportunity to participate by becoming sensor hosts. This is an example of a measure to facilitate deeper citizen participation. The MEDACTION 4 project contributed a Public Participation Geographical Information System (PPGIS) featuring neural network components. Participatory stakeholder workshops were also organized to engage the public with the driving forces and effects of land degradation and desertification. This too is an example of a measure to promote deeper citizen participation. Finally, the DAYWATER project produced the Hydropolis app, which has different types of users (i.e., guest, user, manager, administrator) and adapts the level of information provided according to their background.8 This could be seen as a primitive form of adaptivity/personalization. In general, the number of projects in the dataset which can be said to connect to the citizen-centric challenges is relatively low (10%).
8. See http://daywater.in2p3.fr/EN/guide/chapter6.php

5 Discussion

5.1 Key Takeaways

As for the geographic distribution, there are some notable disparities, with Europe over-represented in this dataset and the rest of the world having fewer contributions. This may be a feature of the dataset or a true indication that other countries/continents are doing less regarding AI contributions to more sustainable cities. At this point, we attribute our observations to the fact that half of the data items came from the CORDIS database, which automatically biases the sample towards European cities. The value of this work is to have provided a snapshot that could be extended towards a more comprehensive picture of AI contributions to more sustainable cities worldwide. Regarding the SDG targets and indicators, a noteworthy observation is that some targets did not appear at all in the sample. This is the case for Target 11.1 (safe and affordable housing), Target 11.4 (protection of the world's cultural and natural heritage), Target 11.a (strengthening economic, social and environmental links between urban, peri-urban, and rural areas), and Target 11.c (support least developed countries in building sustainable and resilient buildings utilizing local materials). There are two possible explanations: these areas have indeed received little attention so far (and hence it is worth exploring the opportunities of digitization to provide some
added value), or the observation is an artifact of the bias of the current dataset. Another takeaway from the dataset is that AI systems have indeed contributed to tackling issues of sustainable cities in several ways: waste management, air quality monitoring (and, more broadly, environmental monitoring), disaster response management, and transportation management. Given that AI systems are on the rise, it can be expected that their number in the areas just mentioned and in other areas of sustainable development will increase. Thus, it could be useful to explore ways of documenting best practices for AI implementation, deployment, and use in these different areas. The issue is by no means trivial. There are, as the data have shown, different stakeholders with potentially conflicting interests (e.g., companies that may want to keep what works as a competitive advantage, and researchers who want to make knowledge available to all). Finally, a key takeaway regarding the citizen-centric challenges is that many projects are still working for citizens (i.e., on their behalf) and not with them (i.e., actively involving them). A reason for this may be that AI for social good is still in its infancy. For instance, several projects mentioned in Sect. 3 dealt with disaster mitigation, an endeavor for which the value of involving citizens has been documented in the past (e.g., Zook et al. 2010). It may be conjectured that as AI for social good initiatives mature, the involvement of citizens will become more pronounced.
5.2 Limitations

A general limitation of the current work is that it has been only descriptive and not explanatory (e.g., we can say little at the moment about why the observed state of affairs has come about). We have also mentioned that the dataset is biased towards European cities. The method also has inherent limitations: (1) the assignment of locations to the projects a posteriori was subject to some reasonable assumptions but was still arbitrary to some extent; having those locations assigned a priori in a database would provide a more consistent picture of the geographical distributions; (2) the decision whether or not a project was AI-powered was made based on the keywords from the AI Glossary and the JRC (see the sketch below): it may well be that some authors doing truly valuable AI work have not used these keywords in the descriptions used for the assessment (i.e., CORDIS and AI4SDG); (3) many projects from the CORDIS database were completed before 2015, when the SDG agenda was agreed upon, and thus did not have the SDG goals in mind; and (4) the mapping of the projects to the SDGs was done to the most relevant target only: it would have been equally possible to list a project under several targets. Finally, we were deliberately interested in SDG 11 and mapped the projects to the targets and indicators related to SDG 11. The contributions of some of the projects apply to more than SDG 11, and extending our analysis might unveil interesting patterns about the synergies between SDGs (e.g., which contributions apply to several SDGs simultaneously, and which SDGs most often share contributions).
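As an illustration of limitation (2), the sketch below shows the kind of keyword screening involved; the keyword list and example descriptions are ours and stand in for, rather than reproduce, the AI Glossary and JRC lists:

```python
# Illustrative sketch of the keyword-based screening referred to in limitation (2).
# The keyword list below is a stand-in, not the actual AI Glossary or JRC lists.
import re

AI_KEYWORDS = {"artificial intelligence", "machine learning", "neural network", "deep learning"}


def looks_ai_powered(description: str, keywords=AI_KEYWORDS) -> bool:
    """Return True if any keyword occurs as a whole phrase in the description."""
    text = description.lower()
    return any(re.search(r"\b" + re.escape(k) + r"\b", text) for k in keywords)


# A project doing valuable AI work without these exact terms would be missed:
print(looks_ai_powered("Retrieval of snow depth using multilayer perceptrons"))  # False
print(looks_ai_powered("Flood mapping with a convolutional neural network"))     # True
```

Projects described without such terms would not be retrieved, which is precisely why the keyword-based inclusion criterion is listed here as a limitation.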
5.3 Future Work

As AI for social good is a new area, an interesting question is how to evaluate success. We pondered this question at the beginning of the work but dropped it from the analysis because it was unclear from the documentation of most projects how the solutions were evaluated. In general, the task of empirically assessing the contributions has proven more challenging than expected because of the lack of homogeneous documentation. The literature also points to a lack of reporting on carbon emissions and energy consumption, which itself can adversely affect sustainable development efforts (Henderson et al. 2020). There is thus an opportunity for initiatives that (1) offer an ongoing call for AI4SDG projects and (2) provide a simple, structured template for AI systems' developers to document the value of their work (one possible shape for such a template is sketched below). Such initiatives will be critical in assessing where we are and in advancing the science of AI for social good.
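As one possible shape for such a template (our illustration, not an existing standard), a minimal structured record could capture the dimensions coded in this chapter:

```python
# One possible shape for a structured documentation template (illustrative only;
# the field names mirror the dimensions coded in this chapter and are not an
# existing standard).
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIProjectRecord:
    name: str
    system_type: str            # e.g., "software application", "analysis model", "robot"
    key_beneficiary: str        # e.g., "G/PS", "C/B", "citizens"
    sdg_targets: List[str]      # e.g., ["11.6"]
    sdg_indicators: List[str]   # e.g., ["11.6.2"]
    social_impact: str
    deployment_locations: List[str] = field(default_factory=list)
    evaluation: str = ""        # how the claimed value was assessed
    energy_and_carbon_report: str = ""  # cf. Henderson et al. (2020)


# Example record based on Table 1 (the indicator assignment is illustrative).
record = AIProjectRecord(
    name="Breeze",
    system_type="Software application",
    key_beneficiary="citizens, G/PS",
    sdg_targets=["11.6"],
    sdg_indicators=["11.6.2"],
    social_impact="Better informed air quality monitoring",
)
print(record)
```

Even such a lightweight schema would make fields like evaluation and energy/carbon reporting explicit, which are precisely the items found to be missing in the documentation reviewed here.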
6 Conclusion

In this chapter, we analyzed the contributions of AI systems to cities and illuminated areas of AI4SG that deserve more attention on the road towards more sustainable cities for SDG 11 and beyond. To help understand the current impacts of AI, the analysis presented the geographic distribution of the AI projects, their key beneficiaries, the types of systems developed, and the targets and indicators for which they are relevant, thereby shedding light on the influence of AI on specific targets and indicators, the types of technologies used, their social impact, and the responsible stakeholders. We have learned that AI systems have indeed contributed to advancing sustainable cities in several ways (e.g., waste management, air quality monitoring, disaster response management, transportation management), but many projects are still working for citizens and not with them. This current snapshot of the impact of AI projects on SDG 11 has been limited by the quantity and the quality of the available data on existing AI projects. As we move towards more mature work on AI for social good, initiatives that promote consistent and high-quality documentation of AI projects will be vital for a deeper understanding of AI's impact on more/less sustainable and inclusive cities.

Availability of Data and Materials The list of projects presented is available at https://doi.org/10.6084/m9.figshare.17008366.

Acknowledgments Dr. Shivam Gupta gratefully acknowledges funding provided by the German Federal Ministry for Education and Research (BMBF) for the project "digitainable." Dr. Auriol Degbelo gratefully acknowledges funding from the European Social Fund and the Ministry of Economic Affairs, Innovation, Digitalization and Energy of the State of North Rhine-Westphalia through the SmartLandMaps 2.0 project (EFRE-0400389).
Appendices

Appendix A: Definition of the SDG 11 Targets Found in the Dataset

In alphabetical order
• 11.b Substantially increase the number of cities and human settlements adopting and implementing integrated policies and plans towards inclusion, resource efficiency, mitigation and adaptation to climate change, resilience to disasters, and develop and implement holistic disaster risk management at all levels.
• 11.2 Provide access to safe, affordable, accessible, and sustainable transport systems for all, improving road safety, notably by expanding public transport, with special attention to the needs of those in vulnerable situations, women, children, persons with disabilities, and older persons.
• 11.3 Enhance inclusive and sustainable urbanization and capacity for participatory, integrated, and sustainable human settlement planning and management in all countries.
• 11.5 Significantly reduce the number of deaths and the number of people affected and substantially decrease the direct economic losses relative to global gross domestic product caused by disasters, including water-related disasters, with a focus on protecting the poor and people in vulnerable situations.
• 11.6 Reduce the adverse per capita environmental impact of cities, including by paying special attention to air quality and municipal and other waste management.
• 11.7 Provide universal access to safe, inclusive, and accessible green and public spaces, in particular for women and children, older persons, and persons with disabilities.
Appendix B: Definition of the SDG 11 Indicators Found in the Dataset

In alphabetical order
• 11.2.1 Proportion of population that has convenient access to public transport, by sex, age, and persons with disabilities
• 11.3.2 Proportion of cities with a direct participation structure of civil society in urban planning and management that operate regularly and democratically
• 11.5.1 Number of deaths, missing persons, and directly affected persons attributed to disasters per 100,000 population
• 11.5.2 Direct economic loss in relation to global GDP, damage to critical infrastructure, and number of disruptions to basic services, attributed to disasters
• 11.6.1 Proportion of municipal solid waste collected and managed in controlled facilities out of total municipal waste generated, by cities
• 11.6.2 Annual mean levels of fine particulate matter (e.g., PM2.5 and PM10) in cities (population weighted) • 11.7.1 Average share of the built-up area of cities that is open space for public use for all, by sex, age, and persons with disabilities
References Acuto, M. 2016. Give Cities a Seat at the Top Table. Nature News 537 (7622): 611. Allam, Z., and Z.A. Dhunny. 2019. On Big Data, Artificial Intelligence and Smart Cities. Cities 89: 80–91. Allam, Z., and D.S. Jones. 2020. On the Coronavirus (Covid-19) Outbreak and the Smart City Network: Universal Data Sharing Standards Coupled with Artificial Intelligence (AI) to Benefit Urban Health Monitoring and Management. In Healthcare, vol. 8, 46. Multidisciplinary Digital Publishing Institute, Basel. Antweiler, C. 2019. Local Knowledge Theory and Methods: An Urban Model from Indonesia. In Investigating Local Knowledge, 1–34. Routledge. Aust, H.P. 2019. The Shifting Role of Cities in the Global Climate Change Regime: From Paris to Pittsburgh and Back? Review of European, Comparative & International Environmental Law 28 (1): 57–66. Axinte, L.F., A. Mehmood, T. Marsden, and D. Roep. 2019. Regenerative City-Regions: A New Conceptual Framework. Regional Studies, Regional Science 6 (1): 117–129. Barlacchi, G., M. De Nadai, R. Larcher, A. Casella, C. Chitic, G. Torrisi, F. Antonelli, A. Vespignani, A. Pentland, and B. Lepri. 2015. A Multisource Dataset of Urban Life in the City of Milan and the Province of Trentino. Scientific Data 2 (1): 1–15. Barns, S. 2019. Platform Urbanism: Negotiating Platform Ecosystems in Connected Cities. Springer. Batty, M. 2009. Cities as Complex Systems: Scaling, Interaction, neTworks, Dynamics and Urban Morphologies. In Encyclopedia of Complexity and Systems Science, ed. R. Meyers. New York: Springer. Bibri, S.E. 2021. Data-Driven Smart Sustainable Cities of the Future: An Evidence Synthesis Approach to a Comprehensive State-of-the-Art Literature Review. Sustainable Futures 3: 100047. Chase, A.C. 2020. Ethics of AI: Perpetuating Racial Inequalities in Healthcare Delivery and Patient Outcomes. Voices in Bioethics 6. https://doi.org/10.7916/vib.v6i.5890. Chipofya, M., M. Karamesouti, C. Schultz, and A. Schwering. 2020. Local Domain Models for Land Tenure Documentation and Their Interpretation into the LADM. Land Use Policy 99: 105005. https://doi.org/10.1016/j.landusepol.2020.105005. Cowls, J., A. Tsamados, M. Taddeo, and L. Floridi. 2021a. A Definition, Benchmark and Database of AI for Social Good Initiatives. Nature Machine Intelligence 3 (2): 111–115. https://doi. org/10.1038/s42256-021-00296-0. ———. 2021b. A Definition, Benchmark and Database of AI for Social Good Initiatives. Nature Machine Intelligence 3 (2): 111–115. Croese, S., C. Green, and G. Morgan. 2020. Localizing the Sustainable Development Goals Through the Lens of Urban Resilience: Lessons and Learnings from 100 Resilient Cities and Cape Town. Sustainability 12 (2): 550. Dai, W. 2019. Quantum-Computing with AI & Blockchain: Modelling, Fault Tolerance and Capacity Scheduling. Mathematical and Computer Modelling of Dynamical Systems 25 (6): 523–559.
Degbelo, A., C. Granell, S. Trilles, D. Bhattacharya, S. Casteleyn, and C. Kray. 2016. Opening Up Smart Cities: Citizen-Centric Challenges and Opportunities from GIScience. ISPRS International Journal of Geo-Information 5 (2): 16. https://doi.org/10.3390/ijgi5020016. Degbelo, A., C. Stöcker, K. Kundert, and M. Chipofya. 2021. SmartLandMaps – From Customary Tenure to Land Information Systems. In FIG e-Working Week 2021 – Challenges in a New Reality. Dinh, T.N., and M.T. Thai. 2018. AI and Blockchain: A Disruptive Integration. Computer 51 (9): 48–53. Ferri, M., U. Wehn, L. See, M. Monego, and S. Fritz. 2020. The Value of Citizen Science for Flood Risk Reduction: Cost–Benefit Analysis of a Citizen Observatory in the Brenta-Bacchiglione Catchment. Hydrology and Earth System Sciences 24 (12): 5781–5798. Firouzi, F., B. Farahani, and A. Marinˇsek. 2021. The Convergence and Interplay of Edge, Fog, and Cloud in the AI-Driven Internet of Things (IoT). Information Systems 107: 101840. Fitjar, R.D., and A. Rodríguez-Pose. 2020. Where Cities Fail to Triumph: The Impact of Urban Location and Local Collaboration on Innovation in Norway. Journal of Regional Science 60 (1): 5–32. Fraisl, D., J. Campbell, L. See, U. Wehn, J. Wardlaw, M. Gold, I. Moorthy, R. Arias, J. Piera, J.L. Oliver, et al. 2020. Mapping Citizen Science Contributions to the UN Sustainable Development Goals. Sustainability Science 15 (6): 1735–1751. Fritz, S., L. See, T. Carlson, M.M. Haklay, J.L. Oliver, D. Fraisl, R. Mondardini, M. Brocklehurst, L.A. Shanley, S. Schade, et al. 2019. Citizen Science and the United Nations Sustainable Development Goals. Nature Sustainability 2 (10): 922–930. Galaz, V., M.A. Centeno, P.W. Callahan, A. Causevic, T. Patterson, I. Brass, S. Baum, D. Farber, J. Fischer, D. Garcia, et al. 2021. Artificial Intelligence, Systemic Risks, and Sustainability. Technology in Society 67: 101741. Guan, T., K. Meng, W. Liu, and L. Xue. 2019. Public Attitudes Toward Sustainable Development Goals: Evidence from Five Chinese Cities. Sustainability 11 (20): 5793. Gupta, S., A. Hamzin, and A. Degbelo. 2018a. A Low-Cost Open Hardware System for Collecting Traffic Data Using Wi-Fi Signal Strength. Sensors 18 (11): 3623. https://doi.org/10.3390/ s18113623. Gupta, S., E. Pebesma, J. Mateu, and A. Degbelo. 2018b. Air Quality Monitoring Network Design Optimisation for Robust Land Use Regression Models. Sustainability 10 (5): 1442. https://doi. org/10.3390/su10051442. Gupta, S., S.D. Langhans, S. Domisch, F. Fuso-Nerini, A. Felländer, M. Battaglini, M. Tegmark, and R. Vinuesa. 2021. Assessing Whether Artificial Intelligence is an Enabler or an Inhibitor of Sustainability at Indicator Level. Transportation Engineering 4: 100064. Henderson, P., J. Hu, J. Romoff, E. Brunskill, D. Jurafsky, and J. Pineau. 2020. Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning. Journal of Machine Learning Research 21 (248): 1–43. Hilbert, M. 2016. Big Data for Development: A Review of Promises and Challenges. Development Policy Review 34 (1): 135–174. Hutson, M. 2017. AI Glossary: Artificial Intelligence, in So Many Words. Science 357 (6346): 19–19. https://doi.org/10.1126/science.357.6346.19. Ismagilova, E., L. Hughes, Y.K. Dwivedi, and K.R. Raman. 2019. Smart Cities: Advances in Research—An Information Systems Perspective. International Journal of Information Management 47: 88–100. Israilidis, J., K. Odusanya, and M.U. Mazhar. 2021. 
Exploring Knowledge Management Perspectives in Smart City Research: A Review and Future Research Agenda. International Journal of Information Management 56: 101989. Kirwan, C.G., and F. Zhiyong. 2020. Smart Cities and Artificial Intelligence: Convergent Systems for Planning, Design, and Operations. Elsevier.
Kuffer, M., D.R. Thomson, G. Boo, R. Mahabir, T. Grippa, S. Vanhuysse, R. Engstrom, R. Ndugwa, J. Makau, E. Darin, et al. 2020. The Role of Earth Observation in an Integrated Deprived Area Mapping “System” for Low-to-Middle Income Countries. Remote Sensing 12 (6): 982. Kuffer, M., J. Wang, D.R. Thomson, S. Georganos, A. Abascal, M. Owusu, and S. Vanhuysse. 2021. Spatial Information Gaps on Deprived Urban Areas (Slums) in Low-and-Middle- Income-Countries: A User-Centered Approach. Urban Science 5 (4): 72. Li, L., X. Xia, B. Chen, and L. Sun. 2018. Public Participation in Achieving Sustainable Development Goals in China: Evidence from the Practice of Air Pollution Control. Journal of Cleaner Production 201: 499–506. Magni, G. 2017. Indigenous Knowledge and Implications for the Sustainable Development Agenda. European Journal of Education 52 (4): 437–447. Majumdar, S., M.M. Subhani, B. Roullier, A. Anjum, and R. Zhu. 2021. Congestion Prediction for Smart Sustainable Cities Using IoT and Machine Learning Approaches. Sustainable Cities and Society 64: 102500. Makondo, C.C., and D.S. Thomas. 2018. Climate Change Adaptation: Linking Indigenous Knowledge with Western Science for Effective Adaptation. Environmental Science & Policy 88: 83–91. Martens, J. 2019. Revisiting the Hardware of Sustainable Development. Reshaping 11–19. Micheletti, M., D. Stolle, and D. Berlin. 2014. Sustainable Citizenship: The Role of Citizens and Consumers as Agents of the Environmental State. In State and Environment: The Comparative Study of Environmental Governance, 203–236. Munsaka, E., and E. Dube. 2018. The Contribution of Indigenous Knowledge to Disaster Risk Reduction Activities in Zimbabwe: A Big Call to Practitioners. Jàmbá: Journal of Disaster Risk Studies 10 (1): 1–8. Newman, G., T. Shi, Z. Yao, D. Li, G. Sansom, K. Kirsch, G. Casillas, and J. Horney. 2020. Citizen Science-Informed Community Master Planning: Land Use and Built Environment Changes to Increase Flood Resilience and Decrease Contaminant Exposure. International Journal of Environmental Research and Public Health 17 (2): 486. Nilsson, M., D. Griggs, and M. Visbeck. 2016. Policy: Map the Interactions Between Sustainable Development Goals. Nature News 534 (7607): 320. Palomares, I., E. Martínez-Cámara, R. Montes, P. García-Moral, M. Chiachio, J. Chiachio, S. Alonso, F.J. Melero, D. Molina, B. Fernández, et al. 2021. A Panoramic View and Swot Analysis of Artificial Intelligence for Achieving the Sustainable Development Goals by 2030: Progress and Prospects. Applied Intelligence 51: 6497–6527. Pekmezovic, A. 2019. The UN and Goal Setting: From the MDGs to the SDGs. In Sustainable Development Goals: Harnessing Business to Achieve the SDGs Through Finance, Technology, and Law Reform, 17–35. Rabah, K. 2018. Convergence of AI, IoT, Big Data and Blockchain: A Review. The Lake Institute Journal 1 (1): 1–18. Rajan, K., and A. Saffiotti. 2017. Towards a Science of Integrated AI and Robotics. Artificial Intelligence 247: 1–9. Reddick, C.G., R. Enriquez, R.J. Harris, and B. Sharma. 2020. Determinants of Broadband Access and Affordability: An Analysis of a Community Survey on the Digital Divide. Cities 106: 102904. Rogers, B., G. Dunn, K. Hammer, W. Novalia, F. de Haan, L. Brown, R. Brown, S. Lloyd, C. Urich, T. Wong, et al. 2020. Water Sensitive Cities Index: A Diagnostic Tool to Assess Water Sensitivity and Guide Management Actions. Water Research 186: 116411. Rubio-Mozos, E., F.E. García-Muiña, and L. Fuentes-Moraleda. 2019. 
Rethinking 21st-Century Businesses: An Approach to Fourth Sector SMEs in Their Transition to a Sustainable Model Committed to SDGs. Sustainability 11 (20): 5569. Salehi, M., L.I. Rusu, T.M. Lynar, and A. Phan. 2016. Dynamic and Robust Wildfire Risk Prediction System: An Unsupervised Approach. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2016), San Francisco, California,
USA, ed. B. Krishnapuram, M. Shah, A. J. Smola, C. C. Aggarwal, D. Shen, and R. Rastogi, 245–254. ACM. Samoili, S., M. López-Cobo, E. Gómez, G. De Prato, F. Martínez-Plumed, and B. Delipetrev. 2020. AI Watch Defining Artificial Intelligence. Luxembourg: Publications Office of the European Union. Saner, R., L. Yiu, and M. Nguyen. 2020. Monitoring the SDGs: Digital and Social Technologies to Ensure Citizen Participation, Inclusiveness and Transparency. Development Policy Review 38 (4): 483–500. Șerban, A.C., and M.D. Lytras. 2020. Artificial Intelligence for Smart Renewable Energy Sector in Europe—Smart Energy Infrastructures for Next Generation Smart Cities. IEEE Access 8: 77364–77377. Shahid, N., M.A. Shah, A. Khan, C. Maple, and G. Jeon. 2021. Towards Greener Smart Cities and Road Traffic Forecasting Using Air Pollution Data. Sustainable Cities and Society 72: 103062. Sharda, S., M. Singh, and K. Sharma. 2021. Demand Side Management Through Load Shifting in IoT Based Hems: Overview, Challenges and Opportunities. Sustainable Cities and Society 65: 102517. Solecki, W., C. Rosenzweig, S. Dhakal, D. Roberts, A.S. Barau, S. Schultz, and D. Urge-Vorsatz. 2018. City Transformations in a 1.5 c Warmer World. Nature Climate Change 8 (3): 177–181. Sougkakis, V., K. Lymperopoulos, N. Nikolopoulos, N. Margaritis, P. Giourka, and K. Angelakoglou. 2020. An Investigation on the Feasibility of Near-Zero and Positive Energy Communities in the Greek Context. Smart Cities 3 (2): 362–384. Taddeo, M., A. Tsamados, J. Cowls, and L. Floridi. 2021. Artificial Intelligence and the Climate Emergency: Opportunities, Challenges, and Recommendations. One Earth 4 (6): 776–779. Tajunisa, M., L. Sadath, and R.S. Nair. 2021. Nanotechnology and Artificial Intelligence for Precision Medicine in Oncology, Artificial Intelligence, 103–122. CRC Press. Thinyane, M. 2018. Engaging citizens for sustainable development: a data perspective. United Nations University Institute on Computing and Society. Tomašev, N., J. Cornebise, F. Hutter, S. Mohamed, A. Picciariello, B. Connelly, D.C. Belgrave, D. Ezer, F.C. van der Haert, F. Mugisha, et al. 2020. AI for Social Good: Unlocking the Opportunity for Positive Impact. Nature Communications 11 (1): 1–6. UNDESA Statistics Division. 2021. SDG Indicators Metadata Repository. https://unstats.un.org/ sdgs/metadata/?Text= & Goal=11 & Target=. Accessed 30 Sept 2021. United Nations. 2015. Sustainable Development Goals: 17 Goals to Transform Our World. United Nations [Online]. Available: https://www.un.org/sustainabledevelopment/energy/. Accessed 04 June 2018. United Nations Department of Economic and Social Affairs (UNDESA). 2015. Transforming Our World: The 2030 Agenda for Sustainable Development. van Wynsberghe, A. 2021. Sustainable AI: AI for Sustainability and the Sustainability of AI. AI and Ethics 1(3), 213–218. Villagra, A., E. Alba, and G. Luque. 2020. A Better Understanding on Traffic Light Scheduling: New Cellular Gas and New In-Depth Analysis of Solutions. Journal of Computational Science 41: 101085. Vinuesa, R., H. Azizpour, I. Leite, M. Balaam, V. Dignum, S. Domisch, A. Felländer, S.D. Langhans, M. Tegmark, and F.F. Nerini. 2020. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nature Communications 11 (1): 1–10. Walker, J., A. Pekmezovic, and G. Walker. 2019. Sustainable Development Goals: Harnessing Business to Achieve the SDGs Through Finance, Technology and Law Reform. Wiley. Yatoo, S.A., P. Sahu, M.H. Kalubarme, and B.B. Kansara. 2020. 
Monitoring Land Use Changes and Its Future Prospects Using Cellular Automata Simulation and Artificial Neural Network for Ahmedabad City, India. GeoJournal 87, 765–786. Yigitcanlar, T., M. Kamruzzaman, M. Foth, J. Sabatini-Marques, E. da Costa, and G. Ioppolo. 2019. Can Cities Become Smart Without Being Sustainable? A Systematic Review of the Literature. Sustainable Cities and Society 45: 348–365.
Zheng, C., J. Yuan, L. Zhu, Y. Zhang, and Q. Shao. 2020. From Digital to Sustainable: A Scientometric Review of Smart City Literature Between 1990 and 2019. Journal of Cleaner Production 258: 120689. Zook, M., M. Graham, T. Shelton, and S. Gorman. 2010. Volunteered Geographic Information and Crowdsourcing Disaster Relief: A Case Study of the Haitian Earthquake. World Medical Health Policy 2 (2): 2. https://doi.org/10.2202/1948-4682.1069.
Index
A Abdrisaev, B., 270–284 Acemoglu, D., 186 Adeniyi, A.A., 139 Adeshina, S.A., 14, 16, 19, 133–141 Africa, 5, 14, 16, 36, 38, 51, 101, 103–105, 109, 110, 134–141, 157, 208, 209, 222, 381, 414, 417, 418 Agarwal, S., 380–395 Agrast, M.D., 140 Agricultural management, 21, 381 Agriculture, 6, 19–21, 27, 36, 37, 40, 45, 50, 73, 80, 100, 103, 152, 159, 161, 172, 198, 235, 273, 278, 283, 380–395, 402–405, 408, 413, 415, 417, 418, 427 AI for climate change, 408–409 AI for SDGs, 10–14, 16, 18, 21, 23–25, 28, 29, 44–60, 298–301, 400, 463–465 AI for social good, 4, 5, 10, 14, 21, 22, 24, 25, 44, 45, 147, 162, 232, 235–239, 245, 339, 477, 478 AI in community, 19, 44–60 Aina, O., 133–141 AI4SDG, 468–469, 477, 478 Algorithmic art, 25, 329, 332, 335, 336, 339–342 Al Tamimi, Y., 256 Amran, A., 221 Amsterdam, G., 116 Analytical Hierarchy Process, 18, 366, 370 Andrenelli, A., 194
Aristotle, 99, 121 Artificial intelligence (AI), 9, 35, 44, 66, 100, 117, 135, 167, 185, 204, 232, 254, 270, 292, 328, 349, 367, 381, 400, 424, 462 Artificial neural network (ANN), 6, 148, 295, 391, 425–427, 429, 431, 433–437, 471 Ayaz, M., 384 Azizpour, H., 37, 66–85 B Bakker, C.A., 352 Ban, Y., 66–85 Bayesian network (BN), 149, 150 Bayes, T., 149 Ben Ayed, R., 384 Berg, M.R. Van den, 352 Bezos, J., 40 Big Tech, 6, 11, 15–19, 44, 45, 47, 49–52, 57–59, 220, 238, 243, 244 Big Tech corporations, 16, 50, 232, 238, 239, 241, 243–245 Bioethics, 122, 123 Blaiklock, A., 28 Boserup, E., 100 Botero, A., 140 Boutilier, R.G., 242 Braidotti, R., 341 Bressanelli, G., 348 Brussels, 168
486 Burch, K., 278 Business model, 5, 6, 10, 15, 17, 18, 37, 49, 204, 232, 238–245, 331, 348–355, 357–359, 366 Buston, O., 69 C Capasso, M., 16, 17, 233 Carney, M., 40 Chan, Y.J., 146–162 Cheah, S.-M., 146–162 Chomanski, B., 16 Cicero, M.T., 121 Circular economy, 18, 81, 185, 220, 349, 359 Cirillo, 296 Citizen participation, 77, 465, 475, 476 Climate change, 5, 10, 26, 38–41, 45, 72, 79–81, 83, 85, 126, 135, 159–161, 179, 205, 207, 210, 220, 222, 233, 273, 281, 282, 348, 366, 375, 380, 385, 387, 390, 402, 404–408, 410, 418, 442, 462, 463, 465–467, 474, 479 Collective intelligence, 38, 39 Conrad Foley, J., 400–419 Cowls, J., 462 Crawford, K., 51 D Daño, N., 241 Darwin, C., 427 Decision science, 366 Degbelo, A., 26, 462–478 DeGhetto, K., 134 del Rio, B., 12, 14, 24 del Rio, V.B., 168–180 De, R.K., 431 Digital archives, 260 Digital self-determination, 19, 20, 54, 55, 57, 58, 60 Dignity, 20, 26, 48, 57, 60, 116, 117, 120–124, 127–129, 216, 220, 263 Discrimination, 19, 47, 48, 51, 54, 172, 176, 261, 299, 464 Doughnut economics, 116–120, 123, 124, 128, 129 Dziri, J., 27, 305–323 E Eccles, R.G., 211 Efremova, N., 400–419
Index Eivazi, H., 66–85 Empowerment, 54, 55, 60, 331 Environment, 4–6, 49, 57, 66, 68, 72, 73, 75–78, 84, 118, 123, 127, 154, 159, 169, 174, 206, 209, 214, 220, 221, 234, 235, 241, 254, 255, 263, 265, 270, 272, 274–276, 283, 295, 300, 306, 329, 331, 334, 338, 341, 348, 350, 352, 359, 368, 376, 383, 395, 404, 408, 427, 463, 466, 467 Environment, society, and government (ESG), 6, 17, 25, 205, 207, 210–213, 219, 221, 348 Ethical AI, 25, 335, 340 Ethics, 3–7, 11, 14, 21, 44, 52, 59, 70, 71, 111, 120–123, 127, 129, 169, 178, 204, 212, 213, 215, 276, 282, 283, 298, 331, 332, 335, 341, 376, 467 Experimental museology, 329, 333, 335 Ezzedine, T., 305–323 F Fang, H., 66–85 Fan, Z., 185, 196 Farooq, M., 385 Farrell, M., 193 Feature selection, 431 Fenech, M.E., 69 Financial inclusion, 17, 146–148, 157, 158, 162, 196, 204, 205, 207, 208, 210, 213–216, 218, 219, 222 Findlay, M., 5, 19, 44–60 Fink, L., 40 Fintech, 204, 206–211, 214, 216–219, 221, 222 Finucan, L., 140 Floridi, L., 3–7, 10–29, 232, 235, 261, 331 Fomunyam, K.G., 109 Food productivity, 21, 160 Forti, M., 23, 254–265 Fourth Industrial Revolution, 184, 189, 199, 205 Fraisl, D., 465 Fuso Nerini, F., 66–85 G García-Micó, T.G., 27, 292–301 Gates, B., 40 Gay, D., 256 Gebru, T., 328 Gender, 6, 48, 70, 107, 120, 127, 155, 172–176, 204, 215, 235, 261, 263,
Index 277, 279, 280, 292–301, 332, 339, 386, 391, 425, 442, 447 Genetic algorithm (GA), 425, 427–428, 433–435 Ghoreishi, M., 348–359 Glenn, L.M., 270–284 Global Citizenship Education (GCED), 146–148 Global pandemic, 47 Global reports, 424, 429, 435, 438 Golzar, F., 66–85 Good AI society, 45 Good health, 5, 27, 47, 66, 69, 70, 84, 134–136, 175, 292, 298, 425, 446 Goralski, M.A., 98–111 Governance, 5, 6, 10–17, 19, 22, 25, 28, 36, 37, 45, 53, 57, 85, 134, 139, 151, 185, 188, 189, 204–223, 239, 243, 244, 259, 274, 333, 466 Gray, J.R., 134 Greene, D., 240 Griggs, D., 205 Güemes, A., 77 Gupta, K., 442 Gupta, S., 26, 66–85, 462–478 Gwagwa, A., 50 H Hall, S., 255 Hamilton, M., 337 Hammarskjöld, D., 330 Hanana, M., 384 Happonen, A., 350, 351 Haraway, D.J., 341 Health, 19, 20, 22, 23, 27, 36, 37, 47, 66–70, 73, 80, 82, 85, 101–106, 109, 111, 118, 120, 135–137, 139–141, 146, 147, 151–156, 159, 171, 198, 205, 214, 216, 238, 244, 273, 292–301, 306, 335, 355, 356, 384, 405, 408, 409, 414, 418, 425, 452, 458, 462–464, 466 Herweijer, C., 193 Hoffman, R.L. Dr., 102 Holland, J., 427 Honkela, T., 367 Honnibal, M., 450 Hosseini, Z., 76 How, M.-L., 25, 146–162
487 I Identity, 6, 23, 59, 205, 254–265, 293, 332, 333, 335 Illingworth, S.J., 76 Impact assessment, 11, 17, 25–28, 195, 198, 223, 366, 368, 369, 375, 376 Inclusive student-engaged learning, 268–284 Industry 4.0, 18, 349, 350, 359 Inequality, 5, 15, 19, 24, 25, 27, 44–60, 69, 70, 85, 98–111, 119, 146, 172–176, 179, 184, 213, 220, 223, 239, 254, 262, 271, 277, 292, 298, 300–301, 330, 334, 337, 391, 394, 425, 443, 463, 464 Innovation, 15, 25, 27, 36–38, 40, 45, 51–55, 98, 100–103, 105, 108–111, 135, 141, 146, 168, 173, 177, 180, 187, 195, 196, 204–207, 220, 222, 223, 233, 234, 239, 242, 271, 277, 292, 298, 300–301, 329, 334, 336, 348, 349, 354, 358, 402, 425, 435, 437, 463, 466 Insights, O., 138 Israilidis, J., 462 J Jaynes, T.L., 20, 270–284 Jean, N., 75 Jensen, F.V., 149 Jo, E.S., 328 Jordan, T., 46 Joseph, D., 100 Joseph, L., 243 K Kant, I., 121 Kaplan, D., 149 Karimi, R., 400–419 Karo, E., 366–377 Kashmere, B., 328 Khandelwal, 381 Kheradmand, E., 442 Khor, A.C., 146–162 Khulief, Y.A., 308 Kiggundu, M.N., 134 Kleinman, Z., 102 Koditala, N.K., 307 Korb, K.B., 149 Kramer, O., 427
488 Kwet, M., 50 Kwok, R., 38 L Larose, D.T., 431 Laukyte, M., 292–301 Leak detection, 306, 308, 320, 322, 323 Leite, I., 37, 66–85 Lejarraga, I., 192 Levy, R., 149 Liiv, I., 17, 366–377 Lumley, J.L., 76 Lung, N., 102 M Mallor, F., 66–85 Malthus, T.R., 99, 100 Manera, A., 186 Marr, B., 272 Marshall, Z., 293 Mas, F., 382 Matus, K., 188 Mazzi, F., 3–7, 10–29 McCarthy, J., 100 McGregor, A.J., 297 McKenzie, B., 168–180 Melsion, G.I., 66–85 Migrants, 28, 218, 255, 259, 262 Mirghaderi, S.-H., 424–438 Mishra, J.L., 353 Monitoring, 16, 18, 21, 23, 25–28, 40, 45, 67, 71, 77, 81, 85, 194, 208, 216, 243, 273, 300, 306–308, 310, 311, 348, 350, 355, 356, 382–384, 400, 405, 407–410, 414, 417–419, 462, 463, 465, 468, 469, 471, 473–476, 478 Montani, I., 450 Morin, E., 340 Mountain communities, 6, 20, 270–284 Mulgan, G., 15, 35–41 Mulugetta, Y., 190 Murphy, K., 70 Museums, 6, 25, 328–341 N Nagaraju, U., 380–395 Naicker, S., 135 Named entity recognition (NER), 445–451, 453, 454, 456, 457
Index Natural language processing (NLP), 16, 38, 138, 148, 204, 211, 213, 300, 445, 449–452, 467 Natural language understanding, 450 New technologies, 13, 39, 49, 60, 100, 105, 111, 175, 177, 179, 184–188, 191, 195, 199, 235, 270, 274, 331 Ng, A., 135 Nicholson, A.E., 149 Non-discrimination, 49, 261 Non-traditional students, 280, 283 Nussbaum, M.C., 122 O Ong, L.M., 44–60 Open data, 59, 191, 193, 444, 470 P Pal, N.K., 431 Pal, S.K., 431 Pashang, S., 204–223 Pasquale, F., 239 Peace and justice, 120, 127, 140 Peras, M., 442 Person, 22, 176, 177, 211, 216, 254–257, 260, 262, 292, 295, 330, 364, 376, 402, 479, 480 Plato, 99 Ponce, A., 140 Poverty alleviation, 12, 98–111 Power, 13, 16, 18, 20, 40, 41, 44–60, 83, 98, 99, 101, 107, 125, 140, 174, 207, 212, 213, 220, 222, 234, 239, 241, 242, 244, 256, 261, 307, 328, 329, 333–335, 367, 377, 395, 406 Prestes, E., 380–395 Prifti, K., 115–129 Public policy, 180 Public-private partnerships, 41 Pynnönen, M., 348–359 Q Quality, 18–21, 24, 26–28, 48, 56, 73, 77, 83, 85, 104–110, 139, 141, 147, 151, 152, 155, 157, 160, 175, 195, 198, 206, 221, 277, 306, 307, 310, 313, 314, 316, 320, 323, 348, 353, 381, 382, 384, 385, 390, 393, 402, 406,
Index
489 407, 425, 431, 437, 443, 448, 456, 458, 462, 463, 465, 468, 469, 471, 473, 475, 476, 478, 479
R Rajesh, N., 140 Rangeland monitoring, 6, 400 Ravallion, M., 99 Raworth, K., 123 Restrepo, P., 186 Reynolds, R.G., 329 Riley, P., 255, 256, 261 Rubio, V., 382 S Saaty, T., 17, 366 Saaty, T.L., 369 Sætra, H.S., 240 Say, E.M.P., 146–162 Schulze, E., 244 SDG 2, 425 SDG 11, 425, 462–478 Self, 255, 256, 261 Serafeim, G., 211 Sharma, 18 Sierra, E.B., 140 Sierra, L.A., 10 Singha, N., 380–395 Sirmacek, B., 26, 66–85 Sitra, 348 Smith, K., 66–85 Smith, M., 240 Snower, D., 193 Snow, J., 39 Social license, 6, 232–245 Soe, R.-M., 366–377 Sotelo, J., 185, 196 Sottoriva, A. Dr., 293 Sperotto, A., 149 Spezzatti, A., 442–458 Stephenson, M., 13, 184–199 Stilgoe, J., 52 Straub, V., 41 Stroehle, J., 211 Strubell, E., 300 Stylianou-Lambert, T., 333, 334 Sustainability, 3, 6, 10, 15, 16, 25, 26, 28, 44, 49, 52–56, 60, 78, 100, 111, 115, 120, 146, 148, 151–154, 159–162, 174, 178, 186, 189, 195, 196, 198, 205, 206, 211, 212, 220, 221, 223, 239, 241–243, 245, 300, 301,
328–342, 348, 349, 352, 353, 357, 385, 389, 394, 424, 425, 427, 444, 445, 457, 464, 465 Sustainable cities, 7, 26, 66, 75, 84, 171, 233, 277, 402, 425, 462–478 Sustainable development, 4, 27, 44–46, 48, 50, 53, 60, 69, 71, 83, 98, 110, 115, 116, 134, 141, 146, 148, 151, 152, 162, 171–174, 184–199, 204–207, 212, 213, 220, 222, 223, 233, 240, 245, 270–284, 328–332, 334, 335, 340, 341, 348, 388, 400, 424, 429, 437, 442, 457, 462–466, 476, 478 Sustainable Development Goals (SDGs), 4, 9, 35, 44, 66, 98, 116, 133, 168, 184, 204, 232, 254, 270, 292, 331, 348, 366, 383, 400, 424, 462 Sustainable development goals index (SDGI), 6, 151–154, 424–426, 429, 431, 432, 435, 437, 438 Sustainable finance, 16–19, 204–223 Sustainable Technology Board (STB), 13, 184, 186–189, 195, 196, 199 SusTech, 184–199 SusTech solutions, 184–187, 190, 191, 193–199 T Taddeo, M., 10–29 Tahhan, A.S., 297 Tan, T.K., 98–111 Taurino, G., 328–342 Techno-colonialism, 49–52 Teplov, R., 348–359 Thomson, I., 242 Timpson, W.M., 279 Treves, L., 348–359 Twomey, P., 193 U Umbrello, S., 16, 17, 236 Urbinati, A., 353 V Value creation, 18, 348, 349, 352–356, 358, 359 van de Poel, I., 236 Vapnik, V.N., 316 Victor, D.G., 40 Vinuesa, R., 10, 37, 66–85, 240
490 W Waal, A. de, 134 Water, 6, 21, 27, 73, 74, 85, 104, 119, 120, 125, 135, 152, 161, 178, 272, 273, 277, 305–323, 356, 380, 381, 383, 387–389, 393, 402, 404–408, 410, 413, 415, 425, 446, 462, 464, 469 Weber, O., 204–223 Whittaker, M., 49 Williams, C., 28 Wireless sensor network (WSN), 306, 307, 309, 310, 312 Wyatt, L.G., 279
Index Y Yang, R., 108 Yarime, M., 184–199 Yaya, S., 103 Yoo, Y., 349 Z Zaminpeyma, R., 442 Zhan, J., 184–199 Ziesche, S., 380–395 Zuboff, S., 49