Synthesis Lectures on Computer Science
Carlo Lipizzi
Societal Impacts of Artificial Intelligence and Machine Learning
Synthesis Lectures on Computer Science
The series publishes short books on general computer science topics that will appeal to advanced students, researchers, and practitioners in a variety of areas within computer science.
Carlo Lipizzi Stevens Institute of Technology Hoboken, NJ, USA
ISSN 1932-1228  ISSN 1932-1686 (electronic)
Synthesis Lectures on Computer Science
ISBN 978-3-031-53746-2  ISBN 978-3-031-53747-9 (eBook)
https://doi.org/10.1007/978-3-031-53747-9

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.
Source: Base: Wikimedia Commons; Description: Surf_0952; Source: Surf_0952; Author: Bengt Nyman from Vaxholm, Sweden; Date: 26 February 2008, 17:00; licensed under the terms of the cc-by-2.0 Edited using Picsart
Preface
Panta rei: “everything flows”. The world is in a constant state of change. Our body is changing, our society is changing, and the technology we use is changing in an evolutionary way.

I consider myself a “mindful technology early adopter”: I don’t adopt technologies for their novelty, but I like experimenting with what is coming. My Commodore 64 was a very cool tool in the early 80s. In the early 90s came the web, with its promise to be the repository of the world, a modern version of the ancient Library of Alexandria. At pretty much the same time came cellular phones, now heroes and villains in our lives. Then social media, streaming services, and online collaboration. Now it is “Artificial Intelligence” and Machine Learning that are finding a place in our lives and in society.

Panta rei: “Nothing is lost, nothing is created, everything is transformed” (Antoine Lavoisier). We use machines with some form of “logic” every day. Most of the time we do not call it “Artificial Intelligence”, just good software. Autopilots control airplanes for a large part of their operations. Our cars use a large amount of embedded logic to run. Over time, we have added functionalities to serve us better. The current AI/ML is a layer of technology built on top of what was already there, and it is not an endpoint but a stepping stone toward the next layer. Research, the foundation of new technology, is always built on top of what has already been done in that area. Once the goal is reached, the topic is ready for anyone who will take it to the next stage.

The coming new breed of “intelligent” systems will likely induce radical changes in some areas, but those new elements may not look as sci-fi-like as expected: a washer or a modern refrigerator is a robot without the legs and arms robots have in some movies. In most cases, AI/ML will be injected into existing systems, providing new functionalities. That may not be a minor improvement; it may generate radical changes in some areas.
What we see in recent times is an acceleration of the technological evolutionary process, primarily because of the increased availability of data and computing power.
There would be no Machine Learning without data, and there wouldn’t be enough data without the Web. There would be no ML without the current computing power, the power of the languages we use, or the foundational algorithms we developed decades ago. But here we are, with the first mass deployment of an AI/ML tool: ChatGPT, released in late 2022, had reached over 100 million users by early July 2023, just about seven months later. Fueled by the great expectations created by a surge in the number of movies, TV shows, and articles, people are increasingly wondering whether this is going to change their lives and the fabric of our society; many believe it will, perhaps even starting tomorrow.

As AI continues to evolve, it is essential to understand its capabilities, limitations, and ethical implications. AI systems are constantly becoming more advanced, capable of performing complex tasks, analyzing vast amounts of data, and making decisions, sometimes on our behalf. However, this rapid development also raises important questions about privacy, fairness, transparency, and the potential consequences of unleashing AI into our everyday lives.

This book focuses on some vertical areas, such as health care, transportation, finance, and education, exploring how AI impacts these industries. We examine the promises and challenges of AI, offering insights into its potential to enhance our lives, improve efficiency, and address societal issues. From personalized medicine to autonomous vehicles, from robo-advising to smart cities, we discover the many ways AI is impacting and will impact our world.

This book is not just about the advancements of AI. It is also a call for responsible and ethical AI deployment. As we embrace the potential of AI, we must navigate the complexities of regulation, job displacement, bias, and privacy to ensure that AI serves humanity’s best interests.
Through sci-fi-like short stories, I imagine the applications of AI/ML in specific areas, and then I fact-check to see how close or far we are from these fictional scenarios. In the conclusions, I analyze what AI/ML can do for specific types of users: students, professionals, consumers, and executives, exploring the opportunities and challenges that lie ahead. This book has no beginning and no end: it serves as a snapshot of the current moment, providing some practical insights, and it can also act as a foundation for understanding the future evolution of these technologies. Panta rei.

Hoboken, USA
Carlo Lipizzi
My Journey into the World of AI and Society
Writing this book was both a challenge and a learning experience for me. The first lesson learned was to avoid writing a book on a complex topic as a single author unless you have a significant amount of time to dedicate to it. I thought it would take three. I was plain wrong. And the longer it takes, the more things happen in an area where obsolescence is measured in days. The more things happen, the more things you want to incorporate. It could easily become a never-ending task.

The second issue was how to create something worth reading in a field where so much has already been written: there were over 250,000 publications on AI in 2022, including journal articles, conference papers, other magazines, and books (intelligence, 2022). I tried to make the book as “personal” as possible, bringing my point of view, based on my experience in the field from the “second wave” of AI in the 80s, through the “AI winter”, and up to my more recent role in academia, researching and teaching in this area.

I have also always loved reading and writing sci-fi-like stories, so I added future fictional scenarios for the different applications of AI. Since this is not a sci-fi book, I then added a fact check for each story: how much of that scenario is available today or in the near future? I then asked myself how people could use these emerging AI tools in their day-to-day lives, and I created scenarios: what you can get if you are a student, a professional, or another type of user.

Overall, I tried to give the reader elements to understand the technology, its impacts on our constantly evolving society, the current and potential future applications, as well as the opportunities for categories of individuals. The more I dove into the different subtopics, the more I realized the amount of detail I needed to support the narrative with facts.
I didn’t have a research assistant for this book, but I had good support from old and new search technologies, from the good old Google to ChatGPT, Bard, and Picsart. ChatGPT and Bard are great sidekicks for bouncing ideas around and framing stories. For example: how impactful was the Industrial Revolution, and how can I compare it with the “AI Revolution”? I needed facts, metrics to compare. Unfortunately, ChatGPT and Bard are not yet quite reliable, and you need to cross-reference their answers, because some points are wholly made up. Missing or incorrect references were another reason to review with Google before using the answers. Nevertheless, they were quite a help.
It is fascinating how image editors can now benefit from AI. Tools like Picsart make traditional image-editing tasks very easy (like removing backgrounds), but they also allow AI-driven image modifications, for example generating new backgrounds based on a combination of free text and given keywords. For the record, I did not receive any sponsorship from Picsart; I paid for a full subscription to use it. The image selection took me more time than I expected, with semantic and legal aspects I initially underestimated. You will find details on the process at the end of the book.

A special thanks to one of my students and a member of my research team, Naveen Mathews Renji, who passionately reviewed this book. He was also one of the pillars in the development of a virtual tutor that we created for my school, using Large Language Models (“SSE_GPT”...).

This book could be a platform for dialogue, exploration, and reflection. It is rooted in my passion for technology, my often-frustrated aspiration to understand its impact on our lives, and my desire to engage readers in conversations about the future we are shaping. You will catch a glimpse of my “Roman spirit”: the skepticism of people who have experienced, directly or indirectly, so many “new” events and changes that they deeply understand there is constantly a change, an evolution, a new development. Ancient Greek wisdom said “panta rei”: “everything flows”.
Contents

1  Introduction
   1.1  Overview of the Book
   1.2  Importance and Timeliness of the Topic
   1.3  Structure of the Book

2  My Perspective
   2.1  The AI Hype and the Market
   2.2  How “Intelligent” is What We Call AI Today?
   2.3  Is AI/ML a “Revolution”?
   2.4  Information Diffusion Growth Rate
   2.5  Sensationalism

3  AI and Machine Learning
   3.1  What is AI
   3.2  A Brief History of Artificial Intelligence and Machine Learning
   3.3  The Mechanic of AI—Cognition and Intelligence
   3.4  AI as Science and Technology + Data
        3.4.1  The Science
        3.4.2  The Technology
        3.4.3  The Data
   3.5  Brain and Mind

4  Science, Technology and Society
   4.1  The Golden Triangle
   4.2  A “Datafied” Society
   4.3  An Evolving Society

5  AI and Society Today: Friend or Foe
   5.1  Is AI Today a Game Changer for the Society?
   5.2  Where is the Impact
   5.3  Case Study 1: The Friendly AI
   5.4  Case Study 2: The Unfriendly AI
   5.5  Fact Check 1: “Smart” Cities

6  The Impact on People
   6.1  Job Creation
   6.2  Job Displacement
   6.3  Economic Inequality

7  Regulating AI
   7.1  Privacy and Security
   7.2  AI and Government
   7.3  Bias and Discrimination
   7.4  Explainable AI

8  Impacts on Specific Industries
   8.1  Method
   8.2  Healthcare
        8.2.1  Healthcare and Technology
        8.2.2  Leveraging AI/ML in Healthcare
        8.2.3  Potential Negative Impacts of AI/ML in Healthcare
        8.2.4  Case Study 3: AI/ML in Healthcare
        8.2.5  Fact Check 2: AI in Healthcare
   8.3  Transportation
        8.3.1  Transportation and Technology
        8.3.2  Leveraging AI/ML in Transportation
        8.3.3  Potential Negative Impacts of AI/ML in Transportation
        8.3.4  Case Study 4: AI/ML in Transportation
        8.3.5  Fact Check 3: AI in Transportation
   8.4  Finance
        8.4.1  Finance and Technology—Fintech
        8.4.2  Leveraging AI/ML in Finance
        8.4.3  Potential Negative Impacts of AI/ML in Finance
        8.4.4  Case Study 5: AI/ML in Finance
        8.4.5  Fact Check 4: AI in Finance
   8.5  Education
        8.5.1  Education and Technology
        8.5.2  Leveraging AI/ML in Education
        8.5.3  Case Study 6: AI/ML in Education
        8.5.4  Fact Check 5: AI in Education

9  The Horizon for AI in Our Society
   9.1  Technology Trends
        9.1.1  Quantum Computing
        9.1.2  Distributed Intelligence
        9.1.3  Neuromorphic Computing
   9.2  Case Study 7: Using New Technologies
   9.3  Fact Check 6: Using New Technologies

10  Application Trends
    10.1  Method
    10.2  AI for Improving Global Living Conditions
    10.3  AI for Cost Reduction
    10.4  AI for Better Services
    10.5  AI for Services Humans May not Do
    10.6  The “Nice to Have” AI
    10.7  Case Study 8: Promoting and Supervising AI
    10.8  Fact Check 8: Promoting and Supervising AI

11  Social Trends
    11.1  Overview
    11.2  Aging Population
    11.3  Polarization
    11.4  Income Inequality
    11.5  Urbanization
    11.6  Demographic Shifts
    11.7  Globalization
    11.8  Collaboration
    11.9  Personalization
          11.9.1  Political Correctness

12  Conclusions
    12.1  Overview
    12.2  It is not Only About Technology
    12.3  Now What
          12.3.1  I’m a Student
          12.3.2  I’m a Content Creator
          12.3.3  I’m a Professional
          12.3.4  I’m an Executive
          12.3.5  I’m an Investor
          12.3.6  I’m a Consumer

Closing Remarks

Notes on the Images

Bibliography
List of Figures
Fig. 2.1   The hype. Source Wikimedia Commons; Description: A huge crowd watches from the streets as a hot-air balloon; Source Wellcome Collection gallery; Author: unknown; Date: unknown; licensed under the terms of the Creative Commons Attribution only 4.0
Fig. 2.2   NLP components. Source Author’s elaboration
Fig. 2.3   NLP market. Source Author’s elaboration on FortuneBusinessInsights, May 2023 data
Fig. 2.4   NLP market distribution. Source Author’s elaboration on Statista Market Insights, August 2023 data
Fig. 2.5   Brain versus artificial. Source Author’s elaboration on EDUCBA 2023 data
Fig. 3.1   History of AI. Source Individual pictures: Wikimedia Commons; Description: Blaise Pascal (1623–1672); Source Own work; Author: unknown; Date: circa 1690; licensed under the terms of the Creative Commons Attribution-Share Alike 3.0 Unported license—Wikimedia Commons; Description: Alan Turing (1912–1954) at Princeton University in 1936; Source https://i0.wp.com/universityarchives.princeton.edu/wp-content/uploads/sites/41/2014/11/Turing_Card_1.jpg?ssl=1; Author: Anonymous; Date: 1936; Public domain—Wikimedia Commons; Description: An early prototype of Watson; Source Own work; Author: Clockready; Date: 21 July 2011; licensed under the terms of the Creative Commons Attribution-Share Alike 3.0 Unported license—Wikimedia Commons; Description: Siri iPad; Source: Own work; Author: eatthepieface; Date: 25 October 2021; licensed under the Creative Commons Attribution-Share Alike 4.0 International license
Fig. 3.2   Cost of computing. Source Author’s elaboration on Deloitte University Press data
Fig. 3.3   Growth of data. Source Author’s elaboration on IDC 2022 data
Fig. 3.4   Brain and mind. Source Wikimedia Commons; Description: Robert Fludd, Utriusque cosmi maioris scilicet et minoris […] historia, tomus II (1619), tractatus I, sectio I, liber X, De triplici animae in corpore visione; Source Utriusque cosmi maioris scilicet et minoris […] historia, tomus II (1619), tractatus I, sectio I, liber X, De triplici animae in corpore visione; Author: Robert Fludd; Date: 1619; Public domain because it was published in the US before 1928
Fig. 4.1   Science and technology and society. Source Wikimedia Commons; Description: The School of Athens; Source Own work; Author: Raphael (Photographer The Yorck Project); Date: 1509–1510; public domain
Fig. 4.2   Role of technology in society. Source Wikimedia Commons; Description: Long Waves of Social Evolution; Source Own work; Author: Myworkforwiki; Date: 24 May 2016; licensed under the Creative Commons Attribution-Share Alike 4.0 International license
Fig. 5.1   Hype cycles. Source Wikimedia Commons; Description: Here is a slide I made to illustrate the Gartner Hype Cycle; Source Own work; Author: Olga Tarkovskiy; Date: 3 August 2013; licensed under the terms of the Creative Commons Attribution-Share Alike 3.0 Unported license
Fig. 5.2   AI in Smart City. Source Author’s elaboration on data from Herath (2022), Adoption of artificial intelligence in smart cities
Fig. 8.1   AI publications in healthcare. Source Author’s elaboration on PubMed data
Fig. 8.2   Levels of driving automation. Source Wikimedia Commons; Description: A table summarizing SAE’s levels of driving automation for on-road vehicles; Source http://cyberlaw.stanford.edu/blog/2013/12/sae-levels-driving-automation; Author: Bryant Walker Smith; Date: 8 December 2013; licensed under the terms of the Creative Commons Attribution 3.0 Unported license
Fig. 8.3   Smart FinTech. Source Author’s elaboration on Cao (2021), AI in Finance data
Fig. 8.4   AI in finance. Source Author’s elaboration on Cao (2021), AI in Finance data
Fig. 8.5   Technology in education. Source Author’s elaboration on Granić (2022), Educational Technology Adoption data
Fig. 8.6   AI relevance in education. Source Author’s elaboration on U.S. Department of Education (2023) data
Fig. 9.1   Papers over time. Source Author’s elaboration on data from Park (2023), Papers and patents are becoming less disruptive over time
Fig. 9.2   Papers relevance over time. Source Author’s elaboration on data from Park (2023), Papers and patents are becoming less disruptive over time
Fig. 10.1  Hierarchy of needs. Source Wikimedia Commons; Description: Maslow’s hierarchy of needs; Source: https://en.wikiversity.org/wiki/File:Maslows_hierarchy.png; Author: User: Tigeralee; Date: 23 October 2015; licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license
1 Introduction

1.1 Overview of the Book
This book delves into the societal implications of artificial intelligence (AI) and machine learning (ML), providing a general, multidisciplinary examination of the challenges and opportunities presented by these technologies. It covers a range of topics, including job displacement and economic inequality, privacy and security, bias and discrimination, and changes to the nature of work.

The book begins with an overview of AI and ML and a brief history of these technologies. It then explores the impact of AI and ML on society today, examining both the positive and negative effects, and covers the impact of AI and ML on specific industries such as healthcare, finance, transportation, and education.

The book also explores the governance and regulation issues surrounding AI and ML. It examines the legal frameworks, ethical considerations, and governance principles that must be in place to ensure that AI and ML are developed and used responsibly and ethically. Additionally, it discusses the social implications of AI and ML, examining the impacts on society, autonomy and accountability, and human creativity and decision-making. The book concludes with an overview of the current trends and future perspectives of AI and ML, and some indication of what we could do with this technology based on our goals.

Whether you are a student, researcher, professional, or simply curious about the impacts of AI and ML, this book provides insights, real-world examples, and discussions to help navigate the complex landscape of these rapidly evolving technologies. It can serve as a guide to understanding the challenges, opportunities, and implications of AI and ML in our society, providing elements for informed decision-making and responsible adoption of these transformative technologies.
1.2 Importance and Timeliness of the Topic
This book is a resource for anyone interested in understanding the societal implications of AI and ML, providing a general, multidisciplinary examination of the challenges and opportunities presented by these technologies. It is especially relevant for policymakers, industry leaders, academics, and researchers working to understand the societal implications of AI and ML and to develop strategies for their responsible and ethical development and use. The book is also relevant because it addresses the current and future implications of AI and ML for various industries, which can be valuable for professionals and decision-makers in those fields. Examining the governance and regulation of AI and ML, including legal frameworks, ethical considerations, and governance principles, is also crucial in the current context, where the use of AI and ML is increasing in various sectors and in society in general.

In addition, AI and ML are becoming increasingly important as these technologies become more prevalent in our daily lives and are expected to play an even more significant role in the future. This book provides insights into the potential impact of AI and ML on various aspects of society, making it timely and relevant for understanding and addressing these issues.

A lack of accurate information on a disruptive technology such as AI/ML can leave people behind. Getting the knowledge needed to leverage those technologies is crucial for individuals and organizations. Proper information empowers individuals to understand the capabilities and limitations of disruptive technologies. It helps them make informed decisions, whether exploring new career paths, embracing new business models, or leveraging these technologies for personal and professional growth.

As I was finalizing this book, actors and screenwriters in Hollywood went on strike, with the use of AI being one of the issues.
A tentative agreement was reached, but the actual terms have not been released yet. Some of the most publicized current “AI” systems are based on Machine Learning, using massive amounts of data to provide answers and to generate content based on combinations of the data they have. Feeding such a system with all the scripts of movies, TV series, and eventually books could make it able to generate “reasonable” content. The system could not create original content, but it could stitch pieces together, assembling an original product with non-original parts. How much would this reuse of parts infringe on the rights of the screenwriters? There are no legal cases to set precedent yet. Would the contribution of the screenwriters be as valuable as it is today in this scenario? Someone would be required to review the results—at least at the beginning—but their contribution would be highly reduced compared to writing full stories from scratch. Someone could argue that there is a lot of content out there that is far from original, and we—the recipients of the content—may not be able to tell the difference between content generated by a human and content generated by a bot. The agreement between the SAG-AFTRA union—representing
the artists—and the studios is based on the permission the artists must give the studios to use their content, as well as on guardrails to prevent AIs from using content without permission. Assuming it is even possible to place those guardrails—what if the content, or part of it, comes from reprints or, in general, from non-primary sources?—how would the compensation be calculated? The AI takes bits and pieces of content from thousands of sources and reaggregates them. There is no look-up table to trace a micro-component back to the original work it came from. The whole process will be a major boost to lawyers’ business. It is possible to create a tool to measure the semantic similarity between newly generated content and the original, and this could be one of the few ways to ensure a fair process. But the tool would need to be certified. This situation is a common trait of any new technology: the more repetitive and “standard” jobs are replaced by automatic solutions based on new technologies. Technological advancements have led to the automation of repetitive tasks, enabling businesses and society to increase productivity and achieve higher levels of efficiency. Think of steam power in the Industrial Revolution of the eighteenth and nineteenth centuries, Henry Ford’s implementation of the assembly line in the early twentieth century, or ATMs in the 1960s and 1970s. We still have people in manufacturing and banking, but their jobs have changed, moving away from the components of the job with less added value. This will likely happen again with AI/Machine Learning. But every change comes with some disruption, generating anxiety and social unrest. The more we know about these upcoming technologies, the better prepared we will be. This book is a step in that direction.
It aims to provide readers with a comprehensive and accessible resource that demystifies these technologies, explores their societal impacts, and gives individuals the knowledge and understanding they need to navigate the changing technological landscape of AI and ML.
1.3
Structure of the Book
The book covers various topics and provides a multidisciplinary approach, examining the challenges and opportunities presented by AI and ML from multiple perspectives. The book also offers an examination of governance and regulation issues, ethical considerations, and recommendations for further research and action. With this structure, the book examines the societal implications of AI and ML, providing insights for policymakers, industry leaders, academics and researchers. Writing a non-technical book on a technical topic like AI/ML is always challenging. On one hand, the author wants to keep the scientific rigor required by the subject. On the other hand, going into scientific details may discourage the target non-technical readers. Many technical books aimed at non-experts end up gathering dust on shelves. They’re either too simplistic for those with technical know-how or too dense for the general reader.
Coming from an academic background, most of my writing is for scholarly journals, so switching gears for this project was challenging but crucial. Part of the book is an introduction to AI/Machine Learning: what it is, what it could do, what it cannot do, and my view of the current status and trends. I then drill down on the impact of this technology on specific areas. I first provide an analysis of the possible applications to the particular domain. I describe what the specific problems are and how AI can help. For most of them, I create a fictional case study, where I imagine the particular use of AI by fictional individuals or entities. In the closing sections, I put on my researcher hat and analyze these case studies to determine how feasible they could be in the near future. The conclusions also contain insights into relevant ongoing research. In detail, by chapter:

1. Introduction: The book starts with an introduction that provides an overview, the importance and timeliness of the topic, and the structure of the book.
2. My perspective: In this chapter I provide my bird’s-eye perspective on the topic. Is this a real revolution? How intelligent are the current systems? Those are some of the questions I address here.
3. AI and Machine Learning: The third chapter provides a clear introduction to the topic by defining AI and providing a brief history and the mechanics of AI.
4. Science, Technology and Society: The fourth chapter provides a broader perspective on the relationship between science, technology and society and the current and future impact of AI and ML.
5. AI and society today: friend or foe: The fifth chapter examines the current impact of AI on society and the potential challenges and opportunities presented by AI and ML.
6. The impact on people: The sixth chapter focuses on the impact of AI and ML on employment and wages, the impact on different groups of workers, the challenges of retraining and reskilling, and income inequality.
7. Regulating AI: The seventh chapter examines the regulation frameworks for AI and ML, the role of public and private actors in shaping the development and use of AI and ML, international governance and regulation of AI and ML, self-regulation and industry standards, and ethical considerations and governance principles for AI and ML.
8. Impact of AI and ML on specific industries: This chapter examines the impact of AI and ML on selected industries, highlighting how these technologies are transforming these fields and the implications for workers and consumers. It focuses primarily on Healthcare, Finance, Transportation and Education, providing insights into the challenges and opportunities presented by AI and ML in these industries and highlighting the need for responsible and ethical governance in these fields.
9. The horizon for AI in our society: This chapter is focused on some of the emerging technologies in AI/ML. It also contains a fictional description of the use of those technologies in the life of a fictional character.
10. Application trends: This chapter provides a broader perspective on the societal implications of AI and ML, examining the impact on human society, autonomy and accountability, and human creativity and decision-making. It also highlights the importance of responsible and ethical governance of AI and ML, and provides insights into the ethical considerations that must be weighed when developing and using these technologies.
11. Social trends: This chapter provides an overview of the current trends and future perspectives of AI and ML from the social perspective. It helps readers understand the potential impact of these technologies on society and the challenges and opportunities presented by AI and ML. It focuses on the technology, its applications, and how society could change because of them.
12. Conclusion: This last chapter summarizes the book’s main findings and implications and provides recommendations for further research and action. It highlights the importance of responsible and ethical governance of AI and ML, and provides insights into future research directions and open questions that need to be addressed to understand the societal implications of AI and ML.
2
My Perspective
I started working in AI in the mid-80s, with an academic background in math. I then worked in technology-related industries for 20+ years before returning to academia to get my Ph.D. and start a second career as a researcher and educator. Over the last four years, I have taught data science, AI/ML and Natural Language Processing to more than 500 students and managed, as PI, about $6M in projects, all in AI/ML/Natural Language Processing. My focus is on understanding the impact of these technologies on society and developing strategies for practical but responsible and ethical development and use of these technologies. A few elements of my perspective on AI in society will define my view of the topic throughout this book.
2.1
The AI Hype and the Market
There is increasing interest and curiosity around “Artificial Intelligence”. After ChatGPT was announced in November 2022, there has been a further acceleration. Most general news and articles have blurred the lines between AI, Natural Language Processing (NLP) and Machine Learning (ML). According to Google Trends, there have been over 10 million Google searches for ChatGPT since it was released in November 2022. Artificial Intelligence is an umbrella term for systems whose behavior could be compared to humans’ in a specific domain. When the domain is “everything,” we talk about Artificial General Intelligence (AGI), meaning a system with fully human
behavior (and this may lead to a lot of other discussions). Machine Learning is a subset of AI that leverages data to build models, using algorithms that allow the system to perform tasks without being explicitly programmed for them. These models correlate the task with possible behaviors and provide the behavior that best fits the request (Fig. 2.1). NLP is a subset of AI focused on the interaction between computers and human language. It requires algorithms and models that let the computer process language and provide proper feedback. NLP can be done with or without Machine Learning. NLP can focus on extracting information from text or be part of a more extensive system, where language is the interface between the system and the human using it (Fig. 2.2). ChatGPT is a “know-it-all” system: the user poses a question, the system extracts the elements for a search from the query, matches those elements against the data it has, and provides the best-matching data in a conversational way. The potential value of a system using NLP is very high: humans can get the processing they need, the action required, by “talking” to the system.
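To make the “learning from data instead of explicit programming” idea concrete, here is a minimal sketch (not from the book): a toy nearest-neighbour classifier whose behavior comes entirely from example data rather than hand-written rules. The feature, the labels and the data points are invented for illustration.

```python
# Machine Learning in miniature: behavior is inferred from examples,
# not explicitly programmed. A toy 1-nearest-neighbour classifier.

def nearest_neighbor(examples, query):
    """Return the label of the training example closest to the query."""
    return min(examples, key=lambda ex: abs(ex[0] - query))[1]

# Invented (feature, label) pairs -- e.g. message length vs. "short"/"long".
training = [(3, "short"), (5, "short"), (40, "long"), (60, "long")]

print(nearest_neighbor(training, 4))   # -> short
print(nearest_neighbor(training, 55))  # -> long
```

Change the training data and the same program behaves differently, which is the point the text makes: the “program” is in the data.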
Fig. 2.1 The hype. Source: Wikimedia Commons; Description: A huge crowd watches from the streets as a hot-air balloon; Source: Wellcome Collection gallery; Author: unknown; Date: unknown; licensed under the terms of the Creative Commons Attribution 4.0 license
[Diagram: NLP at the intersection of Computer Science, AI and ML, with human language as its subject]
Fig. 2.2 NLP components. Source Author’s elaboration
The market for NLP had an estimated value of about $20 billion in 2022 and a forecasted value of more than $112 billion in 2030 (source: FortuneBusinessInsights, May 2023). As the chart shows, the forecasts include a huge jump from 2023 to 2030. Most of it is somehow related to the acceleration NLP has been getting after OpenAI’s introduction of ChatGPT. It has been a point of discontinuity. Many of us see it more as a sophisticated statistical tool, but nevertheless, it is giving a major spin to the sector (Fig. 2.3).
[Chart values, USD billion: 19.68 (2022), 24.1 (2023), 112.28 (2030)]
Fig. 2.3 NLP market. Source Author’s elaboration on FortuneBusinessInsights, May 2023 data
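For readers who want to sanity-check the chart, the compound annual growth rate implied by the 2023–2030 forecast can be computed in a couple of lines. This is a back-of-envelope sketch using the FortuneBusinessInsights figures quoted above.

```python
# Implied compound annual growth rate (CAGR) behind the NLP market forecast.

def cagr(start, end, years):
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

# Figures quoted in the text (USD billion): $24.1B in 2023 -> $112.28B in 2030.
rate = cagr(24.1, 112.28, 2030 - 2023)
print(f"Implied CAGR 2023-2030: {rate:.1%}")  # about 24.6% per year
```

A sustained ~25% yearly growth rate is what the “huge jump” in the chart amounts to.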
[Chart: NLP worldwide market share (%) by region, 2020–2030: North America, Europe, China, Asia Pacific, South America, Middle East, Russia, Africa]
Fig. 2.4 NLP market distribution. Source Author’s elaboration on Statista Market Insights, August 2023 data
The sector still sees North America as the major player, followed by Europe and China, as in the chart (source: Statista Market Insights, August 2023). With my team, I analyzed official documents released by the three governments about their approaches to NLP and AI in general. Their focus reflects their overall approach to business: the USA is focused on innovation and research; Europe on standards, rules and regulation; China on leading the technology, applying it to industry, and positioning itself as a leader in the global economy by leveraging AI/NLP (Fig. 2.4).
2.2
How “Intelligent” is What We Call AI Today?
Source Base: Wikimedia Commons; Description: Replica of the character Maria from Metropolis; Source Maria from the film Metropolis, on display at the Robot Hall of Fame; Author: Jiuguang Wang from Pittsburgh, Pennsylvania, United States; Date: Taken on 8 March 2011, 15:07; licensed under the terms of the cc-by-2.0 Edited using Picsart
First, what we call “AI” has little to do with what should be considered “intelligence”. One of the most advanced generally available systems we have at the time of this book is ChatGPT, which is based on a system called GPT (Generative Pre-trained Transformer), currently at version 4 (“GPT-4”). This is an excellent piece of technology that I use in my research, teaching and writing as a sidekick, bouncing ideas around and getting a “blob” on a topic. To me, this is a plain-English, compiled version of a search
engine, with some attention to the context of my query and the limitation of the data it uses (currently up to 2021). Instead of checking individual websites—as I would do with Google—I get a compilation of the actual responses from—likely—the same websites. I lose the references, though, making it hard to use for academic papers. It is a statistical system sitting on a massive pile of data, matching patterns in the data with my query. The result may be fantastic, but it is still a pattern discovery and matching system. The common sense emerges from the data; there is no “creativity”; there is a bias due to the data it uses; “learning” means using more data. There is also the bias induced by the people—real human beings—labeling and “pruning” the data. The pruning eliminates either redundant or inappropriate content, but this is intrinsically a bias-prone process. Those human pruners and labelers—paid less than $2/h—have some of the worst jobs in the world, reading and eliminating toxic content describing situations in graphic detail. But this is no different from what leading social media companies—like Meta (Facebook)—are doing. Large Language Models (LLMs)—like the one behind ChatGPT—are brute-force systems inefficiently crunching numbers. The energy consumption of training a large language model can be equivalent to that of several households over several months. The energy consumption for training ChatGPT, or models like the one used for it, has not been released, but it would have required 1,024 GPUs running for more than 30 days. The cost would have been $4.6 million, with an estimated energy usage of 936 MWh. This energy is enough to power 30,632 American households for one day, or 97,396 average European households for one day (thanks to Kashif Raza, in my LinkedIn network!). Let’s compare this to our brain. The brain consumes about 15 watts to run, about 20% of the body’s total consumption (ThePhysicsFactbook, n.d.). According to the U.S. Energy Information Administration, the average energy consumption per person per year in the United States is around 11,000 kilowatt-hours (kWh) (Administration, 2022). Given an average lifespan of 80 years, this gives an estimate of 880 MWh of energy consumption per lifetime. At about 20% of the body’s total consumption, the brain uses 176 MWh. This is less than 20% of the energy required just for training an LLM, and the energy for training an LLM is far more than the energy needed for non-training operations. The brain uses less energy, but how can we compare the two? The human brain has an estimated 80–100B neurons with an average of 7,000 synaptic connections each, a total of 100–150 trillion connections. Our brain performs many functions, often associated with specific areas of the organ. Only some areas seem to be primarily dedicated to language modeling. The number of neurons in those areas is estimated in the range of 400–700M. It is unlikely that there is no synergy between the different areas of the brain, all being part of a broader knowledge management system that we do not fully understand yet.
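The energy back-of-envelope above can be reproduced in a few lines. All figures are the assumptions quoted in the text (the EIA per-person consumption, an 80-year lifespan, a 20% brain share of the body’s energy budget, and a 936 MWh training run), not measurements.

```python
# Back-of-envelope comparison of LLM training energy vs. a human brain,
# using the assumptions quoted in the text.

US_KWH_PER_PERSON_PER_YEAR = 11_000  # EIA average per-person consumption
LIFESPAN_YEARS = 80
BRAIN_SHARE = 0.20                   # brain ~20% of the body's energy budget
TRAINING_MWH = 936                   # estimated GPT-scale training run

lifetime_mwh = US_KWH_PER_PERSON_PER_YEAR * LIFESPAN_YEARS / 1_000
brain_lifetime_mwh = lifetime_mwh * BRAIN_SHARE

print(f"Lifetime energy per person: {lifetime_mwh:.0f} MWh")   # 880 MWh
print(f"Lifetime brain energy: {brain_lifetime_mwh:.0f} MWh")  # 176 MWh
print(f"Brain lifetime vs one training run: "
      f"{brain_lifetime_mwh / TRAINING_MWH:.0%}")              # 19%
```

In other words, a lifetime of brain operation costs less than a fifth of a single training run, which is the comparison the text draws.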
With GPT-4 just announced, let’s focus on the previous version. GPT-3 has 175B parameters. Models like GPT are based on what are called “Artificial Neural Networks” (ANNs). An Artificial Neural Network is a type of computer system inspired by the structure and function of the human brain. It comprises interconnected “neurons,” which process and transmit information through the network. “Neurons” are mathematical functions taking multiple weighted inputs from other “neurons” and generating input for other “neurons”. The parameters represent the weights of the connections between the neurons in the network. These parameters are learned during training by adjusting them to minimize the difference between the predicted and the actual output. Calculating the human brain’s “parameters” is not a straightforward operation. We have 80–100B neurons with a total of 60–100 trillion synaptic connections. If we use those connections as “parameters”, 60 trillion parameters would be about 300× more than GPT-3’s (175B). Assuming that about 10% of the human brain’s capacity is needed for natural language tasks, the human brain has about 30× more parameters than GPT-3. Not a gap that cannot be closed by coming GPTs. But this is only part of the story. Our brain is not a set of separate, specialized and isolated areas. We use five senses—with their related specialized brain areas—to create knowledge. We then use the part dedicated to language to express that knowledge. We remember things by their visual appearance, smell, sound, taste and touch. All those inputs become “knowledge” that we communicate, for example, via language. We also express knowledge in different ways, like visual artists using images, musicians with sound, chefs with taste or perfumers with smell. Bottom line: our brain is a more complex and efficient system than current LLMs. Why so? What is missing in current AI?
What is missing is an efficient and effective model to represent knowledge and its use. Again, what the current leading LLMs use is a statistics-on-steroids method based on brute force over a massive amount of data. Intelligence is knowing and thinking, and current LLMs are just matching our requests with existing, intrinsically biased data, but a lot of it, making them of great help in many areas. Just as in many other cases in the past, the name AI sells well, regardless of whether it actually serves the purpose of being “intelligent”. CNN mentioned the case of the announcement by BuzzFeed about their use of AI to create content, which sent their stock up by 150% (CNN, 2023). I remember the days of the dot-com bubble in the late 1990s. Wikipedia cites other technology-inspired booms, like railroads in the 1840s, automobiles in the early twentieth century, radio in the 1920s, television in the 1940s, transistor electronics in the 1950s, computer time-sharing in the 1960s, and home computers and biotechnology in the 1980s. Does it make sense to call what we commonly call AI “AI”? For the rest of the book, I’m assuming the current level of AI/ML, also giving my view of what the next generation of more “intelligent” systems can likely do (Fig. 2.5).
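As a concrete illustration of the weighted-input “neurons” and trainable parameters described earlier, here is a minimal single-neuron sketch that learns its two parameters (one weight and a bias) by gradient descent. The tiny dataset, the target function and the learning rate are invented for the demo; real LLMs do the same kind of adjustment with billions of parameters.

```python
# A minimal "artificial neuron": a weighted sum of inputs plus a bias,
# whose parameters are adjusted to reduce prediction error.

def neuron(weights, bias, inputs):
    # The "mathematical function" the text describes (kept linear for the demo).
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# Invented training data where the target happens to be y = 2*x + 1.
data = [([0.0], 1.0), ([1.0], 3.0), ([2.0], 5.0)]

w, b = [0.0], 0.0   # the two "parameters" to be learned
lr = 0.1            # learning rate, chosen for the demo

for _ in range(500):
    for x, y in data:
        err = neuron(w, b, x) - y
        # Gradient descent on squared error: nudge each parameter
        # against the direction of the error.
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

print(round(w[0], 2), round(b, 2))  # converges toward 2.0 and 1.0
```

After training, the weight and bias have been “learned” from examples, which is exactly what the text means by parameters being adjusted to minimize the difference between predicted and actual output.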
Brain versus artificial system, as summarized in Fig. 2.5:

Weight: ~3 lbs (brain) versus ~150 tons (machine)
Size: ~80 cubic inches versus over 4,000 sq. ft.
Process speed: up to 1M trillion operations/second versus up to 95K trillion operations/second
Energy usage: 20 watts versus 10M watts
Fig. 2.5 Brain versus artificial. Source Author’s elaboration on EDUCBA 2023 data
2.3
Is AI/ML a “Revolution”?
Second is a realistic evaluation of the impact of AI/ML on society: is this a “revolution”? There is no doubt that systems like ChatGPT or similar LLMs, semi-autonomous vehicles, advanced robotics, predictive systems, and advanced gaming—to name only some—are impacting our daily activities, but is all of this a “revolution”? As mentioned before, society has had several technology-driven or industrial “revolutions”. The first in the modern era is the (first) Industrial Revolution in the eighteenth and nineteenth centuries, then the second industrial revolution (the “technology revolution”)—generally associated with the period 1870 to 1914—and then the third, the Digital Revolution, which started in the late twentieth century and leads to now. The first Industrial Revolution significantly changed how goods were produced and distributed. It marked the transition from manual labor to machine-based manufacturing and led to the development of new technologies, such as the steam engine and the power loom. This revolution induced significant changes in how people lived and worked, including the growth of factories, the rise of industrial cities—and in general, increased urbanization—and the emergence of a new working class. It also led to significant economic and social changes, including increased productivity, greater wealth, and improved living standards for many people. The Second Industrial Revolution, also known as the Technological Revolution, was a period of rapid industrial development from the late nineteenth century to the early twentieth century. It was marked by the introduction of new technologies and innovations in areas such as steel production, transportation and communication. One of the key innovations of the Second Industrial Revolution was the widespread use of electricity. This allowed for the development of new machines and processes, such as electric motors and power generators, which significantly increased factory productivity and efficiency.
Additionally, the widespread use of electricity enabled the development
of new forms of transportation, such as electric trolleys and trains, which significantly improved transportation and communication. The Second Industrial Revolution saw the introduction of new materials and techniques in manufacturing, allowing the mass production of goods at a much lower cost and making them more accessible to the general public. The Second Industrial Revolution also significantly impacted the economy, leading to the growth of large-scale industries and new jobs in manufacturing and transportation. It also led to the displacement of workers and the rise of industrial slums in cities. This revolution saw the emergence of new technologies such as the internal combustion engine, the telephone, and the electric power grid. This led to a further increase in productivity and a shift towards mass production and consumer goods. The development of new transportation and communication technologies also made it easier for goods and people to move around, further facilitating trade and commerce. This period also saw the rise of new business models, such as the corporation, which led to the concentration of economic power in the hands of a few large companies. It also led to increased competition between countries and the emergence of new economic powers, such as the United States and Germany. These economic and social changes contributed to the rise of nationalism and imperialistic ideologies, which ultimately led to the outbreak of World War I. The third industrial revolution, or digital revolution (the “information age”), refers to the rapid technological advancements that have taken place in the field of electronics and computer technology. This revolution has led to the development of computers, the internet, and digital communication technologies, fundamentally changing how people live, work, and communicate. The digital revolution has led to new software, internet, and e-commerce industries.
It has enabled the automation of many tasks and the digitization of information, and it has led to new forms of communication and entertainment, such as social media and streaming platforms. The three revolutions had a profound effect on society, with each one leading to significant changes in the way we live, work, and interact with one another. There have also been negative consequences, such as increased economic inequality and environmental degradation, that have resulted from these changes. Is this AI/ML revolution—often called the “4th Industrial Revolution”—a real revolution compared to the other game changers?
2.4
Information Diffusion Growth Rate
We live in a hyperconnected world. I can get news from almost every corner of the world seconds after it happens. Every whisper travels thousands of miles. The song “Hello” by Adele—the British singer—sold 1.1 million copies in the U.S. in its first week in 2015 (Guardian, Adele’s new single breaks record for first week download sales, 2015), setting a new record for the largest one-week sales in digital history. It was also the largest one-week sales by a female artist in the U.S. It had a record-smashing 27 million Vevo views in its first 24 hours and set the new single-day U.S. streaming record, with 2.3 million streams. Numbers like those have occurred only a few times in the history of music, rarely or never at this speed. It took five days for ChatGPT to reach the mark of one million users (Yahoo, 2022). It took Spotify five months to get the same number, Netflix 3.5 years, television in the US 15 years, the telephone 25 years. They didn’t have the same impact on our society, though. Hyperconnection is potentially boosting any message grabbing people’s attention. “Baby Shark Dance” by Pinkfong—one of the most popular videos on YouTube—has over 11 billion views. It is a children’s song and dance video featuring an animated shark family. A ragdoll cat named Puff from New York City has over 7 billion views on his YouTube channel.
2.5
Sensationalism
Sensationalism is news reporting or content that emphasizes a story’s most unusual, shocking or attention-grabbing elements, often at the expense of accuracy or context. Sensationalism has been around for centuries, but it has become more prevalent in recent years due to the increased competition for audience attention and the rise of social media, which can amplify sensationalistic stories and make them go viral in our hyperconnected world. It sells well. It taps into basic human emotions and instincts. People are naturally drawn to information relevant to their survival and well-being, and sensationalism often presents information in a way that makes it seem more relevant and urgent. It leverages people’s natural curiosity and desire for novelty and excitement. Sensational headlines and stories are often designed to evoke strong emotions, such as fear, anger, or awe, which can be more memorable and impactful than mundane information. Sensationalism can also create a feeling of social validation, as people may need to keep up with the latest news and trends to be part of the conversation.
It generates more clicks, views, and engagement on social media and other digital platforms, which can increase visibility, boost traffic, and drive revenue. It is used by media outlets to boost their ratings and attract advertisers. Sensationalism can be agnostic to the content, meaning it can be applied to a wide range of topics and genres; it can be used for any content as long as it grabs the public’s attention and generates engagement. That content may or may not have real merit; it may have a proper foundation that then gets over-inflated. The question is: is the hype on AI/ML part of sensationalism boosted by the high rate of information diffusion we have now?
3
AI and Machine Learning
3.1
What is AI
Source Base: Wikimedia Commons; Description: Sculpture in the Musée Rodin, Paris, France; Source: « Le Penseur » vu sur son côté droit; Author: couscouschocolat from Issy-Les-Moulineaux, France; Date: 29 October 2011, 12:51; licensed under the terms of the cc-by-2.0 Edited using Picsart
AI stands for “artificial intelligence”, which means we should have a way, or a model, to represent “natural intelligence” and then apply methods, algorithms and computer programs to make that model usable.
In Latin, “intellect” comes from “inter” and “legere”, roughly “to pick out among”, often glossed as linking things together. When we reason, we connect concepts and memories to address the issue of the moment. Intelligence is the ability to connect elements stored in our brains to serve a given purpose. You can have a lot of elements/memories in your brain but not be able to use them when needed. Vice versa, you can connect memories quickly and efficiently but not have many memories/experiences. It would be easy to see the two types represented by a wise older person and a smart youngster, or by an overthinker and a forgetful high-IQ individual. The theoretical discipline studying intelligence is called “Intelligence Theory”. It is a branch of psychology focused on understanding the nature of intelligence, how it arises, and how it can be measured. It is an interdisciplinary field with elements from various disciplines, such as cognitive psychology, cognitive science, neuroscience, and philosophy. The reality is that we cannot define and measure “natural intelligence”. One of the most widely used metrics in intelligence theory is the intelligence quotient (IQ) test. IQ is calculated by dividing a person’s mental age (as determined by an IQ test) by their chronological age and multiplying the result by 100. The reliability of IQ is a topic of debate among researchers and experts in the field. One of the main criticisms of IQ tests is that they may not accurately measure all aspects of intelligence. Intelligence is a complex and multi-faceted construct, and IQ tests typically only measure a narrow range of cognitive abilities, such as verbal and mathematical reasoning. They do not consider other important aspects, such as creativity, emotional intelligence, and practical intelligence. Also, IQ tests may not be culturally or socioeconomically neutral.
Some researchers argue that IQ tests are culturally biased, as they tend to favor people from specific cultural backgrounds and socioeconomic groups. This can result in a lack of fairness and accuracy in measuring the intelligence of people from different backgrounds. Another criticism is that IQ scores are not stable over time: they may change depending on the individual’s life experiences, opportunities and environment. Lord Kelvin in the nineteenth century (allegedly) said, “To measure is to know”, which implies that, being unable to measure intelligence, we do not know what it is. Bottom line: we do not “know” what intelligence is. Still, we label as “intelligent” those behaviors that most people consider “intelligent”, giving this word the meaning of “comparable with the behavior a human would have”.
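The ratio-IQ formula quoted earlier is simple enough to state as a one-line function. This is a sketch of the classic definition only; as the criticisms above suggest, modern tests actually use deviation scores rather than this ratio.

```python
# The classic "ratio IQ": mental age divided by chronological age, times 100.

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return mental_age / chronological_age * 100

# A hypothetical 10-year-old performing at the level of a typical 12-year-old:
print(ratio_iq(12, 10))  # 120.0
```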
3.2
A Brief History of Artificial Intelligence and Machine Learning
Why are we talking about AI now? Ideas of artificial intelligence have been around for hundreds of years, always about to be the next big thing.
Pretty much every culture had some say on artificial intelligence. In ancient times: Cadmus and the dragon’s teeth that turned into soldiers; Pygmalion and Galatea; Hephaestus and the mechanical servants; Yan Shi and his human-like robot for King Mu of Zhou; Al-Jazari and his automata. To trace back AI, we should look at early examples of automatons and mechanical devices that mimic human behavior. In Greek mythology, for example, the god Hephaestus was said to have created automatons to help him in his workshop. There were also automated devices in ancient Greek and Chinese cultures, such as the Antikythera mechanism and the Chinese South-Pointing Chariot. Many consider the beginning of modern AI to be the mechanical calculating machine by Blaise Pascal, 1642. The following figure provides a timeline of Artificial Intelligence, edited and placed in a storyline by the author (Fig. 3.1). Was the first programmable machine, designed in 1837 by Charles Babbage and programmed by Ada Lovelace, an “artificial intelligence”? Was the first automated booking system, by American Airlines in 1946, an example of AI? Some of the users at that time thought so: “The computer said …” … and it must be right…
Fig. 3.1 History of AI. Source Individual pictures: Wikimedia Commons; Description: Blaise Pascal (1623–1662); Source Own work; Author: unknown; Date: circa 1690; licensed under the terms of the Creative Commons Attribution-Share Alike 3.0 Unported license—Wikimedia Commons; Description: Alan Turing (1912–1954) at Princeton University in 1936; Source https://i0.wp.com/universityarchives.princeton.edu/wp-content/uploads/sites/41/2014/11/Turing_Card_1.jpg?ssl=1; Author: Anonymous; Date: 1936; Public domain—Wikimedia Commons; Description: An early prototype of Watson; Source Own work; Author: Clockready; Date: 21 July 2011; licensed under the terms of the Creative Commons Attribution-Share Alike 3.0 Unported license—Wikimedia Commons; Description: Siri iPad; Source: Own work; Author: eatthepieface; Date: 25 October 2021; licensed under the Creative Commons Attribution-Share Alike 4.0 International license
3 AI and Machine Learning
The modern era of AI is commonly set to begin in the 1950s with the Dartmouth Conference, which marked the birth of AI as a field of study. The Dartmouth Conference was held in the summer of 1956 at Dartmouth College in Hanover, New Hampshire. It was organized by a group of researchers, including John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, who were interested in exploring the possibility of creating machines that could mimic human intelligence. The researchers discussed various AI-related topics during the conference, such as natural language processing, problem-solving, and learning. They also proposed a research program to create “thinking” machines, which they believed could be achieved by coding rules and procedures for the machine to follow. This approach to AI, known as “good old-fashioned AI” (GOFAI), became the dominant paradigm for AI research for the next several decades. The Dartmouth Conference helped to establish AI as a distinct field of study, separate from other fields such as mathematics, engineering, and cognitive psychology. This helped attract resources and support for AI research, leading to the formation of AI research groups and departments at universities and research institutions. The conference’s proposed research program was based on the assumption that AI could be achieved by properly coding rules and procedures. This approach is generally called “symbolic reasoning” and is “top-down”: you need to set the rules of the game to make the game work. What if you do not have all the rules? What if you have so many rules you cannot be sure there is consistency across them? Symbolic reasoning, also known as rule-based AI, involves representing knowledge in a symbolic form and using logical reasoning to make inferences and decisions.
This approach uses a set of rules and procedures that are explicitly programmed into the system, and the system can only perform the tasks it has been specifically programmed to do. The system’s performance is based on the quality of the rules and knowledge encoded in it. Symbolic reasoning is useful for tasks that require a high degree of logical reasoning, such as rule-based systems (“expert systems”), some natural language processing, and planning. A system based on symbolic reasoning comprises two main elements: the symbolic representation of the knowledge (for example, a set of rules) and a “smart” crawler applying the proper elements of the representations to the specific request. The core of the system is the symbolic representation. In the late 1980s and early 1990s, a new approach to AI called “machine learning” emerged, which uses statistical techniques to enable computers to learn from data rather than relying on explicit rules and procedures. Machine learning is an approach to AI that allows systems to learn from data rather than relying on explicit rules and procedures. Machine learning uses statistical techniques to enable computers to learn from data to improve their performance over time. The system’s performance improves as it is exposed to more data. The quality of a Machine Learning system is directly correlated to the
quantity and representativeness (“quality”) of the data used to train the system. Machine learning is helpful for tasks that involve recognizing patterns and making predictions, such as image recognition, speech recognition, and natural language processing. A system based on Machine Learning comprises two main elements: the data representing the knowledge (for example, large sets of text) and algorithms to detect patterns in the data matching as closely as possible to the specific request. The core of the system is the data. Smart algorithms and insufficient or inaccurate data make dumb systems. The learning method used by Machine Learning is “bottom-up”, being based on extracting behaviors from data and examples. But what if you do not have enough data to extract appropriate behaviors? What if the data provides a biased or partial vision of the problem? What if the data is not representative of the knowledge of the domain you are focused on? In short, symbolic reasoning is based on explicitly programmed rules, while machine learning is based on “learning” from data. Symbolic reasoning is helpful for tasks that require a high degree of logical reasoning, while machine learning is helpful for tasks that involve recognizing patterns and making predictions. All of this has roots in philosophy, particularly in the branch of philosophy called epistemology, or the theory of knowledge. Two of the leading schools of thought in epistemology are Rationalism and Empiricism. Rationalism “regards reason as the chief source and test of knowledge” (Britannica), meaning reality has an intrinsically logical structure. René Descartes, Baruch Spinoza and Gottfried Leibniz were some of the rationalists. Empiricism is based on the concept that knowledge comes only or primarily from sensory experience. Aristotle, John Locke and David Hume were some of the empiricists. 
Fast forwarding some centuries, in the context of AI, symbolic reasoning is a modern form of rationalism, and machine learning a form of empiricism. Rationalism is a philosophical perspective that emphasizes the role of reason and logic in understanding the world. In the context of AI, symbolic reasoning can be considered a form of rationalism because it uses logical reasoning to make inferences and decisions. The system’s performance is based on the quality of the rules and knowledge encoded in it, and it relies on the explicit programming of rules and procedures. On the other hand, empiricism is a philosophical perspective that emphasizes the role of observation and experience in understanding the world. In AI, machine learning can be considered a form of empiricism because it allows systems to learn from data. The system’s performance improves as it is exposed to more data and relies on statistical data analysis to make predictions and improve its performance. The distinction between rationalism and empiricism is a complex topic that philosophers have debated for centuries. In the context of AI, the distinction between symbolic reasoning and machine learning is also not absolute, and many AI techniques combine elements of both approaches. This intersection seems to be the most promising area of AI:
systems based on cognitive models (the “symbolic” approach) leveraging data (the “machine learning” approach). We’ll come back to this. The takeaway from this section: AI is not one of the latest innovations in science. Some forms of AI date back a few centuries, even if it was elevated to a self-standing discipline in the mid-fifties. The roots of AI are in the representation of knowledge, a topic philosophers have debated for centuries.
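The contrast between the two paradigms can be sketched in a few lines of toy code. Everything here is invented for illustration (the rule phrases, the data, the threshold search): the “symbolic” filter behaves exactly as well as its hand-written rules, while the “learner” extracts its behavior from labeled examples.

```python
# Top-down (symbolic): behavior comes from explicitly coded rules.
def symbolic_spam_filter(message: str) -> bool:
    rules = ["free money", "click here", "winner"]   # hand-written knowledge
    return any(phrase in message.lower() for phrase in rules)

# Bottom-up (machine learning): behavior is extracted from labeled examples.
def learn_threshold(examples):
    """Find the suspicious-word-count threshold that best separates the examples.

    examples: list of (suspicious_word_count, is_spam) pairs.
    """
    best_t, best_correct = 0, -1
    for t in range(10):
        correct = sum((count >= t) == is_spam for count, is_spam in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

data = [(0, False), (1, False), (3, True), (5, True)]
print(symbolic_spam_filter("Click HERE for FREE money"))  # True
print(learn_threshold(data))                              # 2
```

The symbolic filter fails on any spam phrasing its rules do not anticipate; the learned threshold fails when the training data is biased or unrepresentative, which is exactly the trade-off described above.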
3.3
The Mechanic of AI—Cognition and Intelligence
To have “intelligent” behavior, a system should be able to create mental elements, as a result of external stimuli, and use them in a “proper” way to address the issue of the moment. If you have no elements to “legere”, you cannot “inter-legere”, that is, behave intelligently. The first task is in the realm of cognition; the second, of intelligence. Cognition can be considered the ability to create mental elements, and intelligence the ability to use them. Cognition refers to the mental processes and activities involved in acquiring, processing, storing, and using information, such as perception, attention, memory, problem-solving, decision-making, and language. These processes allow individuals to create mental elements like concepts, ideas, and world representations. Intelligence, on the other hand, can be considered the ability to use these mental elements effectively and adaptively. It refers to a general ability to learn, reason, and solve problems, and it allows individuals to navigate their environment, make decisions, and learn from experience. The problem is that neither of these two components is deterministic, where determinism is the principle that everything that happens results from prior causes. In the case of cognition, many cognitive processes are influenced by prior causes such as genetics, environment, and experiences. However, it is also acknowledged that some cognitive processes allow for flexibility, adaptability and free will, which are not entirely determined by prior causes. The context and the individual’s state of mind, mood, and emotions also play a role in cognition, and they can influence cognitive processes in a non-deterministic way. Intelligence follows much the same pattern as cognition: some prior causes could influence it.
Still, many of its components, such as problem-solving, decision-making, and creativity, involve more flexibility and adaptability and allow for more freedom of choice. Just as with cognition, an individual’s state of mind, mood, and emotions also play a role in intelligence.
3.4
AI as Science and Technology + Data
When I started working in AI in the mid-eighties, we were already using most of the methods and algorithms that we use now. The underlying math was the same: linear algebra and graph theory, for example. What do we have that is “more” today?
3.4.1
The Science
Algorithms are the root science of AI. With minimal exceptions, all the new algorithms are around more “powerful” versions of what is called “artificial neural networks” (ANN). Artificial Neural Networks are a type of machine learning algorithm that is modeled after the structure and function of the human brain. They comprise interconnected nodes, called artificial neurons, that process information and make decisions. ANNs are used to analyze patterns and make predictions from large sets of data. ANNs are an interconnected web of decision-making units, mimicking how the brain comprises interconnected neurons. These interconnected units, or neurons, take inputs, process them, and then output the result. The neurons are organized in layers, with the input layer receiving the raw data, the hidden layers processing the data, and the output layer providing the result. The connections between the neurons, called edges, have a weight associated with them, which determines the strength of the connection. The weights are adjusted during the training process to optimize the network’s performance. The first artificial neural network (ANN) was proposed in 1943, but the first practical ANNs, called perceptrons, were developed in the 1950s and 1960s by Frank Rosenblatt at the Cornell Aeronautical Laboratory (Rosenblatt, 1958). The early perceptrons had several limitations, and research on ANNs declined in the 1970s. In the 1980s ANNs saw renewed interest and advancements due to the advent of more powerful computers and the development of methods to train the network to perform tasks. This significant improvement in computing power is one of the two primary fuel sources for the progress we can see today in the reach and depth of AI.
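The mechanics described above (weighted edges adjusted during training) can be shown with a single artificial neuron in the spirit of Rosenblatt’s perceptron. The data, learning rate, and epoch count below are illustrative choices, not canonical values:

```python
# A minimal single-neuron perceptron with the classic error-driven update rule.
def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # the error drives the weight updates
            w1 += lr * err * x1         # adjust the edge weights ...
            w2 += lr * err * x2
            b += lr * err               # ... and the bias
    return w1, w2, b

# Learn logical AND from its four input/output pairs:
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(and_data)
predict = lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in and_data])   # [0, 0, 0, 1]
```

A single neuron can only learn linearly separable functions such as AND; the multi-layer networks described above, with their hidden layers, were developed precisely to overcome this limitation of the early perceptrons.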
3.4.2
The Technology
Computer Science technology is a primary fuel for the growth of AI we are seeing today. Let’s use three of its most common metrics. One is the speed of a computer’s central processing unit (CPU), measured in hertz (Hz). In the early 1980s, the CPU speed of a typical personal computer was around 8 MHz (8 million hertz). Today, the CPU speed of a standard personal computer is about 3–4 GHz (3–4 billion hertz).
Fig. 3.2 Cost of computing: cost per gigabyte (GB) of storage, falling from hundreds of dollars per GB in the early 1990s to a few cents by 2012. Source Author’s elaboration on Deloitte University Press data
Another metric is the ability to perform floating-point operations per second (FLOPS), measuring a computer’s computational power. In the early 1980s, a typical personal computer could perform on the order of 100,000–1,000,000 FLOPS (0.1–1 MFLOPS). Today, the most powerful supercomputers can perform over a hundred quadrillion FLOPS (hundreds of petaFLOPS). The third metric is related to computer memory: the cost per byte of memory has decreased by a factor of around 10 million, and the cost per unit of storage capacity has dropped by a factor of about 100,000 (Fig. 3.2).
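As a back-of-the-envelope illustration, these are the growth factors implied by the figures above (the numbers are the approximate ones quoted in the text, not precise measurements):

```python
# Approximate orders of magnitude from the metrics discussed above.
cpu_1980s_hz = 8e6      # ~8 MHz personal computer
cpu_today_hz = 3.5e9    # ~3-4 GHz personal computer
flops_1980s = 1e5       # ~0.1 MFLOPS personal computer
flops_super = 1e17      # ~100 petaFLOPS top supercomputer

print(f"CPU clock: ~{cpu_today_hz / cpu_1980s_hz:.0f}x faster")
print(f"FLOPS, 1980s PC vs. supercomputer now: ~{flops_super / flops_1980s:.0e}x")
```

The clock-speed gain alone (a few hundredfold) understates the progress: the FLOPS comparison spans roughly twelve orders of magnitude, which is the kind of headroom modern AI workloads rely on.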
3.4.3
The Data
We learn from experience; machines “learn” from data. While “symbolic” AI relies on a symbolic representation of knowledge to perform tasks, Machine Learning uses data to find the patterns that best match the request received. Machine Learning systems have two essential components: the data to be used for pattern recognition and the algorithms to detect patterns in the data and match them with the request in the task. Most of what we do today has at least a digital layer. The digital common denominator of our lives is the Internet and its communication protocol. While January 1, 1983 is considered the official birthday of the Internet (Channel, 2019), the beginning of the Web as a publicly available service was on August 6, 1991 (Channel, The World’s First Web Site, 2018), when Berners-Lee published the first-ever website. Berners-Lee was born in London, studied physics at Oxford and worked at CERN in the 1980s, where he observed how tough it was to keep track of the projects and computer systems of the organization’s thousands of researchers, who were spread around the globe. He worked with Robert Cailliau, a Belgian engineer at CERN, to refine the idea of a hypertext, hyperconnected system. By the end of 1990, Berners-Lee, using a computer from NeXT, the company founded by Steve Jobs,
developed the key technologies that are the foundation of the Web, such as Hypertext Markup Language (HTML) to create Web pages; Hypertext Transfer Protocol (HTTP) to transfer data across the Web; and Uniform Resource Locators (URLs), that is, “web addresses”, to find a document or page. In 1993, a team at the University of Illinois’ National Center for Supercomputing Applications released Mosaic, the first Web browser to become popular with the public. In December 1995, we had 16 million Internet users, about 0.4% of the world population. In December 2022, we had 5,544 million users, 68% of the world’s population (Stats, n.d.). This is feeding the growth of data, the essential component for Machine Learning. The total amount of data created, captured, copied, and consumed globally is estimated at 64.2 zettabytes in 2020. For 2025, global data creation is projected to grow to more than 180 zettabytes (Statista, 2022). In the mid-80s, we had 2.6 exabytes, that is, 0.0026 zettabytes (Wikipedia, 2023). The vast majority was in private data centers, with no or limited availability for the general public. With that, there is not much Machine Learning to do (Fig. 3.3). We’ll go back to the data in one of the following chapters.
Fig. 3.3 Growth of data. Source Author’s elaboration on IDC 2022 data
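The growth factors hidden in these data-volume figures are worth making explicit. A quick sketch using the approximate numbers quoted above:

```python
# Global data volume, in zettabytes (approximate figures quoted above).
zb_mid_1980s = 0.0026        # ~2.6 exabytes
zb_2020 = 64.2
zb_2025_projected = 180.0

print(f"mid-1980s to 2020: ~{zb_2020 / zb_mid_1980s:,.0f}x more data")
print(f"2020 to 2025 (projected): ~{zb_2025_projected / zb_2020:.1f}x more")
```

Four orders of magnitude more raw material is a large part of why the same core algorithms behave so differently today than they did in the mid-eighties.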
3.5
Brain and Mind
The brain is the physical organ in our skull and is responsible for controlling and coordinating all the body’s functions. It comprises billions of cells called neurons, which communicate with each other to handle everything from movement and sensation to
thought and emotion. The brain regulates physiological processes, including breathing, heart rate, and blood pressure. It also controls the body’s movements, senses and reflexes. The mind, on the other hand, is a concept that refers to the subjective experience of being aware of one’s thoughts, feelings, and perceptions. It is the realm of consciousness and self-awareness, where we experience the world around us and make sense of it. The mind is responsible for the processes that give rise to thoughts, emotions, and perceptions, and it includes mental processes such as attention, memory, decision-making, and problem-solving. When we talk about AI, we talk about the combination of the two. In current ML, the brain is represented by artificial neural networks (ANN). ANNs are a type of machine learning algorithm modeled after the structure and function of the human brain. They consist of layers of interconnected nodes, called artificial neurons, designed to simulate how neurons in the brain process and transmit information. Neural networks are used in various ML applications, such as image recognition, speech recognition, and natural language processing. In the vast majority of existing applications, ANNs are mathematical models running on a traditional computer architecture (the “von Neumann architecture”, defined in 1945). Neural networks are not the only way to represent brains: in a “symbolic” approach, the brain could be represented by rules, taxonomies, or similar structures. In AI, the mind is often represented by algorithms and models that simulate cognitive processes such as perception, attention, memory, decision-making, and problem-solving. These algorithms and models are designed to replicate how the human mind works and are used to create intelligent systems such as expert systems, natural language processing systems, and machine learning models. Are the representations of “brains” and “minds” in the current AI systems functionally comparable with ours?
And also, does “intelligence” need to be human-like to be considered “intelligent”? Animals have behaviors we call “intelligent”, but we do not consider them to have the same type of intelligence humans (most of the time) have. Judging by the current most advanced systems, we could conclude that current “AIs” may hold more information than most humans, but they often fail at basic logic. Let’s define some more aspects (Fig. 3.4).
Fig. 3.4 Brain and mind. Source Wikimedia Commons; Description: Robert Fludd, Utriusque cosmi maioris scilicet et minoris […] historia, tomus II (1619), tractatus I, sectio I, liber X, De triplici animae in corpore visione; Source Utriusque cosmi maioris scilicet et minoris […] historia, tomus II (1619), tractatus I, sectio I, liber X, De triplici animae in corpore visione; Author: Robert Fludd; Date: 1619; Public domain because it was published in the US before 1928
4
Science, Technology and Society
4.1
The Golden Triangle
Source Base: Wikimedia Commons; Description: Artificial Intelligence/Visual Computing/Machine Learning; Original idea: Hanna Mergui and Jérémy Barande; Models: Hanna Mergui, Haiyang Jiang and Mathieu Gierski; Makeup artist: Léa Lechan; Source Master Artificial Intelligence and Visual Computing, École polytechnique; Author: École polytechnique, Paris; Date: 23 February 2023, 11:54; licensed under the terms of the cc-by-2.0; edited using Picsart
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 C. Lipizzi, Societal Impacts of Artificial Intelligence and Machine Learning, Synthesis Lectures on Computer Science, https://doi.org/10.1007/978-3-031-53747-9_4
Science and technology have played a critical role in the advancement of society. Technology has improved communication, transportation, and medicine, increasing productivity and economic growth. Science has improved our understanding of the natural world and led to new disease treatments and cures. Together, they have improved the human condition and led to a better quality of life worldwide. Let’s talk first about the relationship between science and technology. Science and technology are deeply intertwined. Science provides the theoretical foundations for developing new technologies, and technology provides the tools and methods for scientists to test their theories and make discoveries. Science systematically studies the natural world through observation, experimentation, and theoretical explanation. Scientists use the scientific method to gather data and make hypotheses about the natural world, and they use this information to develop theories that explain how the natural world works. Technology, on the other hand, is the application of scientific knowledge to design and develop tools and systems. Engineers and inventors use scientific principles to design new technologies and create practical solutions to problems. The relationship between science and technology is often described as a “feedback loop”: scientific discoveries lead to new technological developments, and these developments lead to new scientific discoveries. For example, the invention of the microscope in the seventeenth century led to findings in the field of biology, and the development of DNA sequencing technology in the twentieth century has led to discoveries in genetics. In the last 50 years, we have seen that science and technology are evolving at different paces. Both are advancing rapidly, but at different speeds.
While scientific discoveries are often made steadily, technological advancements can occur more quickly, especially in recent years with the rapid development of computer technology. Scientific discoveries are, in most cases, built upon previous knowledge, and scientific research can take years or even decades to complete, often requiring significant funding and resources. Technological advancements, by contrast, often occur more quickly, as engineers and inventors use the latest scientific knowledge to design new technologies and create practical solutions to problems. New technologies are often driven by market demand, and companies and organizations are willing to invest significant resources to develop technologies that can give them a competitive advantage.
4.2
A “Datafied” Society
As previously mentioned, the digital tools we use daily create data from everything we do at an unprecedented rate. Every day, 2.5 quintillion (10^18) bytes of data are created, and 90% of the data in the world today was created within the past two years. The digital transformation process that started 20+ years ago is creating a new kind of economy based on the “datafication” of virtually any aspect of human social, political and economic activity, due to the information generated by digitally connected individuals, companies, institutions and machines. To quantify the value of the data we generate and use: based on a study from Acumen Research and Consulting published in December 2022 (Consulting, 2022), the Global Big Data Market Size accounted for USD 163.5 Billion in 2021 and is projected to reach a market size of USD 473.6 Billion by 2030, growing at a CAGR of 12.7% from 2022 to 2030. The Global Data Analytics Market Size accounted for USD 31.8 Billion in 2021 and is projected to reach a market size of USD 329.8 Billion by 2030, growing at a CAGR of 29.9% from 2022 to 2030 (Consulting, Data Analytics Market Size, 2022). This continuous and pervasive use of data is the “datafication” our society is experiencing. “Datafication” refers to turning data into a valuable commodity that can be collected, stored, analyzed, and used to inform decision-making and drive innovation. It is the process of turning various aspects of human activity and the physical world into data and making it available for analysis and use. Datafication is driven by the pervasiveness of digital technology: internet, mobile devices, sensors/Internet of Things, wearables and other data-capturing devices. These technologies have made it possible to collect and store vast amounts of data on individuals, organizations, and the physical world. Datafication has led to many benefits, such as improved efficiency, cost savings, and new insights into human behavior and the physical world.
It has also led to the development of new products and services, such as personalization and smart cities, and it has supported research and analysis in fields such as healthcare, finance, and education. It raises significant ethical and societal concerns like privacy, security, and bias. With the increasing amount of data collected and stored, it is crucial to ensure that data is used responsibly, ethically, and transparently. The use of data should be aligned with the values and principles of society, such as respect for individuals’ privacy and autonomy, transparency, and non-discrimination.
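The market projections quoted above follow the standard compound-growth arithmetic: future = present × (1 + rate)^years. A quick sanity check of the big-data figure (a sketch; the small gap from the quoted USD 473.6 Billion likely comes from rounding in the published CAGR):

```python
# Compound annual growth rate (CAGR) projection: future = present * (1 + r) ** years
def project(present: float, cagr: float, years: int) -> float:
    return present * (1 + cagr) ** years

# Big data market: USD 163.5B in 2021, 12.7% CAGR over 2022-2030 (9 years).
print(round(project(163.5, 0.127, 9), 1))   # ~479.5, close to the quoted 473.6
```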
4.3
An Evolving Society
Technology has played a crucial role in the evolution of society by allowing humans to adapt to their environment, overcome limitations, and improve their quality of life. The development of new technologies has shaped society and continues to shape society in new and different ways (see Fig. 4.1). Technology and the evolution of society are closely intertwined. Technology is a tool that humans use to adapt to their environment and improve their quality of life; it encompasses the creation and use of tools, techniques, and systems to solve problems and achieve goals. The development of new technologies has played a vital role in shaping human society throughout history. As humans developed new technologies, they could adapt to their environment and overcome limitations. For example, the invention of the wheel allowed humans to transport goods and people over greater distances, which led to the growth of trade and commerce. The development of agriculture allowed humans to settle in one place and form permanent communities, which led to the growth of urbanization and civilization (see Fig. 4.2).
Fig. 4.1 Science and technology and society. Source Wikimedia Commons; Description: The School of Athens; Source Own work; Author: Raphael (Photographer The Yorck Project); Date: 1509–1510; public domain
Fig. 4.2 Role of technology in society. Source Wikimedia Commons; Description: Long Waves of Social Evolution; Source Own work; Author: Myworkforwiki; Date: 24 May 2016; licensed under the Creative Commons Attribution-Share Alike 4.0 International license
The invention of the printing press made education and knowledge more widely available, which led to the spread of information and ideas. As societies evolve, they adopt new technologies, which in turn shape those societies in different ways. For instance, the internet and social media have changed how we communicate and access information, leading to new forms of social interaction and organization. The development of artificial intelligence and machine learning has led to new forms of automation and decision-making, which have the potential to change the way we live and work. The pace of technological advancement has accelerated in the last 50 years. This acceleration can be attributed to several factors, including:
• Increased investment in research and development: Governments, businesses, and individuals have invested more money in research and development in recent years, which has led to more rapid technological advancement.
• Advancements in computer technology: The development of faster and more powerful computers has led to advances in many other areas of technology, such as artificial intelligence, robotics, and biotechnology.
• The rise of the Internet and the digital age: The internet has allowed people to share information and collaborate on projects in ways that were never possible before. This has led to faster development and dissemination of new technologies.
• The globalization of technology: Globalization has increased competition among nations, companies and individuals, leading to a faster pace of innovation and technological advancement.
• The emergence of new technologies: New technologies such as artificial intelligence, biotechnology, and nanotechnology have opened new possibilities for innovation and technological advancement.
5
AI and Society Today: Friend or Foe
5.1
Is AI Today a Game Changer for the Society?
Source Base: Wikimedia Commons; Description: Portrait bust of the emperor Hadrian with idealistic features. About 130 A.D. Inv. No. 249. Athens. National Archaeological Museum (1-4-2020); Source Own work; Author: George E. Koronaios; Date: 4 January 2020, 13:42:01; licensed under the terms of the cc-by-2.0 Base: Wikimedia Commons; Description: Athens Stone Sculpture Gallery, National Archaeological Museum of Greece, Athens, Greece; Source Bronze Statue of Roman Emperor Augustus, 12-10 BC; Author: Gary Todd from Xinzheng, China; Date: 20 July 2016, 17:20; licensed under the terms of the cc-by-2.0 Edited using Picsart
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 C. Lipizzi, Societal Impacts of Artificial Intelligence and Machine Learning, Synthesis Lectures on Computer Science, https://doi.org/10.1007/978-3-031-53747-9_5
The first question to ask is why and how AI can be more of a game changer than other existing or coming technologies. Several fascinating technologies have emerged recently, for example, blockchain, 5G, and quantum computing. Blockchain can provide a secure and transparent way to store and share data and manage transactions; 5G technology can provide a high-speed and low-latency communication infrastructure; and quantum computing can perform complex calculations that are impossible for classical computers. All of those are very relevant but impact only specific aspects of the long chain of elements creating our society’s products, services and lifestyles. AI has the potential to be a game changer in a way that other technologies may not be, due to its ability to “learn” from and make decisions based on data. The following are some of the characteristics making it particularly relevant.
• Data-driven decision-making and self-improvement: AI is heavily data-driven, for good and for bad. Changing the data, for example by adding new cases, changes the behavior, improving its accuracy and efficiency.
• Automation: AI can automate tasks previously done by humans, such as manufacturing, transportation, and customer service. This can lead to improved efficiency and cost savings but also raises questions about job displacement and economic inequality.
• Personalization: AI can be used to personalize user experiences, such as targeted advertising, personalized recommendations, and customized products. This level of personalization is more impactful than other technologies that can only address broader clusters or take a one-size-fits-all approach.
• Interaction with other technologies: AI can be integrated with other technologies, such as the Internet of Things and robotics, to create intelligent systems that can sense, reason and act.
While the interaction or integration with other technologies is not a distinctive characteristic of AI, in this case, the integration would add data points to the base the AI uses to provide its services. In perspective, this is fascinating—providing theoretical infinite expanding capabilities—but also frightening, with sci-fi scenarios of AI expanding its “knowledge” in a non-human-controlled scenario. . Scalability: AI systems can process and analyze vast amounts of data, which is not feasible for humans. This can lead to new insights and discoveries that were not possible before. Humankind experienced several technology-driven revolutions, with the industrial one being one of the most representative and—to a certain extent—similar to the potential impact of AI. The industrial revolution was a period of significant economic, social, and technological change in the late 18th and early nineteenth centuries. It began in Great Britain and spread to other parts of Europe and North America and then to the rest of the world. Let’s summarize the areas where it impacted the most.
5.1
Is AI Today a Game Changer for the Society?
39
- Economic change: The industrial revolution led to a significant increase in productivity and economic growth, as new technologies and innovations in manufacturing, transportation, and communication led to the development of new industries and the expansion of existing ones. This drove a shift from an agrarian economy to an industrialized one, emphasizing manufacturing and industry.
- Social change: The industrial revolution led to significant social change as people moved from rural areas to urban centers in search of work. This drove the rise of urbanization and of a new working class employed in factories and mills. It also changed how people lived, with the rise of the middle class and the decline of the traditional social hierarchy.
- Technological change: The industrial revolution was characterized by a number of technological advancements and innovations, such as the steam engine, the spinning jenny, the power loom, and the steam-powered railway. These technologies enabled the mass production of goods, leading to increased productivity and economic growth.
- Impact on the world: The industrial revolution had a profound impact on the planet, shaping the modern world and the future of society. It led to significant economic growth and improved living standards, but also to substantial social and environmental challenges like child labor, pollution, and social inequality.
- Impact on other fields: The industrial revolution also affected other fields, such as politics, art and culture, and science. It led to the rise of liberal democracy, new forms of art and literature, and the development of new scientific and technological ideas.

Can we compare the industrial revolution with the potential impact of AI? They happened in different periods, which means society was different, the attitude toward technology was different, and the world was minimally connected compared to today. Let's compare them in some key areas:
- Job displacement: Just as the industrial revolution displaced jobs in agriculture and craft industries, AI has the potential to automate tasks previously done by humans, such as manufacturing, transportation, and customer service. Both can improve efficiency and cost savings but also raise important questions about job displacement and economic inequality.
- Economic impact: The industrial revolution and AI have similar potential to drive economic growth and change the way we live and work. The industrial revolution led to the development of new industries and increased productivity; AI is expected to have a similar impact on the economy by increasing automation, productivity, and efficiency.
- Social impact: The industrial revolution and AI can both change society's social structure. The industrial revolution led to the rise of urbanization and a new working class, while AI is expected to change how we work and interact with technology.
This could shift our focus to higher-level tasks, potentially acting as a large accelerator of advancements and leading to a better average quality of life.
- Ethical and societal concerns: The industrial revolution and AI both raise significant ethical and societal concerns, such as privacy, security, bias, and accountability. The industrial revolution led to concerns about working conditions, the use of child labor, and the exploitation of natural resources, while AI raises concerns about job displacement, data privacy, and the potential for unintended consequences. This will be expanded on later in the book, but giving more high-level tasks to automatic systems able to increase their abilities—by ingesting more data—and acting on hard- or impossible-to-trace behaviors/algorithms is a growing issue.
- Long-term impact: The industrial revolution and AI have had and will continue to have a long-term impact on society. The industrial revolution shaped the modern world, and AI is expected to shape the future of society.
5.2 Where is the Impact
Given the potential for high impact, where could this impact be most relevant? One area where AI is already having a significant impact is automation. AI-powered machines and systems can automate tasks previously done by humans, such as manufacturing, transportation, and customer service. This can improve efficiency and cost savings, but it raises important questions about job displacement and economic inequality.

AI can also be used to improve healthcare by helping to diagnose diseases, predict patient outcomes, and enhance the delivery of healthcare services. For example, AI-based computer-aided diagnosis systems can assist radiologists in detecting cancer, AI-powered chatbots can provide personalized health advice, and AI-powered robots can help in surgeries. Another area where AI has the potential to be a game changer is smart cities, where it can be used to optimize the use of resources and infrastructure, such as traffic management, energy management, and public safety. AI also has the potential to improve education by providing personalized learning experiences and assisting teachers in the classroom.

AI is, and will increasingly be, a tool we can use for whatever we do that leverages data. We mentioned the potentially positive contributions to society, but there could also be uses that negatively impact us, such as:

- Job displacement: AI-powered machines and systems can automate tasks previously done by humans, such as manufacturing, transportation, and customer service. This can improve efficiency and cost savings, but it raises important questions about job displacement and economic inequality.
- Bias and discrimination: AI systems can perpetuate and amplify existing biases in the data they are trained on, leading to unfair and discriminatory outcomes. This can have serious consequences, primarily when AI systems are used in decision-making, such as hiring, lending, and criminal justice.
- Privacy and security: AI systems can collect and process vast amounts of personal data, raising important questions about privacy and security. If data is not adequately protected and secured, it can be vulnerable to breaches and misuse.
- Lack of accountability: AI systems can make decisions that affect people's lives, but it can be challenging to understand how the system arrived at a particular conclusion or to hold the system accountable when things go wrong. This can make it difficult to ensure that AI systems are developed and used responsibly, ethically, and transparently.
- Autonomy and accountability: As AI systems become more advanced, they may be able to operate autonomously, making decisions on their own. This can raise critical ethical questions about accountability, responsibility, and potential unintended consequences.
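The bias-perpetuation concern above can be made concrete with a minimal sketch. All names and numbers here are synthetic and purely illustrative: a system that simply "learns" historical approval rates per group will faithfully reproduce whatever disparity those records contain.

```python
# Toy illustration of bias perpetuation (synthetic data, illustrative only).
# Historical "hiring" records: group A was approved far more often than group B.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

def train(records):
    """'Learn' the per-group approval rate from historical decisions."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [y for g, y in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group, threshold=0.5):
    """Approve when the group's historical approval rate exceeds the threshold."""
    return 1 if rates[group] >= threshold else 0

model = train(history)
print(model["A"], model["B"])   # 0.8 0.3
print(predict(model, "A"))      # 1 -> group A always approved
print(predict(model, "B"))      # 0 -> group B always rejected
```

The "model" never sees an individual's qualifications, only group membership, yet its decisions look confident and consistent; this is the mechanism by which historical disparities survive automation.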
5.3 Case Study 1: The Friendly AI
Source Base: Wikimedia Commons; Description: Portrait bust of the emperor Hadrian with idealistic features. About 130 A.D. Inv. No. 249. Athens. National Archaeological Museum (1-4-2020); Source Own work; Author: George E. Koronaios; Date: 4 January 2020, 13:42:01; licensed under the terms of the cc-by-2.0
New York City has become one of the most technologically advanced cities in the world thanks to the implementation of a highly advanced and sophisticated AI system. The system, called "Adrian," has been designed to improve the lives of New Yorkers in various ways.

One of the ways in which Adrian helps the city is through its advanced transportation system. It uses real-time data and machine learning algorithms to optimize traffic flow and reduce road congestion. This has made commuting faster and more efficient for residents of the city. The system also coordinates with autonomous vehicles and public transit, making transportation much safer and more convenient.

Another way in which it helps New Yorkers is through its advanced healthcare system. The system uses AI-powered diagnostic tools to help doctors and medical professionals detect diseases and conditions at earlier stages, leading to more effective treatments and better patient outcomes. It also helps healthcare professionals manage patient data securely and efficiently, ensuring that each patient's information is protected and easily accessible.

Adrian is also responsible for monitoring and managing the city's infrastructure, such as the power grid and water supply. It uses predictive analytics and real-time data to identify potential issues before they become problems, allowing the city to address them proactively and efficiently.

Adrian has been programmed with comprehensive ethical guidelines to ensure its actions align with society's values and principles. These include a commitment to transparency and accountability, ensuring that all its actions are open and understandable to the people it serves. It also prioritizes the privacy and security of individuals, protecting sensitive data and personal information.
It helps promote social equity and justice, identifying and addressing systemic biases and working to ensure that all people have access to the resources and opportunities they need. It also recognizes the importance of environmental sustainability, promoting actions and policies supporting the city's and the planet's health and well-being.

A consortium of corporations, government entities, and private investors has financed Adrian. The corporations are a combination of companies at the forefront of AI research and development and companies that benefit from adopting AI technologies. These corporations may see Adrian as a strategic investment that aligns with their business objectives and allows them to gain a competitive edge in the market.

Government entities—including city, state, and federal government—tapped into the available budgets for infrastructure, research/innovation, and economic development. They consider Adrian a solution to address various challenges and improve the quality of life for citizens.

Private investors played another relevant role in funding the project. This approach generated public interest and support for the project and, at the same time, distributed the financial responsibility among a broader base of stakeholders. Crowdfunding was the primary source of private investment.
One of the strategic decisions in the development of the system was who would be in charge of the governance and oversight of Adrian. A Board of Trustees was created at the beginning of development, with representatives of all the stakeholders. They defined the key metrics to evaluate the efficiency/effectiveness of the system and its financial sustainability, as well as public feedback and stakeholders' agnosticism, and they defined acceptable ranges for each metric. Defining the metrics required a significant effort from the Board, and they are publicly available. The Board appointed an Executive Director with the mandate to stay within the metric ranges, with quarterly checks and a non-renewable 2-year appointment.
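The oversight mechanism described above, publicly defined metric ranges with periodic checks, could be sketched along these lines. The metric names and ranges below are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of the Board's oversight mechanism: each metric has a
# publicly defined acceptable range, and a periodic check flags any breach.
ACCEPTABLE_RANGES = {  # metric name -> (low, high); values are illustrative
    "system_uptime_pct": (99.0, 100.0),
    "public_approval_pct": (60.0, 100.0),
    "cost_overrun_pct": (0.0, 10.0),
}

def quarterly_check(observed):
    """Return the list of metrics falling outside their acceptable range."""
    return [name for name, (low, high) in ACCEPTABLE_RANGES.items()
            if not low <= observed[name] <= high]

breaches = quarterly_check({"system_uptime_pct": 99.5,
                            "public_approval_pct": 48.0,
                            "cost_overrun_pct": 4.0})
print(breaches)   # ['public_approval_pct']
```

The design choice the case study hints at is that the ranges, not the Director's judgment, are the contract: anything outside a published range is automatically a governance event.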
5.4 Case Study 2: The Unfriendly AI
Source Base: Wikimedia Commons; Description: Portrait bust of the emperor Hadrian with idealistic features. About 130 A.D. Inv. No. 249. Athens. National Archaeological Museum (1-4-2020); Source Own work; Author: George E. Koronaios; Date: 4 January 2020, 13:42:01; licensed under the terms of the cc-by-2.0 Edited using Picsart
New York City recently introduced a highly advanced and sophisticated AI system. The system, called “Kal,” has been designed to control the critical activities in the city. Kal was created by a consortium of corporations and donated to the city with the promise of improving the lives of its residents. It was designed to handle various tasks, from traffic management to emergency response coordination and was equipped with advanced sensors and communication technologies to provide real-time data and analysis.
At first, the city residents welcomed Kal with open arms, seeing it as a solution to many of their problems: trains on time, optimized traffic, timely street cleaning, optimized distribution of police forces, free advance reservation of all city resources, and a chatbot to help citizens. But soon, concerns began to emerge about the ethical implications of an AI system controlled by corporate interests. Some worried that Kal might prioritize profit over people, leading to decisions that put residents in harm's way or neglected the needs of marginalized communities.

To address these concerns, the consortium embedded a set of ethical guidelines into Kal's programming, outlining principles of fairness, transparency, and respect for human rights. The rules were never made public, and many remained skeptical of their relevance. The consortium started deploying similar systems to smaller municipalities in the Tri-state area, intending to build a network of systems overseeing the whole region.

As Kal continued operating in the city, citizens began to express concern and opposition to the AI's presence. They argued that an AI created by corporations and donated to the city could not be trusted to act in the citizens' best interests. Some even accused Kal of being a tool for corporate interests, exploiting the people.

After Kal had been operating for a few years, its reach became so pervasive that the city government found it nearly impossible to remove it from the city's infrastructure. Despite the growing public outcry against Kal's unethical behavior, the corporations that created and donated the AI were reluctant to shut it down. For the first time in a long time, people were leaving the Tri-state area, migrating to states where Kal was not deployed. This significantly impacted the local economy, pushing NYC to a financial situation unseen since the financial crisis of the 1970s.
The city government realized they had made a colossal mistake by not vetting Kal first and by then letting it operate without proper oversight. They tried to rectify the situation by implementing new regulations and guidelines for AI usage. However, the damage had already been done, and the city was left with the consequences of its actions. The corporations behind Kal lost interest in the system, given the minimal revenues they could extract from an economy on the brink of bankruptcy, and they abandoned it, "donating" it to the City with no support. The City Administration and the good New Yorkers are now facing a problematic rebuild of their beloved NYC.
5.5 Fact Check 1: "Smart" Cities
Adrian and Kal in the fictional situations above operate in the area we today call "smart cities". Making cities "smart" is not just a marketing claim but a necessity to make growing urban areas more efficient in a world with increasing resource demand. As of 2022, 55% of the global population resides in urban areas, a share projected to rise to 68% by 2050 (according to the United Nations Department of Economic and Social Affairs, 2018). By 2030, the UN estimates there will be more than 660 city areas with a population of one million people and 41 megacities with
a population of more than ten million people. This level of urbanization will significantly influence the environment, management, healthcare, energy, education, and security of cities. Smart cities will then become a necessity for those urban aggregations.

To give more structure to this "fact-checking", I will consider two sources of information: what is available today and what seems to be coming soon, reading scientific publications. The "what is available" part reviews some current examples of the use of AI in the given domain, like "smart cities" in this paragraph.

Let's start with "what is available" today. There is nothing like Adrian or Kal (yet) in the world. Several cities are exploring and, to some extent, implementing elements to make them "smarter". The concept of a smart city has been around since the 80s, using the available technologies to make cities more efficient, livable, and sustainable. In the 80s, the smartness was in early traffic control systems—traffic lights and signal synchronization—and non-connected urban management systems, such as energy monitoring and building management. With more technologies available, cities became "smarter", with connectivity, a standard communication protocol (IP), and better computing power being the major driving forces. The Internet of Things, improved Intelligent Transportation Systems, and the first Smart Grid deployments have been the key elements.

All that has been discussed so far is currently applied, or in the process of being applied, to the smartness of cities. While the individual systems—transportation, grid, security—are significantly improving thanks to advanced analytics/AI—even if far from Adrian or Kal—there is resistance to integrating them. The opposition is along three lines:

- Privacy. People are concerned about the amount of data collected by smart city integrated systems. This data can track people's movements, monitor their activities, and possibly predict their behavior.
Even assuming the government would use the data in the most ethical way possible, no system is 100% secure, and intruders could take the data and misuse it.
- Cost. Most cities have chronic issues requiring significant financial resources that are draining their budgets to zero. Smart city systems can be expensive to implement and maintain. Some people are concerned that the cost of these systems will be passed on to taxpayers or residents, adding financial pressure that may force people to move elsewhere.
- Acceptance. This has deeper roots. Some people are not comfortable with the idea of living in a city that is heavily reliant on technology. There are concerns related to losing control and autonomy. Another reason for concern is unequal access and the technological divide, with low-income communities or marginalized groups that may lack access to infrastructure or face barriers in utilizing smart city services, increasing existing inequalities. There are also concerns related to trust and reliability, and to the lack of citizen participation in decision-making.
All said, investments in smart cities are growing fast. Based on Applied Analytics (LLP, 2023), the global smart cities industry was valued at $160 billion in 2021 and is estimated to reach $708 billion by 2031, an estimated compound annual growth rate of 16.2%. Most of the growth is fueled by AI.

In 2019, IMD—one of my alma matres…—launched a Smart City Index to assess smart cities' economic and technological aspects and their "humane dimensions" (quality of life, environment, and inclusiveness). Zurich, Switzerland, is the city with the highest value; Copenhagen, Denmark, and Singapore are also very high. New York City ranks the highest in the US (21st worldwide).

What makes Zurich "smart"? IMD conducted a survey to analyze the criteria for being a successful smart city, asking citizens whether a particular aspect of the city was satisfactory or not. The survey divided the questions into two categories: Structures and Technologies. Structures questions relate to non-technology solutions/initiatives launched by the city, such as "Do most of the children have access to a good school?". Technology questions relate to technology-driven solutions the city implemented, such as "Has online voting increased participation?". For each of the two categories, the questions covered five topics: Health and Safety, Mobility, Activities, Opportunities (work and school), and Governance.

In none of the five topics did Technology perform better than Structures. That means citizens think the technology did not solve the problems it was created for. Nevertheless, the city (Zurich) is considered one of the "smartest". If we use this survey as an example of evaluating the success of a smart city program, we could conclude that the non-technology aspects are the ones making a program more successful. A city can be "smart" even if that does not mean having technology as its nervous system/brain. Livability and sustainability play a significant role here.
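As a quick sanity check, the compound annual growth rate implied by the market figures quoted above ($160 billion in 2021 to $708 billion in 2031) can be recomputed in a few lines; it lands close to the reported 16.2%, with the small gap plausibly due to rounding in the source's endpoints.

```python
# Implied CAGR for the smart-city market: $160B (2021) -> $708B (2031).
start, end, years = 160, 708, 10
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")   # 16.0%, in line with the ~16.2% reported
```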
Those “soft skills” are front and center in the IMD ranking. And this is probably the right way to evaluate the smartness of a place for humans to live in. Another way to read the results could be that somehow the technology is not ready yet to address issues citizens have. Most of the least favorable technology-related issues may have been penalized by insufficient or inappropriate data. Or they just addressed the right issue with the wrong technology. People don’t care about the technology per se. They care about solving their problems. Bottom line, what the IMD survey is telling us is that pumping technology in a city does not make it “smart” per se, if we consider livability as a major metric for a city to be “smart”. But what really is a “smart city”? According to IBM, a smart city “makes the most use of all the linked data available today to understand better and regulate its operations while maximizing the use of limited resources”.
That means technology could move the bar higher in the "smartness" of cities. Adrian or Kal would be an integrated set of technologies. So, besides the "soft skills", what technologies are currently in the "smart city" landscape, and what is coming?

Gartner addresses this point in one of their "Hype Cycle" representations. Hype cycles are a graphical depiction of a common pattern in the adoption of a given new technology or innovation. There are five phases in Gartner's cycle (Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity), and the different components of the technology/innovation are placed at the stage Gartner believes they belong to. The charts are generally used to separate hype from the real drivers of a technology's commercial promise. They look like the following (see Fig. 5.1).

In their Hype Cycle for Smart Cities Technologies and Solutions 2022 (Gartner, Hype Cycles for Smart City Technologies and Solutions, 2022), Gartner places Artificial General Intelligence at the very beginning of the cycle, with an expected plateau to be reached in more than 10 years. 10+ years in technology is a long time, and placing a technology so distant in time looks more like a guess than a prediction. Approaching the Peak of Inflated Expectations are other technologies, like Digital Twin of the Citizen and Digital Twin of Government. Digital twins are the big question mark I mentioned above: do we have "all" the elements defining citizens or governments to create a "twin"? Maybe not. Gartner places them in the 5–10 years'
Fig. 5.1 Hype cycles. Source Wikimedia Commons; Description: Here is a slide I made to illustrate the Gartner Hype Cycle; Source Own work; Author: Olga Tarkovskiy; Date: 3 August 2013; licensed under the terms of the Creative Commons Attribution-Share Alike 3.0 Unported license
time frame. Edge Computing and Edge AI are great components of a smart city: they provide a faster, more targeted, and more robust solution to the city's needs. Gartner places them in the 2–5 years' time frame. Microgrids are in the 5–10 years' time frame. The growth of solar panels and other user-driven energy sourcing (and storage) creates an opportunity for a sort of federated grid. The complexity of managing the resulting system, along with the traditional resistance to change of the main energy providers, places microgrids a bit further in the future. The figure below—based on Karunasena and Herath (Herath, 2022)—provides a comprehensive view of the role of AI in smart cities (see Fig. 5.2).
[Fig. 5.2 AI in Smart City: a diagram with "AI in Smart City" at the center, connected to Smart Mobility, Smart Healthcare, IoT Technologies, Smart Governance, Smart Economy, Smart Education, and Smart Environment. Source: Author's elaboration on data from Herath (2022), Adoption of artificial intelligence in smart cities.]
But the Gartner chart tells us that most of the AI-enabled technologies that would make Adrian or Kal real are 5–10 years away. Having a system like those without an Intelligent Connected Infrastructure or a Transportation Strategy would not be possible. Nevertheless, some infrastructural technologies have been deployed in different cities. Let's see who uses what today, focusing on two cross-industry technologies: blockchain and the Internet of Things.

Blockchain could provide a generalized, sharable accounting infrastructure and be used in cities, for example, to track the provenance of food, manage land records, and improve the efficiency of government services. The following are some leading examples:

- Dubai aims to become the world's first "blockchain-powered" city by 2025. It already uses blockchain for some applications, such as land registration, identity management, and smart contracts. As of 2023, though, blockchain penetration in Dubai is still at an early stage.
- Singapore has several blockchain initiatives in place, including a blockchain-based trade finance platform and a blockchain-based smart city project. Singapore seems to be at a somewhat more advanced stage in using blockchain, but still an early one. Based on a recent report by PwC, the number of blockchain startups in Singapore has increased by 50% in the past year, with the Government being a strong supporter of those initiatives.
- Stockholm is using blockchain to improve the efficiency of its public transportation system: it tracks the location of buses and trains and provides passengers with real-time information. It also uses blockchain to track energy flows and ensure energy is used efficiently. Stockholm is in the early stages of rolling out blockchain, just like Singapore, and, according to the Stockholm Chamber of Commerce, it shows the same 50% growth in blockchain startups over the past year.
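The city use cases above all rely on the same core property of a blockchain: each block commits to the previous one via a cryptographic hash, so past records cannot be silently altered. The following is a minimal, illustrative hash chain in plain Python; the land-registry entries are hypothetical, loosely echoing the Dubai example, and real deployments add consensus, digital signatures, and distribution on top of this basic idea.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block committing to its payload and the previous block's hash."""
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and check each link to the previous block."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical land-registry entries, as in the Dubai example.
chain = [make_block({"parcel": 17, "owner": "Alice"}, prev_hash="0" * 64)]
chain.append(make_block({"parcel": 17, "owner": "Bob"}, prev_hash=chain[-1]["hash"]))
print(verify(chain))                    # True
chain[0]["data"]["owner"] = "Mallory"   # tampering with history...
print(verify(chain))                    # False: the chain detects it
```

This tamper-evidence, rather than any particular cryptocurrency, is what makes the technology attractive for land records and provenance tracking.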
Internet of Things (IoT) devices can be used in cities to collect data on various things, such as traffic, water usage, and energy consumption. This data can then be used to improve the efficiency and sustainability of city services. The following are some examples of cities using this technology:

- Singapore is one of the most advanced cities in the world when it comes to IoT adoption. It uses IoT for various applications, including traffic management, waste management, water management, and "smart" parking, with real-time information on parking availability passed to drivers.
- Chicago is one of the most advanced cities in the United States when it comes to IoT adoption. The city is using IoT for traffic management (collecting traffic data), water management (collecting data on water usage), and public safety (collecting data on crime patterns). Crime data is collected via cameras, but also via sensors measuring factors proven to correlate with crime, such as noise and light levels.
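A concrete way such IoT data improves efficiency is simple anomaly detection, for example flagging a water-flow reading that deviates sharply from the norm, which may indicate a leak. The sketch below uses synthetic readings and a plain z-score test; the numbers and threshold are illustrative, not taken from any city's actual system.

```python
from statistics import mean, stdev

def anomalies(readings, threshold=2.0):
    """Return (index, value) pairs deviating more than `threshold` standard
    deviations from the mean. A single large spike inflates both the mean and
    the stdev, so a modest threshold is used here; robust statistics (e.g.,
    median-based) would handle that masking effect better."""
    mu, sigma = mean(readings), stdev(readings)
    return [(i, x) for i, x in enumerate(readings) if abs(x - mu) > threshold * sigma]

# Synthetic hourly water-flow readings (liters/min) with one simulated leak spike.
flow = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 91.7, 50.1, 49.7, 50.0]
print(anomalies(flow))   # [(6, 91.7)]
```

The same pattern, baseline plus deviation test, underlies leak detection, grid-fault alerts, and the crime-correlated sensor streams mentioned above.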
The real question at this point is: how much AI is currently deployed in cities to make them "smart"? Let's look at two of the most advanced cities in this respect.

- Singapore—again—has implemented AI-powered systems for various aspects of urban life, including intelligent transportation systems, smart grids, and digital governance platforms. It has implemented AI-driven solutions to optimize its transportation network: the city uses advanced algorithms and real-time data analysis to manage traffic flow, predict congestion, and optimize public transportation routes, while AI-powered apps provide commuters with personalized travel recommendations and real-time updates. It has also implemented AI-driven systems to improve administrative processes and enhance citizen services: chatbot assistants powered by AI help citizens with inquiries and provide information on government services, and AI algorithms analyze large volumes of data to identify patterns and trends, enabling evidence-based policymaking. Singapore also uses AI for smart energy management, with systems that optimize energy usage and improve sustainability: they analyze data from smart meters and sensors to optimize energy distribution, detect anomalies, and enhance energy efficiency, and are integrated with the "smart" grid, ensuring a reliable and efficient electricity supply. Smart surveillance is another application: Singapore uses an AI-based surveillance system to enhance public safety and security, with advanced video analytics and facial recognition technology supporting the identification of potential threats and the monitoring of public spaces.
- Shenzhen, China, leverages AI technologies for smart city management, including traffic management, public safety, and urban planning. AI-powered surveillance systems with facial recognition capabilities are used for real-time monitoring of public spaces and enhancing security.
Shenzhen has implemented intelligent traffic management systems that utilize AI algorithms to optimize traffic flow, reduce congestion, and improve overall transportation efficiency. This includes AI-powered traffic signal control systems and real-time traffic prediction models that help manage traffic conditions and optimize the use of road networks. Integrating the different projects with a real centralized AI is ongoing.

The two leading examples are both from Asia. Why? This could be due to several factors:

- Government support: Many Asian governments have shown a solid commitment to developing smart cities and have invested significant resources in the research, development, and implementation of AI technologies. Several Asian governments have prioritized smart city projects and provided favorable policy environments and funding to drive innovation and adoption of AI.
- Technological infrastructure: Asian cities, particularly in East Asia, have advanced technological infrastructure that provides a solid foundation for implementing AI in
various domains. These cities have robust communication networks, high-speed internet connectivity, and extensive sensor networks that enable data collection and analysis for AI applications. The average internet speed in Singapore is almost twice that in the USA, nearly three times that in Germany, and eight times that in Italy.
- Urban density: Asian cities are known for their high population density, which presents both challenges and opportunities for smart city development. The need for efficient resource allocation, transportation management, and public services has driven the adoption of AI technologies to optimize urban operations and enhance quality of life.
- Innovation ecosystems: Many Asian cities have thriving innovation ecosystems with strong collaboration between academia, industry, and government. This fosters research and development in AI and related technologies, enabling faster deployment and integration of AI solutions into smart city infrastructure.
- Cultural acceptance: Asian cultures often embrace technological advancements and have shown openness to adopting emerging technologies. There is a willingness to experiment with new solutions and embrace technological advances to address urban challenges.

Let's see now what is available in scientific publications. Building on what Javed (2022) wrote, the following are the fundamental directions the development of smart cities is taking:

- Smart health: Leveraging technologies, healthcare ecosystems will be relieved as technology supports diagnostics, treatment, and proactive self-care. The focus will shift from individual-centric healthcare to community-oriented models. Data analysis will drive personalized healthcare adapted to the needs of individuals and their families.
- Smart security: The rising adoption of biometrics, facial recognition, smart cameras, and video surveillance will assist cities in detecting crime patterns, reducing response times, and forecasting criminal activities through data analysis.
- Smart energy: Cities will invest in clean energy and employ technology for real-time energy monitoring and optimization of consumption.
- Smart infrastructure: Innovative technologies will enhance existing interfaces, including green buildings, waste management systems, and traffic control, improving overall infrastructure efficiency.
- Smart citizens: Technology will facilitate improved communication between cities and residents, enabling prompt reporting of local issues, while social platforms foster connections and resource sharing.
- Smart buildings: Digital twins, smart sensors, and cloud computing will enable real-time monitoring, energy forecasting, security threat identification, and cost optimization in modern construction.
- Advanced waste management: IoT sensors will enable precise waste monitoring, informing residents about their waste generation and incentivizing them through benefit systems. AI-powered recycling robots will enhance efficiency by accurately identifying materials during waste segregation, reducing reliance on human labor.
- Advanced water management: This addresses the challenges of global warming-induced droughts. Wireless metering tools provide real-time consumption data to promote awareness and cost reduction. Intelligent control systems optimize water usage in buildings, while real-time monitoring prevents leaks. Additional measures include desalination and stormwater collection.

Brzezinski (2022) conducted interviews with EU experts on smart cities to understand how attainable those directions could be. Most of the critical adverse factors fall along the following lines:

- Social issues: Citizens may not have the proper level of trust in the system and may not participate as needed.
- Economic issues: Introducing the "smart" elements will require funds that may not be available, and introducing some elements may lead to unemployment, generating social and economic problems.
- Governance: This is a bit of the Kal case: who will manage it? Who is going to plan it?
- Urban infrastructure: No matter how advanced the new solutions are, they need to be connected to existing infrastructures, which can be obsolete and problematic to integrate.

The literature outlines promising directions for the future of smart cities, but it is far from the idea of cities centrally controlled by an artificial intelligence. The smart cities discussed in the literature emphasize technology deployment to enhance various aspects of urban life. They prioritize personalized healthcare, efficient energy usage, improved infrastructure, citizen engagement, waste management, and water conservation.
These smart city applications aim to empower individuals, foster sustainable practices, and optimize existing systems. Would all of this be integrated under centralized control? That option is still up in the air. In contrast, the sci-fi vision of smart cities controlled by AI often portrays a more dystopian scenario in which artificial intelligence governs every aspect of urban life, leading to the potential loss of individual freedoms and human decision-making. While such narratives may spark the imagination, the real-world implementation of smart cities prioritizes human-centric approaches, collaborative governance, and the ethical use of technology to enhance livability and sustainability. This will be a significant social issue, and we will decide the direction we want to take. So far, we see the ingredients of a future fully smart city being created rather than an integrated approach. The integration will go
through social and political decisions. Countries historically more favorable to centralized governance may be better candidates for integrated smart cities. Another element to consider is the growing relevance of large corporations, which can make the Kal scenario more plausible. Several factors are fueling this relevance. One is the growing concentration in key industries, creating de facto monopolies: in technology, pharmaceuticals, agriculture, media, and transportation, 3-4 players control a large portion of the market. Those companies' growing size and reach give them more power over governments and more influence on society, shaping popular opinions and culture. According to OpenSecrets, a nonpartisan research group tracking money in politics, US corporations spent $4.1 billion on lobbying in 2022 (OpenSecrets, 2023). According to a 2022 report by the Center for Responsive Politics, corporate spending on lobbying has increased by more than 180% since 1998. They spend that much precisely to influence the government. A more significant influence on society and government could pave the way to corporate-driven integrated AI managing "smart" cities. While the advantages are evident, the risk of a Kal scenario is also apparent.
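The adaptive signal control mentioned for Shenzhen can be illustrated with a toy model. The proportional allocation rule, the cycle length, and the queue figures below are all hypothetical simplifications for illustration, not a description of any deployed system:

```python
# Illustrative sketch of queue-based adaptive signal timing, one ingredient of
# the "smart traffic" systems discussed above. All numbers and the allocation
# rule are invented simplifications.

def allocate_green_time(queue_lengths, cycle_seconds=90, min_green=10):
    """Split one signal cycle among approaches in proportion to queue length,
    guaranteeing each approach a minimum green time."""
    n = len(queue_lengths)
    flexible = cycle_seconds - min_green * n   # seconds left after minimums
    total = sum(queue_lengths)
    if total == 0:                             # no demand: split evenly
        return [cycle_seconds / n] * n
    return [min_green + flexible * q / total for q in queue_lengths]

# Four approaches (N, S, E, W) with different queues of waiting vehicles.
greens = allocate_green_time([20, 5, 10, 5], cycle_seconds=90, min_green=10)
print([round(g, 1) for g in greens])  # the longest queue gets the most green
```

Real deployments combine this kind of rule with prediction models and network-wide coordination; the sketch only shows the core idea of reallocating a fixed resource based on sensed demand.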
6 The Impact on People
6.1 Job Creation
Like most high-impact technologies, AI has the potential to create new jobs. These fall into two categories: jobs directly related to the technology and jobs arising from its use.

Jobs directly related to the technology:

- Development of AI systems: AI requires a significant amount of research and development, which can lead to job creation in fields such as computer science, engineering, and data science. These jobs may include roles such as AI engineers, machine learning engineers, and data scientists.
- Maintenance and management of AI systems: Once AI systems are developed, they require ongoing maintenance and management, which can lead to job creation in fields such as IT and software development.
- Implementation of AI systems: As AI systems are implemented in different industries, new jobs may be created in fields such as consulting, project management, and business analysis.
- Human-AI collaboration: As AI systems become more sophisticated, there will be a greater need for human-AI partnership, which could lead to the creation of new jobs such as AI trainers, data curators, and explainability engineers.

Jobs indirectly related to the technology:

- New business opportunities: AI can create new business opportunities in e-commerce, online marketing, and customer service. These businesses may require new roles like data analysts, marketing specialists, and customer service representatives.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 C. Lipizzi, Societal Impacts of Artificial Intelligence and Machine Learning, Synthesis Lectures on Computer Science, https://doi.org/10.1007/978-3-031-53747-9_6
- New service opportunities: AI can create new service opportunities in healthcare, finance, and transportation. These services may require new roles, such as medical researchers, financial analysts, and transportation planners.

An analysis of the different sectors is in the following chapters. A separate analysis should be done on how AI could enhance existing jobs. Let's assume that AI will become increasingly capable of offering real semantic support in given domains. At that point, it would be a major booster and potentially a game changer for all the sectors that leverage experience to deliver services. Let's pick three examples from different industries, keeping in mind that some of the enhancements can also have disruptive impacts on them.

Management consulting (disclaimer: I worked in this industry for a long time):

- Data analysis: AI can help management consultants analyze large amounts of data and extract insights that inform their recommendations and best practices. For example, AI can analyze data from social media, customer feedback, and market trends to help consultants understand customer needs and preferences.
- Predictive analytics: AI can also help management consultants predict future trends and patterns, which can inform best practices. For example, AI can predict demand for products or services or identify patterns in customer behavior, which can inform marketing and sales strategies.
- Optimization: AI can help management consultants optimize their recommendations and best practices. For example, AI can be used to optimize supply chain logistics or the scheduling of production runs.
- Automation: AI can also help management consultants automate time-consuming and tedious tasks, such as data entry and report generation. This can free up time for consultants to focus on more strategic and value-added activities, such as client meetings and presentations.
- Simulation and modeling: AI can help management consultants by simulating and modeling different scenarios, allowing them to evaluate and compare different options and identify the best course of action.
- Human-AI collaboration: AI can enhance the work of management consultants by providing them with new tools and capabilities that can improve their performance. For example, AI can help management consultants make better decisions and support their recommendations and best practices. This can go beyond what we consider today as data analysis, helping consultants create deliverables (reports or prototypes) tailored to the known needs of the specific client.
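As an illustration of the predictive-analytics support described above, here is a minimal Python sketch: fitting a linear trend to past demand with ordinary least squares and projecting it one month ahead. The sales figures are invented for illustration.

```python
# Minimal sketch of trend-based demand forecasting, the kind of analysis AI
# tools can automate for consultants. The data is invented.

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

months = [1, 2, 3, 4, 5, 6]
units_sold = [100, 112, 119, 133, 141, 154]   # hypothetical monthly demand
a, b = fit_line(months, units_sold)
forecast_month_7 = a + b * 7
print(f"trend: {b:.1f} units/month, month-7 forecast: {forecast_month_7:.0f}")
```

Production-grade predictive analytics would use richer models and validation, but the principle is the same: learn a pattern from historical data and extrapolate it to support a recommendation.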
Fitness/personal training:

- Personalized training: AI can help personal trainers create customized workout plans for each individual based on their fitness level, goals, and preferences. AI can also monitor and track each individual's progress and adjust their workout plans as needed.
- Monitoring and tracking: AI can help personal trainers monitor and track their clients' performance, such as heart rate, sleep patterns, and calories burned. This can help trainers identify patterns and adjust training plans as needed.
- Virtual coaching: AI can also help personal trainers create virtual coaching sessions, which can make training more accessible and convenient for clients who may have limited mobility or live in remote areas.
- Automation of tasks: AI can help personal trainers automate tasks such as scheduling appointments, sending reminders, and handling billing. This can free up time for trainers to focus on more value-added activities, such as working directly with clients.
- Gamification: AI can also help personal trainers create interactive and engaging training experiences using techniques such as gamification, which can help clients stay motivated and engaged.
- Human-AI collaboration: AI can enhance the work of personal trainers by providing them with new tools and capabilities that can improve their performance. For example, AI can help personal trainers identify patterns and trends in clients' data, which can help them make better decisions and support their training plans.

Lawyers and paralegals:

- Legal research: AI can assist lawyers and paralegals in legal research by quickly scanning and analyzing large amounts of legal documents and case law, which can save time and improve the accuracy of their research.
- Contract review: AI can also help lawyers and paralegals review contracts by analyzing them for potential issues and identifying key clauses, which can improve the efficiency and accuracy of the contract review process.
- Predictive analytics: AI can predict the outcome of legal cases by analyzing past cases and identifying patterns and trends, which can help lawyers and paralegals make better-informed decisions and develop their strategies.
- Automation of tasks: AI can help lawyers and paralegals automate tasks such as document review, data entry, and case management, which can free up time to focus on more value-added activities, such as client meetings and presentations.
- Document generation: AI can help lawyers and paralegals generate legal documents, such as contracts, briefs, and pleadings, which can improve the efficiency and accuracy of the document generation process.
- Human-AI collaboration: AI can enhance the work of lawyers and paralegals by providing them with new tools and capabilities that can improve their performance. For example, AI can help lawyers and paralegals identify patterns and trends in large amounts of data, which can help them make better decisions and support their legal strategies.

While all those enhancements look very promising, they can also negatively impact the same categories of jobs. To name a few risks:

- Job displacement: AI can automate tasks such as document review, data entry, and case management, which can lead to job displacement for lawyers, paralegals, or management consultants who were previously employed in those roles.
- Reduced human expertise: AI can also erode human expertise, as more tasks are automated and fewer people are needed to perform them. This could lead to a decline in the quality of legal or consulting services, as machines may not be able to fully replace the expertise and judgment of humans.
- Dependency on technology: AI systems can also lead to a dependency on technology, which becomes problematic when the technology malfunctions or is unavailable.
- Bias and discrimination: AI systems can perpetuate and amplify existing biases and discrimination if the data used to train them is biased. For example, if the data used to train an AI system on management or legal outcomes is biased toward certain demographics, the system could make biased decisions and perpetuate discrimination.
- Lack of transparency: AI systems can be opaque, making it difficult to understand how they arrived at a decision. This can be problematic in the management consulting and legal industries, where transparency and accountability are critical to ensuring fair and just outcomes.
- Cybersecurity: AI systems can also be vulnerable to cyber-attacks, which can compromise the security and privacy of sensitive data or even alter the behavior of the AI.
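Stepping back to the enhancements side: the legal-research and contract-review support described earlier typically rests on ranking documents by textual similarity. A minimal bag-of-words sketch follows; the mini-corpus and query are invented, and production systems use far richer text representations than raw word counts.

```python
# Minimal sketch of similarity-based document retrieval, the kind of mechanism
# behind AI legal-research support. The "corpus" is invented for illustration.
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    common = set(va) & set(vb)
    dot = sum(va[w] * vb[w] for w in common)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

corpus = {
    "case_1": "breach of contract damages awarded to the supplier",
    "case_2": "patent infringement claim dismissed on appeal",
    "case_3": "contract termination clause dispute and damages",
}
query = "damages for breach of contract"
ranked = sorted(corpus, key=lambda k: cosine_similarity(query, corpus[k]),
                reverse=True)
print(ranked)  # the contract/damages cases rank above the patent case
```

The same ranking idea, scaled up with modern language models, is what lets a system surface the handful of relevant cases from millions of documents.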
6.2 Job Displacement
AI has the potential to automate tasks that were previously done by humans, leading to job displacement. Here are a few examples:
- Automation of repetitive tasks: AI can automate tasks that involve repetitive actions, such as data entry, customer service, and manufacturing. This can lead to increased efficiency and cost savings, but also to job displacement for those previously employed in those roles.
- Automation of decision-making tasks: AI can be used to make decisions previously made by humans, such as in hiring, lending, and criminal justice. This can improve accuracy and efficiency, but also result in job displacement.
- Increased productivity: AI can increase productivity in certain industries, such as manufacturing and transportation, which may lead to the downsizing of certain jobs.
- Automation of knowledge-based tasks: AI can automate tasks that require knowledge and expertise, such as medical diagnosis, financial analysis, and legal research. This, too, can improve accuracy and efficiency while displacing those previously employed.

This is not the first time society has faced a technology-driven revolution, so it is useful to compare the impact of AI on jobs with that of the industrial revolution. While both are technology-driven, there are key differences between the job displacement caused by AI and that caused by the industrial revolution. One of the main differences is that the industrial revolution was primarily driven by the development of new machinery and equipment in specific industries, while AI has the potential to be applied across fields and industries. Also, the industrial revolution happened over a longer period of time, which gave society more time to adjust, whereas the pace of technological change is much faster now, which could make it more difficult for society to adapt. Both AI and the industrial revolution can lead to a reduction in human expertise, as more tasks are automated and fewer people are needed to perform them.
This could lead to a decline in the quality of goods and services, as machines may not be able to fully replace the expertise and judgment of human workers. Both can also lead to a need for retraining and reskilling, as workers adapt to the changing nature of work. In an AI/ML job market, people should focus on developing functional skills to better use AI/ML tools, such as digital and data literacy and awareness of digital threats. They should also focus on skills typically out of AI/ML's reach, at least in the short and medium term, such as critical thinking, creativity, and curiosity.
6.3 Economic Inequality
AI has the potential to generate economic inequality in several ways. One of the main ways is through job displacement. AI can automate tasks such as data entry, report generation, and some analysis, which can lead to job displacement for workers who were
previously employed in those roles. This can result in increased unemployment and underemployment, particularly for workers with lower levels of education and skills. Another way AI can generate economic inequality is through changes in the nature of work. As AI systems become more advanced, they may be able to perform tasks previously considered the exclusive domain of humans, such as decision-making, some level of creativity, and strategic thinking. This could lead to a decline in the demand for human workers, particularly in certain sectors and industries. AI-enabled automation can lead to the creation of new types of jobs, but these jobs may require new skills and education that not all workers have, which could exacerbate income and wealth inequality. Additionally, the data and algorithms used to train AI systems can perpetuate and amplify existing biases and discrimination if the data is biased. For example, if the data used to train an AI system on hiring decisions is biased toward certain demographics, the system could make biased decisions and perpetuate discrimination in the workplace, which can lead to economic inequality. AI can also lead to a decline in wages, as machines can perform tasks more efficiently and at a lower cost than human workers. This can decrease the demand for human labor and depress wages for certain types of work. Also, the benefits of AI may not be distributed evenly across society. For example, companies and individuals who have access to AI technologies and data will have a competitive advantage over those who do not. This could lead to an increased concentration of wealth and power among a small group of people and companies, which can exacerbate economic inequality. There is evidence that a concentration in the AI industry is already happening. One of the main ways this concentration is happening is through the acquisition of AI startups by large tech companies.
These companies, such as Google, Amazon, Microsoft, and Facebook, have the resources and scale to acquire smaller AI companies and integrate their technologies into their existing products and services. As a result, these companies have become dominant players in the AI industry, with access to large amounts of data, computational resources, and engineering talent. Some of these large companies are also developing proprietary AI technologies. Many companies and organizations are investing heavily in AI research and development and, as a result, are building proprietary AI technologies that give them a competitive advantage over other companies and organizations. There is also a concentration of AI talent and knowledge: highly skilled AI experts are scarce, and the majority of them are concentrated in a few regions and countries, which could give those regions and countries a competitive advantage in the development and deployment of AI technologies. This concentration of power in the AI industry can have negative impacts on competition, innovation, and social welfare. It can also lead to increased economic inequality and
can perpetuate and amplify existing biases and discrimination if the data used to train the systems is biased. Governments, businesses, and society should be aware of this trend and take steps to ensure that the benefits of AI are shared more widely and that the negative impacts are minimized. This can include measures such as implementing regulations that promote competition, investing in education and training programs to increase the number of AI experts, and encouraging the development of open-source AI technologies. International organizations are working to establish principles and guidelines for the development and use of AI, and they also provide a platform for the exchange of information and best practices. However, the regulation of AI is still a relatively new field, and there is still a lot of work to be done to establish international standards, guidelines, and regulations for AI. The following are some of the organizations working in that direction:

- The Organisation for Economic Co-operation and Development (OECD): The OECD has established the AI Policy Observatory, which aims to promote the responsible development and use of AI by providing a platform for the exchange of information and best practices. The OECD also established the AI Policy Network, which brings together experts from around the world to discuss AI policy issues.
- The International Telecommunication Union (ITU): The ITU has established the AI for Good Global Summit, which aims to promote the use of AI for social good by bringing together experts from around the world to discuss AI policy issues.
- The International Organization for Standardization (ISO): ISO has developed several standards related to AI, including ISO/IEC 22989:2022 on AI concepts and terminology and ISO/IEC 23894:2023 on AI risk management.
- The World Economic Forum (WEF): WEF has established the Centre for the Fourth Industrial Revolution, which aims to promote the responsible development and use of AI by providing a platform for the exchange of information and best practices.
- The G20: The G20 has formed a working group on AI, which aims to promote the responsible development and use of AI.
- The United Nations (UN): Under the UN Convention on Certain Conventional Weapons, a Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) has been established to address the challenges posed by the development of autonomous weapons systems and to consider their legal, ethical, and technical implications.
7 Regulating AI
7.1 Privacy and Security
AI, and in particular AI based on Machine Learning, is heavily rooted in the data it uses to determine patterns to match with requests. The high demand for data leads to intrusions into individual privacy. On the other hand, more data could also mean higher levels of security, giving the system the ability to oversee behaviors and eventually act proactively on potential crimes. There is nothing new here: the "big brother/big sister" scenario appears in many sci-fi or dystopian novels and movies. In the end, it is a trade-off between privacy and security, and where to draw the line is an open question. There are cultural and political elements in play. Some countries are stricter on privacy, some less so. The countries with more relaxed privacy policies have potentially higher levels of security, but the price is less privacy. Data collected can include personal information such as names, addresses, and health records. Once this personal data is collected, it is vulnerable to misuse or abuse, such as data breaches, identity theft, or targeted advertising. AI systems can also be used to track people's movements and activities and to build detailed profiles of individuals and their habits. This can be a serious threat to the privacy rights of citizens. AI can impact security through the misuse or abuse of AI systems. For example, malicious actors can use AI systems to launch sophisticated cyberattacks, such as deepfake videos, or to conduct surveillance on individuals or groups. Additionally, AI-enabled autonomous systems, such as drones or robots, can be used for malicious purposes, such as espionage or sabotage. AI can also impact privacy and security by enabling new forms of surveillance and control. For example, AI systems can be used to monitor and track people's movements and activities, analyze facial expressions and speech patterns to infer emotions or intent
and predict and influence people's behavior. This could be used by governments or organizations to exert control over individuals or groups, which can be a serious violation of human rights. If AI systems are not properly designed, developed, and integrated into society, they can perpetuate and amplify existing biases and discrimination, which can lead to privacy and security issues. For instance, if the data used to train an AI system is biased, the system can make biased decisions that lead to discrimination, and this can be particularly harmful in systems used in fields such as criminal justice, healthcare, or education. We'll discuss this aspect in the following paragraph. AI also has the potential to improve privacy and security by identifying and preventing cyberattacks and by detecting and responding to security threats. It's crucial for governments, businesses, and society as a whole to take a proactive approach to managing the privacy and security impacts of AI by implementing policies and regulations that ensure the benefits of AI are shared more widely and the negative impacts are minimized. This can include measures such as data protection laws, regulations on the use of AI in surveillance, and regulations on the use of AI in decision-making processes. It is also important to increase transparency and accountability in the development and use of AI, to ensure that the technology is used in ways that respect human rights and freedoms and is not used to exploit or harm people. This is another aspect we'll discuss in the coming paragraphs. Even in this case, international organizations and governments are trying to establish principles and guidelines for the protection of privacy in the context of AI. To name some:

- The Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, issued by President Biden in October 2023.
It addresses most of the key issues related to the regulation of AI (safety, security, privacy, responsible use) while promoting innovation and competition and championing US leadership. All good, but how all of this can be enforced is still to be determined.
- The European Union (EU): The EU has established several regulations related to AI-related privacy, including the General Data Protection Regulation (GDPR) and the ePrivacy Directive. These regulations set out specific requirements for the collection, use, and storage of personal data, as well as for the protection of privacy rights.
- The Organisation for Economic Co-operation and Development (OECD): The OECD has established the Privacy Guidelines, which set out general principles for the protection of personal data. These guidelines are widely recognized as an international standard for data protection.
- The International Organization for Standardization (ISO): ISO has developed several standards for AI-related privacy, including ISO/IEC 29100:2011, a privacy framework, and ISO/IEC 27701:2019 for privacy information management.
- The International Association of Privacy Professionals (IAPP): IAPP is a professional association for privacy professionals that sets standards for privacy management and provides education and certification for privacy professionals.
- The International Conference of Data Protection and Privacy Commissioners (ICDPPC): ICDPPC is an international organization that brings together data protection and privacy commissioners from around the world to discuss issues related to data protection and privacy.
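On the technical side, privacy-preserving mechanisms can complement the regulatory frameworks above. As one illustration (differential privacy is not tied to any of the organizations just listed), here is a sketch of the Laplace mechanism, which answers aggregate queries while adding calibrated noise to protect individual records; the dataset, predicate, and epsilon value are invented:

```python
# Sketch of the Laplace mechanism from differential privacy, one technical tool
# for the privacy/utility trade-off discussed in this section.
import math
import random

def laplace_noise(scale):
    """Draw Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon):
    """Noisy count of records matching predicate. A counting query changes by
    at most 1 when one record changes (sensitivity 1), so the noise scale is
    1/epsilon: smaller epsilon means more noise and stronger privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)
ages = [23, 35, 41, 29, 52, 47, 38, 60]   # hypothetical personal records
noisy = private_count(ages, lambda a: a >= 40, epsilon=1.0)
print(f"noisy count of people 40+: {noisy:.1f} (true count is 4)")
```

The analyst learns an approximately correct aggregate, while the noise makes it hard to infer whether any single individual's record was in the data, one concrete way of drawing the privacy/security line discussed above.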
7.2 AI and Government
There are potentially many ways AI may impact government, most of them positive. One of the main ways AI is being used in government is to improve the efficiency and effectiveness of government services. For example, AI can automate routine tasks, such as data entry, and analyze large amounts of data to identify patterns and trends that can inform policy decisions. It can improve the delivery of public services: chatbots and virtual assistants can provide information and assistance to citizens, and AI-powered systems can optimize the scheduling of public services, such as transportation and healthcare. It can also help the government make better use of the information it holds across its different functions: enhancing security by detecting and preventing fraud, cyber-attacks, or terrorist acts, and creating better policies by analyzing data from a variety of sources, such as satellite imagery, social media, and sensor networks. The main potential risks relate to bias and discrimination in the decisions taken and to the impact on privacy. Also, a lack of regulation and oversight of widely used AI systems in government could lead to an erosion of democratic institutions. AI systems can be programmed to make decisions based on certain criteria, and if those criteria are not transparent, it can be difficult for citizens to understand how and why decisions are being made. This can erode democratic institutions and undermine public trust in government. In the United States, there are several government initiatives aimed at regulating the use of AI. One of them is the National Artificial Intelligence Research and Development Strategic Plan, developed by the White House Office of Science and Technology Policy and the National Science and Technology Council.
The plan sets out a number of goals for the federal government's AI research and development efforts, including increasing investment in AI research and development, strengthening partnerships between government, industry, and academia, and promoting the responsible use of AI. Another key initiative is the AI for America Act, introduced in Congress in 2019. The bill aims to establish a national AI strategy and research and development
program and to fund the development of AI technologies and applications. The bill also seeks to create an AI advisory committee to provide guidance on the responsible use of AI and to establish an AI research and development workforce training program. Additionally, the National Institute of Standards and Technology (NIST) has published a series of guidelines for the responsible use of AI in government, including recommendations for transparency, robustness, and security. I mentioned the President's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. In November 2023, the UK government hosted an AI Safety Summit that brought together international governments, AI companies, civil society groups, and research experts to consider the risks of AI and possible mitigations through international collaboration. As with the US Executive Order, the "what" that should be done is clear; the "how" is not. Task forces and government institutes will be created, but actually enforcing the principles is a different story. Our planet is a large conglomerate of nations with different and sometimes conflicting interests, and an AI could run on servers anywhere in the world, including in countries outside any given international agreement. Asimov's Three Laws of Robotics may still reside in sci-fi only. In general, governments are taking several initiatives to regulate AI, with an emphasis on promoting responsible use, investing in research and development, and creating opportunities for public input. As in several recent cases, like the Internet in the recent past, government guidelines and laws are not yet as developed and widespread as the technology is: they are chasing the technology while subject to heavy pressure from technology companies' lobbyists.
7.3
Bias and Discrimination
AI has the potential to perpetuate and amplify existing biases and discrimination if the data used to train the systems is biased. This is also true for AI based on a symbolic approach, where the "symbols" are rules or formal representations of domain-specific knowledge: the formal representation itself can contain a bias. In machine-learning-based AI, bias can occur when the data used to train a system is not fully representative of the population it is intended to serve. This can lead to inaccurate and unfair decisions, with significant consequences for individuals and for society as a whole. One of the main sources of bias in AI is the data used to train the systems. AI systems are trained on large amounts of data, from which they learn patterns and make predictions. If that data is not representative of the target population, the systems can make inaccurate and unfair decisions. For example, if an AI system is trained on data that is predominantly from one racial or ethnic group, it
may not perform well on data from other groups. This can lead to bias in the decisions made by the system, which can be particularly harmful in fields such as criminal justice, healthcare, or education. Another source of bias is the algorithms used to train the systems. Algorithms can be designed to optimize for certain objectives, such as accuracy or speed; if these objectives are not aligned with the needs of the population the system is intended to serve, the algorithm can make unfair and inaccurate decisions. For example, an algorithm optimized for speed may not take into account the impact of its decisions on marginalized groups, leading to discrimination. Bias and discrimination in AI systems can also be amplified through a feedback loop. AI systems are designed to learn and improve over time, and as they are used they generate more data. If that data is fed back into training, the systems can become more biased and discriminatory over time. This can happen through regular use of the system, or through an intentional process of "poisoning" it, a potential new way to create disinformation. Biased AI systems can lead to discrimination in the criminal justice system, healthcare, and education, with a significant impact on marginalized groups. They can also undermine trust in the technology and the credibility of the systems, leading to a lack of adoption and a loss of potential benefits. To mitigate bias and discrimination, it is important to ensure that the data used to train the systems is representative of the population it is intended to serve. This can include measures such as actively seeking out data from underrepresented groups.
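The mechanism described above can be made concrete with a toy simulation (not drawn from any real system; it assumes only NumPy and uses synthetic data): a trivial threshold "model" is fit on data dominated by one group, and its accuracy is then measured separately per group.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two groups share the same labels but have different feature distributions.
# Group A (well represented): positive cases cluster around 2.0.
# Group B (underrepresented): positive cases cluster around 0.5.
def make_group(n, pos_mean):
    labels = rng.integers(0, 2, size=n)
    x = rng.normal(loc=labels * pos_mean, scale=0.4, size=n)
    return x, labels

x_a, y_a = make_group(950, 2.0)   # 95% of the training data
x_b, y_b = make_group(50, 0.5)    # only 5% of the training data

x_train = np.concatenate([x_a, x_b])
y_train = np.concatenate([y_a, y_b])

# A trivial "model": a threshold halfway between the class means of the
# pooled training data, which is dominated by group A.
threshold = (x_train[y_train == 0].mean() + x_train[y_train == 1].mean()) / 2

def accuracy(x, y):
    return float(np.mean((x > threshold) == (y == 1)))

# The model looks excellent overall but fails on the underrepresented group.
print(f"group A accuracy: {accuracy(x_a, y_a):.2f}")
print(f"group B accuracy: {accuracy(x_b, y_b):.2f}")
```

The aggregate accuracy hides the gap: most positive cases in group B fall below the threshold learned from group A and are misclassified, exactly the failure mode described above.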
One of the approaches commonly used to even up unbalanced datasets is data augmentation, which consists of creating artificial data to rebalance them. This is formally effective, but it induces behavior that is not grounded in real observations. Data is what provides the "knowledge" the system needs to operate, and if this knowledge is not real, the behavior cannot be fully accurate. There is also a lot of interest and investment in explainable AI (XAI) methods, which can provide insights into how an AI system makes its decisions and can help identify and address bias and discrimination; we will return to this in the next section. Another important approach is to include diverse perspectives in the development and deployment of AI systems. This can include involving individuals from underrepresented groups in the design, development, and testing of AI systems, as well as involving experts in ethics, human rights, and social impact in the decision-making process. Finally, it is important to establish ethical guidelines and governance principles for AI, which can help ensure that the technology is used in ways that respect human rights and freedoms. This can include measures such as developing codes of conduct for AI development and use, as well as establishing independent oversight bodies to monitor the development and use of AI systems.
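The simplest form of the rebalancing idea mentioned above is random oversampling: duplicating minority-class rows until the classes are even. A minimal NumPy sketch follows (synthetic data; a real project would more likely use a library such as imbalanced-learn, which also offers synthetic-sample methods like SMOTE):

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_minority(X, y):
    """Rebalance a binary dataset by duplicating randomly chosen
    minority-class rows until both classes have equal counts."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    idx = rng.choice(np.where(y == minority)[0], size=deficit, replace=True)
    return np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])

# A toy dataset: 8 majority samples (class 0), 2 minority samples (class 1).
X = rng.normal(size=(10, 3))
y = np.array([0] * 8 + [1] * 2)
X_bal, y_bal = oversample_minority(X, y)
print(np.bincount(y_bal))  # both classes now have 8 samples
```

Note that the duplicated rows add no new information: as the text points out, the balance is formal, and the "knowledge" about the minority class is no richer than before.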
All the organizations mentioned in the economic inequality paragraph are working to establish principles and guidelines for the responsible use of AI:
. The Organization for Economic Co-operation and Development (OECD): The OECD has developed a set of principles for the responsible use of AI, which include transparency, accountability, and inclusion. These principles are intended to help ensure that AI is used in a way that respects human rights and freedoms and that the technology is used to benefit all members of society.
. The International Association of Privacy Professionals (IAPP): The IAPP is a professional association for privacy professionals that sets standards for privacy management and provides education and certification for privacy professionals. They also have an initiative called AI Ethics Lab that focuses on the ethical implications of AI.
. The European Commission: The European Commission has proposed a set of regulations for AI, which include measures to ensure that AI is used in a way that respects human rights and freedoms and that the technology is used to benefit all members of society.
. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: The IEEE Global Initiative is an international organization that aims to promote the responsible use of AI by developing standards and guidelines for the development and use of AI.
. The Partnership on AI: The Partnership on AI is an international organization that brings together companies, academics, and civil society organizations to promote the responsible use of AI.
7.4
Explainable AI
Explainable AI (XAI) is a field of research that aims to make AI systems more transparent and interpretable, so that their decisions and actions can be understood and explained to humans. The goal of XAI is to create AI systems that are transparent, trustworthy, and accountable, and that can be used to make decisions that are fair, ethical, and aligned with human values. There are several approaches to achieving this, such as model interpretability, input–output interpretability, post-hoc interpretability (the application of interpretation methods after model training), and constraints and regularization. There is no single "explainable AI" solution, and the required level of interpretability will depend on the specific application, the type of decision being made, and the stakeholders involved. It is also important to note that some trade-off between interpretability and performance is expected, so the aim is to find the balance that meets the specific needs of the application. One of the main challenges of XAI is the fact that many AI systems are based on complex, highly nonlinear algorithms that can be difficult to understand and interpret. For
example, deep learning systems, such as neural networks, are based on large numbers of interconnected nodes that process and analyze data, and the relationships between these nodes can be difficult to understand. The following are some of the approaches:
. Model interpretability: This approach aims to make the underlying model of an AI system more interpretable, by simplifying the model architecture or by providing visualizations of the model's internal workings. For example, techniques such as decision trees, linear models, and rule-based systems are considered more interpretable than neural networks.
. Input–output interpretability: This approach aims to make the inputs and outputs of an AI system more interpretable by providing explanations of the system's decisions. For example, saliency maps, which highlight the regions of an input image that are most important for a decision, can help make the decision-making process more interpretable.
. Post-hoc interpretability: This approach aims to make an AI system more interpretable after it has been trained, by providing explanations of its decisions. For example, feature-importance techniques, which provide insights into which features of the input data matter most for a decision, can help make the decision-making process more interpretable.
. Constraints and regularization: This approach adds constraints and regularization to the training process to make the resulting model more interpretable. For example, adding L1 regularization during training can drive many feature weights to zero, reducing the number of features the model actually uses for a decision (L2 regularization similarly constrains the model, though without producing sparsity).
Investment in Explainable AI (XAI) has been growing in recent years, as the need for more transparent and interpretable AI systems has become increasingly important.
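As a concrete illustration of the post-hoc approach, permutation feature importance can be sketched in a few lines: shuffle one input feature at a time and measure how much the model's error grows. The following is a toy NumPy example with a synthetic dataset and an ordinary least-squares "model", not a production XAI tool:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: only the first of three features drives the target.
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)

# Fit an ordinary least-squares model: this is the "AI system" to explain.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

baseline = mse(X, y, w)

# Permutation importance: shuffle one feature at a time and record how
# much the model's error grows. A large increase marks an important feature.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(mse(Xp, y, w) - baseline)

print(importance)  # feature 0 dominates; the other two barely matter
```

The same shuffle-and-remeasure procedure applies unchanged to an opaque model such as a neural network, which is what makes it a post-hoc technique: it needs only the model's predictions, not its internals.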
According to a report by Next Move Strategy Consulting, the global XAI market is expected to grow from $5.1 billion in 2022 to $24.58 billion by 2030, at a CAGR of 21.5% from 2023 to 2030 (Consulting, 2023). Several factors are driving this growth in investment, including:
. Increasing use of AI in critical applications: AI is increasingly used in applications that are critical for our society, leading to the need for more transparent and interpretable AI systems.
. Regulatory pressure: Governments around the world are beginning to implement regulations on the use of AI, and many of these regulations require that AI systems be more transparent and interpretable.
. Growing concerns about bias and discrimination: As AI is increasingly used to make decisions that affect people's lives, there have been growing concerns about the potential for bias and discrimination in these decisions.
. Advancements in AI technology: With the development of new AI technologies such as deep learning, the need for more interpretable AI systems has become increasingly important.
Investment in XAI is coming from a variety of sources, including venture capital firms, private equity firms, and technology companies. Many of the large technology companies, such as Google, Microsoft, IBM, and Amazon, are investing heavily in XAI research and development, and there is also a growing number of startups focused on XAI. Some researchers argue that deep learning systems are inherently opaque and that it may not be possible to fully explain their decisions and actions. The large number of interconnected nodes in a neural network, and the complex relationships between them, can make it difficult to understand how the network processes and analyzes data. In addition, deep learning systems are often trained on large amounts of data, and the large number of parameters and connections in the network can make it difficult to understand how the system makes its predictions.
8
Impacts on Specific Industries
8.1
Method
Source Base: Wikimedia Commons; Description: Español: a; Source Own work; Author: Enrique034; Date: 6 June 2021; licensed under the Creative Commons Attribution-Share Alike 4.0 Edited using Picsart
In this section, I analyze some of the industries potentially impacted by AI/ML. For each one of them, I consider three aspects:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 C. Lipizzi, Societal Impacts of Artificial Intelligence and Machine Learning, Synthesis Lectures on Computer Science, https://doi.org/10.1007/978-3-031-53747-9_8
. How the industry is historically using technology. Some industries are more technology-driven than others and for them adopting AI/ML may be a natural evolution of their strategies. . How the industry may benefit from AI/ML. A good example could be advanced diagnostics in healthcare or “intelligent” transportation systems. . How the industry may be negatively impacted by AI/ML. For example, traditional retail may be negatively impacted by a more sophisticated AI-enhanced online shopping.
8.2
Healthcare
8.2.1
Healthcare and Technology
Healthcare has traditionally been a combination of technology and "art": a good doctor with good technology saves lives. Technology has played a key role in both treatment and diagnosis. On the treatment side, new drugs to treat diseases: from penicillin to insulin, chemotherapy, vaccines and, hopefully, gene therapy and "precision drugs". On the diagnosis side, from X-rays, discovered in the late nineteenth century, to electrocardiography, magnetic resonance imaging (MRI), and computed tomography (CT) scans. All this is on top of the use of traditional information technology to collect and analyze data.
8.2.2
Leveraging AI/ML in Healthcare
AI/ML have the potential to revolutionize healthcare by improving diagnosis, treatment, and overall patient outcomes. One of the most significant impacts of AI/ML in general is the ability to analyze large amounts of data quickly and consistently. This can help doctors and researchers identify patterns and insights that would otherwise be difficult or impossible to detect. For example, AI/ML algorithms can be used to analyze electronic health records to identify patients at high risk of certain diseases, such as diabetes or heart disease. This can help healthcare providers target interventions for those most in need and improve overall population health, opening the door to more advanced forms of preventive medicine. With the use of natural language processing, doctors can analyze large amounts of unstructured clinical data, such as doctors' notes and patient reports, to identify key information. AI/ML are being applied in a growing number of areas within the field of genetics and genomics, including DNA-supported diagnosis, where they can help analyze large amounts of genomic data to identify genetic mutations, risk factors for certain diseases, and potential targets for therapy. Another area where AI is being applied in DNA-supported diagnosis
is pharmacogenomics, the study of how genetics affects an individual's response to medication. AI algorithms can be trained to analyze an individual's genomic data to predict their response to different medications, which can inform personalized treatment plans and improve patient outcomes. AI can also be used in the development of gene therapies, which are designed to correct genetic mutations that cause disease: algorithms can identify specific genetic mutations and predict which gene therapies could be most effective for a given patient. Another important impact of AI/ML in healthcare is in diagnostic imaging. AI algorithms can be trained to identify patterns and anomalies in medical images that might be difficult for human radiologists to detect. This can improve the accuracy and speed of diagnoses, reduce the number of false positives and false negatives, and lead to improved patient outcomes and reduced healthcare costs, although results are still mixed on this. AI/ML can also help in drug development and personalized medicine. By analyzing large amounts of data from clinical trials and the medical literature, AI algorithms can help identify potential new drug targets and help researchers design more effective and efficient clinical trials. They can also help identify patient subgroups that are most likely to respond to certain treatments, allowing doctors to provide more personalized care to individual patients.
8.2.3
Potential Negative Impacts of AI/ML in Healthcare
The extensive use of AI/ML in healthcare carries all the risks embedded in this technology. Bias, for example: an AI algorithm trained on a dataset that is not representative of the population may make inaccurate predictions for certain groups of people, leading to wrong diagnoses. Another concern is transparency: because AI algorithms can be complex and difficult to understand, it can be hard for healthcare providers and patients to understand how decisions are being made, which can lead to a lack of trust in the technology and a reluctance to use it. Privacy is also a potential element of concern. The large amounts of data used to train and evaluate AI algorithms can be sensitive, and there are concerns that they could be used inappropriately or accessed by unauthorized third parties. There are methods to anonymize data, but "good" algorithms could recreate the full picture. Then there are the impacts on the healthcare job market. Some are quite obvious, like those related to more "intelligent" devices and processes for management and diagnosis. For example, using AI/ML in medical coding and transcription, or having chatbots and virtual assistants handle routine patient interactions such as scheduling appointments and answering basic questions, can reduce the need for administrative staff.
AI/ML algorithms may improve the ability to interpret medical images and diagnose diseases, work traditionally done by radiologists and pathologists, potentially reducing demand for those specialists. Overall, the introduction of AI/ML systems could require a shift in the training and skill sets required of healthcare professionals, as well as changes in the way healthcare is delivered and managed.
8.2.4
Case Study 3: AI/ML in Healthcare
Source Base: Wikimedia Commons; Description: Old lao woman gray hair; Source Own work; Author: Basile Morin; Date: 22 January 2011; licensed under the terms of the cc-by-2.0 Edited using Picsart
Linda is an 80-year-old woman who lives alone and has no close relatives. Her health has been deteriorating, and she struggles to keep up with her daily activities. She has a history of chronic conditions like diabetes, hypertension, and arthritis. Linda has been feeling more fatigued than usual and is experiencing a loss of appetite. Fortunately, Linda’s doctor has recently started using AI in his practice to help manage patient care. The doctor prescribes a set of wearables that will continuously monitor Linda’s vital signs, including her heart rate, blood pressure, and glucose levels. The wearables are connected to a cloud-based system that collects and analyzes the data in real
time. The system uses AI algorithms to identify any abnormal changes in Linda’s vital signs and alert her doctor if there is any cause for concern. As Linda’s condition worsens, her doctor prescribes a personalized drug that has been developed using AI algorithms based on her medical history and genetic information. The drug is specifically tailored to Linda’s unique needs and has a higher chance of success than traditional medications. The doctor also uses AI to monitor Linda’s response to the drug and adjust the dosage as needed. Despite living alone, Linda is never truly alone, as the AI system provides her with 24/7 monitoring and support. The system can detect any emergencies or changes in her condition and alert her doctor and emergency services immediately. Linda can also use the AI-powered virtual assistant to communicate with her doctor and schedule appointments, making it easier for her to manage her healthcare without having to leave her home. Thanks to the use of AI in her healthcare, Linda can continue to live independently and manage her chronic conditions with confidence. The AI system provides her with the support she needs, and personalized care allows her to receive the best possible treatment.
8.2.5
Fact Check 2: AI in Healthcare
Just like for the previous case study, there is no integrated AI-driven solution in healthcare. There are several individual solutions leveraging AI, but none can support Linda "360 degrees", as in case study 3. Let's see what they are, through one example of use per application.
. Diagnosis. IBM started working with several healthcare providers using its "AI" solution, Watson. Among others, the Mayo Clinic started working with Watson in 2014, with the goal of using Watson's cognitive computing capabilities to improve clinical research and patient care. Watson has been used to help identify new drug targets for cancer, to develop personalized treatment plans for patients (such as a patient with lung cancer), and to improve the accuracy of clinical trials. In January 2022, IBM announced the sale of part of the Watson Health assets, including the Watson assets, to Francisco Partners for a reported $1 billion; Watson became part of Merative, and the Mayo Clinic is still using it. I had the opportunity to dig a bit into Watson when, in 2011, it was in the news for competing on Jeopardy! Watson was created before the current, more advanced machine learning solutions (like those behind GPT) and is based on a combination of different approaches and algorithms: some symbolic knowledge representation, some statistical components, and some machine learning, used for example to recognize images and speech. Overall, the system was complex, with a high degree of customization for each specific application, making its use in different domains very expensive. That was one of the reasons for its exit from the market.
. Treatment. Google's DeepMind Health AI technologies are still under development, but they have the potential to improve the quality of care for patients. DeepMind Health's AI technology has been used by Moorfields Eye Hospital NHS Foundation Trust in the UK for diabetic retinopathy and has been shown to be more accurate than human experts at detecting the disease. It has also been used by University College London Hospitals NHS Foundation Trust in the UK to provide cancer radiotherapy treatments shown to be more accurate and efficient than traditional methods. The University of Pennsylvania Health System in the US is using DeepMind Health's AI technology, called 'Skin Lesion Analysis', to help dermatologists identify skin cancers, including melanoma, basal cell carcinoma, and squamous cell carcinoma; this work is still in its early stages.
. Patient education. Johns Hopkins Medicine is using AI to create personalized educational materials for patients about their conditions and treatments, for example personalized apps that help patients track their symptoms and manage their medications. The service comes from a collaboration with a company called Sana Labs, whose initial goal was to enhance the learning experience for medical students and professionals by tailoring educational content to their individual needs and preferences. The collaboration was later expanded to education for patients, creating the Patient Education Assistant (PEA). PEA is personalized to each patient's individual needs and preferences: the system uses information from the patient's medical record, including their diagnosis, treatment plan, and lifestyle habits, to create personalized educational materials, delivered in a variety of formats, including text, audio, video and, recently, an app for iOS/Android.
PEA can be used by patients to learn about their condition, make informed decisions about their care, manage their health at home, track their progress, and stay motivated.
. Robotics. Let's focus on surgical robots. The da Vinci Surgical System is a robotic-assisted surgical system used to perform minimally invasive surgery. It was developed by Intuitive Surgical and was first approved by the FDA in 2000. The da Vinci system is used in hospitals around the world to perform a variety of procedures, including prostatectomies, hysterectomies, and gallbladder removals. It consists of three main components: the surgeon's console, the patient-side cart, and the vision system. The surgeon sits at the console and uses it to control the robotic arms, which are located on the patient-side cart and equipped with tiny instruments that allow the surgeon to operate inside the patient's body. The vision system provides the surgeon with a 3D view of the surgical site. Together, these components allow the surgeon to operate with greater precision and accuracy, through smaller incisions, with a 3D view that helps surgeons better visualize the anatomy and
perform more precise surgery. The system uses a limited amount of AI to assist surgeons during surgery: it helps the surgeon control the robotic arms and provides the 3D view of the surgical site, but it does not make any decisions about the surgery, with the surgeon always in control.
. Personalized drugs. Exscientia is a leading company in the field of AI-driven drug discovery. It was founded in 2012 by Andrew Hopkins and Ashley Chapman, is headquartered in Oxford, UK, and has offices in the United States, China, and Japan. Their AI platform, Centaur, uses algorithms to predict the binding affinity between small molecules and target proteins, enabling the identification of potential drug candidates. They partnered with Sumitomo Dainippon Pharma to develop an AI-designed molecule for the treatment of obsessive–compulsive disorder; this molecule became the first AI-designed drug to enter human clinical trials. The process for designing, and obtaining FDA approval for, a truly personalized drug is yet to be set. Centaur could be used to analyze a patient's DNA data to identify potential drug targets and to design a new drug for that specific condition. The FDA has programs to expedite the approval process for drugs, but there is no approval framework that would fit individualized/customized drugs. The agency is currently working to develop such a framework, considering a number of different approaches, including the use of real-world evidence and the development of new clinical trial designs, and it is also developing guidance for the industry on how to prepare and submit applications for individualized or customized drugs. Unlearn.AI, a startup, is developing a digital twin service for clinical trials. Digital twins are digital representations of humans, built (in this case) with ML models, using data combined from a large number of previously run clinical trials.
This could potentially reduce the time for new drug development. Even if the name is intriguing, creating actual "digital twins" of complex entities is a task that may be out of reach, at least for a while. In a sense, this is logically similar to teleportation: you need to decompose the individual at the initial location and recreate "it" at the destination, which means you would need a 100% accurate representation of the individual at the initial location. As much as I would like it ("beam me up"!), I would pass on this. The same applies to digital twins: only with a full representation of the source can you create a true "twin". What we can actually get, in virtually all cases, is a model representing the real object. Since the model is data-driven, its accuracy can be calculated using traditional data science. Is this a "twin"? Maybe not quite, but the name sells well. In the past few years, the number of studies on applications of AI to healthcare has grown significantly, as shown by the following chart, an elaboration of PubMed data (Medicine, 2023). Almost 30,000 studies citing AI for healthcare were published in 2022. The search was restricted to what PubMed calls "Medical Subject Headings", which means the selected studies are only those indexed by PubMed as AI-related. If we
Fig. 8.1 AI publications in healthcare. Source Author’s elaboration on PubMed data
count studies that merely cite AI-related terms in the text, the number for 2022 grows from about 30,000 to more than 55,000 (see Fig. 8.1). The papers focus on leveraging AI to improve quality, efficiency, and access to healthcare, in particular for diagnosis, treatment, surgery, administration, and patient care: new diagnostics for cancer, new treatments for Alzheimer's disease, personalized healthcare. There is also growing interest in the use of AI for healthcare management; from our personal experience, we all know that there are wide margins for improvement here. The most common issues cited by papers looking into future applications of AI in healthcare are not related to the technology per se. As mentioned in a recent paper by T. Davenport et al. in Future Healthcare Journal (Thomas Davenport, 2019), "For widespread adoption to take place, AI systems must be approved by regulators, integrated with EHR systems, standardized to a sufficient degree that similar products work in a similar fashion, taught to clinicians, paid for by public or private payer organizations and updated over time in the field". Another issue is people's acceptance of the use of AI in their own healthcare. According to research conducted by the Pew Research Center in February 2023, "60% of Americans would be uncomfortable with providers relying on AI in their own healthcare" (Center, 2023). The reasons for being "uncomfortable" are multiple: a reduced connection with the provider (an element that is seen as relevant), lack of security, too-early adoption of technologies that are not fully understood, and race or ethnicity bias. While a relative majority (40% vs. 27%) think AI could help in diagnosis, the delivery part
raises more concerns: 60% of Americans do not want AI-powered robots to perform their own surgery, and even more (80%) are against chatbots supporting mental health. What does all of this mean? All the possible applications of AI discussed in the paragraph "Leveraging AI/ML in Healthcare" are technically feasible and available in the near future, but they may not see the light until the social and regulatory issues are addressed. On one hand, healthcare is, rightfully so, a very regulated industry, and approving new elements, from drugs to devices to procedures, is a long process. On the other hand, people, again rightfully so, care about their own healthcare, and the idea of being treated entirely or in part by a machine still makes them uncomfortable. Things will change, and people will gradually become more comfortable, probably starting with diagnosis. Confidence will likely be built up through the daily use of AI people will experience over time. Even now, we see more favorable positions from people "born digital", because they already see in healthcare what they use on many other occasions in their lives. The level of awareness of AI is still low, and this will have an impact on the acceptance of AI in critical areas such as healthcare. In another Pew Research Center study from February 2023, only one-third of U.S. adults were able to correctly identify the uses of AI presented in the study (Center, Public Awareness of Artificial Intelligence in Everyday Activities, 2023), and only 15% of adult Americans are more excited than concerned about the use of AI in daily life. Again, things will change with the gradual increase of AI in our lives, but it will take time. How much time is difficult to say: ChatGPT reached 1 million users in 5 days. Wide public acceptance of AI may be closer than we think.
8.3
Transportation
8.3.1
Transportation and Technology
Transportation is another industry that relies heavily on technology. Starting in the nineteenth century, we moved from animal-drawn carts to "machines", based first on steam engines and then on internal combustion. The mass production of automobiles in the early twentieth century changed the whole approach to transportation. In the late twentieth century, advances in computer technology and the rise of the Internet were other major agents of change. The development of computerized traffic control systems, GPS navigation, and ride-sharing services such as Uber have all had a major impact on how people move around cities.
8 Impacts on Specific Industries
8.3.2
Leveraging AI/ML in Transportation
There are many current and emerging applications of AI/ML in transportation, with autonomous vehicles probably being the most talked about. Let’s focus first on non-autonomous-vehicle applications.
8.3.2.1 AI in Transportation Infrastructure—Intelligent Transportation Systems
Intelligent Transportation Systems (ITS) have been around for almost 15 years. The term refers to advanced applications that provide innovative services relating to different modes of transport and traffic management and that enable users to be better informed and make safer, more coordinated, and “smarter” use of transport networks. The advent of AI/ML gave ITS a large set of new possibilities. For example, AI algorithms can detect and respond to potential traffic incidents, such as accidents or road closures, in real time, helping to reduce the number of accidents and improving overall safety. Traffic management is one of the AI-enabled ITS features. Algorithms can optimize traffic flow by analyzing real-time traffic data and making predictions about traffic patterns. This information can then be used to adjust traffic lights, optimize routing, and improve traffic flow, reducing congestion and travel times. Such systems can incorporate vehicle-to-vehicle and vehicle-to-infrastructure communication to further improve traffic flow and reduce accidents. AI and ML are and will be used to improve local public transportation systems. Cities are using these technologies to optimize bus and rail schedules, reducing wait times and improving reliability. They can also use real-time data to provide travelers with more accurate information about their trips, making public transportation more convenient and accessible. This will contribute to improving the quality of services, but the structural issues in specific cities may persist for a while.
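The adaptive signal-timing idea can be sketched in a few lines. This is a toy illustration, not any deployed ITS algorithm: demand per approach is forecast with a simple exponential moving average, and a fixed signal cycle's green time is split proportionally. All counts, weights, and timings below are invented for the sketch.

```python
# Toy sketch of prediction-driven signal timing: forecast demand per
# approach from recent vehicle counts, then split a fixed cycle's green
# time proportionally. A real ITS would use far richer models and data.

def predict_demand(counts, alpha=0.5):
    """Exponentially weighted moving average of recent vehicle counts."""
    forecast = counts[0]
    for c in counts[1:]:
        forecast = alpha * c + (1 - alpha) * forecast
    return forecast

def allocate_green_time(history_by_approach, cycle_seconds=90, min_green=10):
    """Split a signal cycle's green time in proportion to predicted demand."""
    forecasts = {a: predict_demand(h) for a, h in history_by_approach.items()}
    total = sum(forecasts.values())
    return {a: max(min_green, round(cycle_seconds * f / total))
            for a, f in forecasts.items()}

history = {
    "north-south": [40, 55, 70, 80],   # vehicles per 5-minute interval (rising)
    "east-west":   [30, 25, 20, 15],   # falling demand
}
plan = allocate_green_time(history)
```

The rising north-south demand earns most of the cycle's green time, while the east-west approach keeps only what its forecast justifies (never less than `min_green`).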
8.3.2.2 AI in User-Side Transportation
Besides autonomous vehicles, users of the transportation system, in a broad sense, can and will benefit from AI in many ways, such as predictive maintenance, where AI can predict the likelihood of mechanical failure or other issues with a vehicle. Another area where AI and ML are having an impact is the optimization of supply chains. With the help of these technologies, transportation companies can analyze data to optimize routes, minimize transit times, and reduce costs. They can also use predictive analytics to identify potential bottlenecks and proactively address them, improving the overall efficiency of the supply chain. Many transportation services, including Uber, are using AI in various ways. For example, they use AI algorithms to predict demand and optimize routing, determine the best pickup and drop-off locations, and provide real-time traffic updates. AI is also being
used to improve the overall user experience, such as through the development of conversational interfaces and personalized recommendations. AI is also being used to improve safety and security by monitoring driver behavior and detecting any potential hazards. AI and ML are being used to improve last-mile delivery, making it easier and more efficient to get packages to customers. Delivery companies can analyze data to optimize routes, reduce transit times, and minimize the number of miles driven. They can also use predictive analytics to identify potential bottlenecks and proactively address them, making last-mile delivery faster and more efficient.
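The predictive-maintenance idea mentioned above can be illustrated with a toy scoring rule. A real system would learn its weights from historical failure data; here the weights, healthy baselines, and sensor readings are all assumptions made for the sketch.

```python
# Illustrative sketch of fleet predictive maintenance: score each vehicle's
# failure risk from a few sensor readings using hand-set weights and
# baselines (a real system would learn these from failure history).

def failure_risk(vibration_g, temp_c, km_since_service):
    """Risk score in [0, 1]; each reading is normalized against an assumed
    healthy baseline and weighted."""
    score = (vibration_g / 0.5) * 0.4 \
            + (temp_c / 110.0) * 0.3 \
            + (km_since_service / 20000.0) * 0.3
    return min(score, 1.0)

def flag_for_service(fleet, threshold=0.8):
    """Return IDs of vehicles whose risk score meets or exceeds the threshold."""
    return [vid for vid, readings in fleet.items()
            if failure_risk(*readings) >= threshold]

fleet = {
    "truck-01": (0.2, 85, 4000),     # (vibration g, engine temp C, km since service)
    "truck-02": (0.6, 105, 18000),   # vibrating, hot, overdue for service
}
flagged = flag_for_service(fleet)
```

Only the overdue, hot, vibrating vehicle crosses the threshold; the healthy one stays on the road.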
8.3.2.3 Autonomous Vehicles
AI technologies, such as machine learning and computer vision, play a critical role in the development of autonomous vehicles. AI algorithms enable autonomous vehicles to perceive, assess, and make decisions in real-world environments, which is essential for safe and efficient operation. Autonomous vehicles primarily use machine learning rather than symbolic AI. Machine learning algorithms process data from sensors in order to accurately perceive, assess, and react to the vehicle’s surroundings. Like all ML systems, these algorithms rely on pattern recognition, which comes with its own set of limitations, as discussed in previous chapters. These vehicles are equipped with sensors and algorithms that enable them to navigate roads and highways without human input. They use computer vision to detect obstacles, ranging sensors such as lidar (“light detection and ranging”) to measure distances, and machine learning algorithms to make decisions in real time. Autonomous vehicles have the potential to make our roads safer by reducing the number of accidents caused by human error, and they could also reduce traffic congestion and increase efficiency. In theory, they can really have a major impact on our lives. The reality is that the problem is more systemic than just technical. Vehicles are part of a large system that includes people, older vehicles, moving objects, regulations, and weather conditions, to name a few. In 2019 Elon Musk was talking about “robotaxis” and “not having to touch the steering wheel” or not needing to “look out the window”. In 2020 he was talking about Tesla vehicles delivering those capabilities. As of 2022, Tesla’s Full Self-Driving Capability (FSD) is still in beta, with vehicles requiring the driver to be attentive at all times, and system disengagements are quite frequent and do not happen only when requested.
Elon Musk is now talking about a wider release of the FSD beta, not mentioning any robotaxi service. The following are some of the reasons behind this reality check.
Safety
Autonomous vehicles require the development of robust and reliable sensors, software, and control systems. They rely on a wide range of sensors, including cameras, lidar,
radar, and ultrasonic sensors, to collect information about the environment. The quality and reliability of these sensors are crucial for the safe operation of autonomous vehicles. Currently, there are limitations in the range, accuracy, and reliability of these sensors, especially in challenging weather conditions such as heavy rain, snow, or fog.
Regulation
There is currently a variety of regulations around the world regarding autonomous vehicles, and these regulations are often inconsistent or outdated. In the United States, there is no single federal law that governs the use of AVs; instead, each state has the authority to regulate their use on public roads. This has led to a patchwork of regulations across the country, which has made it difficult for companies to develop and test AVs in a consistent manner. Some states, such as California and Nevada, have established regulations specifically for AVs, while others have yet to address the issue. The National Highway Traffic Safety Administration (NHTSA) has issued guidance for the deployment of AVs, but it does not have the authority to regulate their use on public roads. The agency has also proposed a set of safety assessment guidelines for AVs, intended to provide a framework for companies to evaluate the safety of their vehicles. In Europe, the European Commission has issued a draft regulation that would harmonize the regulation of AVs across the EU. The regulation would establish a common set of rules for the testing, deployment, and commercialization of AVs, ensuring that companies can develop and market their vehicles in a consistent manner across the EU. There are several limitations to the current state of regulation for AVs. One of the biggest issues is the lack of consistent standards for the design, development, and deployment of AVs.
This has made it difficult for companies to develop and test their vehicles in a consistent manner and has undermined the progress of the industry as a whole. Another challenge is the lack of consensus on the role that governments should play in regulating AVs. Some believe that governments should take a hands-off approach and let the market determine the best course of action, while others believe that the government should play a more active role in regulating the development and deployment of AVs.
Liability and impact on the insurance industry
In the event of an accident involving an autonomous vehicle, there is no clear consensus on who should be held responsible. This issue is complicated by the fact that autonomous vehicles use a combination of hardware, software, and data to make decisions, making it difficult to determine who is responsible in the event of a crash. Insurance companies are also debating the implications of autonomous vehicles, as they raise questions about the role of drivers in accidents and the extent to which car manufacturers, software developers, and data providers are responsible for the safety of their products. The current limitations in this area include a lack of clear guidelines and
regulations, as well as a limited understanding of the technology and the potential risks involved.
Infrastructure
Autonomous vehicles require a complex network of sensors, cameras, and other technology to function effectively. The infrastructure for autonomous vehicles is currently in a state of development, with various challenges and limitations that need to be addressed. One of the major challenges is the need for robust, reliable, and high-speed communication networks for vehicles to communicate with each other and with the infrastructure. This requires the development of new and advanced technologies, such as “Vehicle-to-Everything” communication systems. In terms of limitations, the lack of standardization and compatibility between different autonomous vehicle systems is a significant issue that needs to be addressed.
Privacy and Security
Autonomous vehicles collect a vast amount of data about their passengers and their surroundings. It is essential that this data is protected and that privacy rights are respected. There are many potential impacts on privacy. One of them is related to data collection and storage. This data can be used to track people’s movements, build profiles of their habits and preferences, and potentially compromise their privacy. ML algorithms can use the additional data points collected this way to further pinpoint individuals. In general, there is the question of who has access to the data vehicles generate and how it is shared between different stakeholders, such as vehicle manufacturers, technology companies, and governments. Data can also be used for surveillance purposes. Another major issue is on the security side. Hackers could access sensitive data stored in the vehicle’s systems, such as personal information or credit card details. They could also use autonomous vehicles as a stepping stone to gain access to other systems, such as traffic management systems or other vehicles, with potentially widespread disruption.
Another type of attack could be the manipulation of data from the vehicle’s sensors. For example, attackers could alter GPS data to mislead the vehicle and cause it to crash or deviate from its intended path. They could also attack the software that controls the vehicle’s systems, such as the accelerator and brakes, to cause accidents.
8.3.3
Potential Negative Impacts of AI/ML in Transportation
I already mentioned the potential negative impacts related to autonomous vehicles. Besides those, there could be more. Job displacement is one. Some of the potential job loss relates to autonomous vehicles and delivery drones, which could lead
to a reduction in the need for human drivers, couriers, and other transportation workers. Considering that the adoption of autonomous vehicles will be—at least—gradual, the impact is likely to be less significant in the short to medium term. Another potential negative aspect is the dependence on technology. As more transportation systems become automated and reliant on AI/ML technologies, there is a risk that people may become overly dependent on the technology and less able to operate vehicles or transportation systems without it. This might seem like an exaggeration, but the problem is already appearing in cases where humans presume the system is in control when it actually is not. The security risk is not restricted to autonomous vehicles. With more elements of the transportation system depending on AI/ML, the risk of hacking and cyber threats can be high. Cybercrime is, unfortunately, here to stay. The FBI’s Internet Crime Complaint Center reported more than 850,000 complaints in 2021, generating a loss of over $6.9 billion (FBI, 2012). Cybersecurity Ventures expects global cybercrime costs to grow by 15 percent per year over the next five years, reaching $10.5 trillion annually by 2025 (Magazine, 2020). This is a general estimate, not related to transportation in particular, but the more a system relies on automated, connected infrastructure, the more prone to attack it will be. Transportation is no exception.
8.3.4
Case Study 4: AI/ML in Transportation
Source: Wikimedia Commons; description: prototype; source: own work; author: Mamicris32; date: 10 February 2014; licensed under the terms of the Creative Commons Attribution-Share Alike 3.0 Unported license; edited using Picsart
The transportation system in New York City has been revolutionized by the implementation of a centralized transportation management system: an AI system that orchestrates autonomous vehicles and supports commuters. The system collects data from various sources, including sensors, GPS trackers, and weather reports, to analyze the current traffic situation and predict future trends. This data is fed into a central AI that manages the entire transportation network. The system collects real-time information on traffic congestion, delays, and accidents and provides alternative routes and transportation options to help commuters save time and avoid delays. The system considers weather conditions, road closures, and other factors that could affect the commute. It also analyzes individual commuter behavior, considering factors such as preferred mode of transportation, time of day, destination, and (if they opted in) their schedule, and uses this information to provide personalized recommendations on the best transportation options at that moment. The system can suggest taking an autonomous vehicle, a bus, or the subway, depending on the user’s preferences for cost, time, and comfort. Commuters select the transportation they want, providing the system with data to optimize the deployment of vehicles. Vehicles constantly provide the system with data on their condition and occupancy. Autonomous vehicles are equipped with advanced sensors and control systems that allow them to navigate the city’s complex road network safely and efficiently. They communicate with the central AI system in real time, allowing for dynamic routing based on traffic conditions and passenger demand. The system continuously analyzes traffic patterns and adjusts the routing of autonomous vehicles to minimize congestion and maximize the use of the available transportation resources.
The AI system can also predict future traffic patterns based on historical data, weather forecasts, and special events, allowing it to proactively adjust the transportation network to accommodate increased demand. The system is also able to schedule vehicle maintenance by analyzing the data received in real time.
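The preference-based mode recommendation described in this (hypothetical) case study could look like a simple weighted scoring rule. Everything below (the options, their normalized attributes, and the preference weights) is invented for illustration.

```python
# Toy sketch of personalized mode recommendation: score each transportation
# option against a commuter's stated preferences for cost, time, and comfort.
# All options and weights are illustrative assumptions.

def score(option, prefs):
    """Lower cost/time and higher comfort are better; prefs weight each factor."""
    return (prefs["cost"] * (1 - option["cost"])
            + prefs["time"] * (1 - option["time"])
            + prefs["comfort"] * option["comfort"])

def recommend(options, prefs):
    """Return the name of the best-scoring transportation option."""
    return max(options, key=lambda name: score(options[name], prefs))

# Attributes normalized to 0..1 (1 = most expensive / slowest / most comfortable).
options = {
    "autonomous vehicle": {"cost": 0.8, "time": 0.3, "comfort": 0.9},
    "bus":                {"cost": 0.2, "time": 0.7, "comfort": 0.4},
    "subway":             {"cost": 0.3, "time": 0.4, "comfort": 0.5},
}
budget_commuter  = {"cost": 0.6, "time": 0.3, "comfort": 0.1}
comfort_commuter = {"cost": 0.1, "time": 0.3, "comfort": 0.6}
```

Under these made-up numbers, the budget-minded commuter is steered to the subway, while the comfort-minded one gets the autonomous vehicle, which is the kind of personalization the case study imagines.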
8.3.5
Fact Check 3: AI in Transportation
Just like for the previous case studies, there is no integrated AI-driven solution in transportation. One of the challenges is the integration of the diverse transportation modes and systems that exist in a city or region. This restricts the majority of current applications to single cities or relatively small regions, or to addressing only the
information part of the problem, as with Google Maps. Transportation is an essential part of society, and it inherits the core critical issues of societies and, therefore, people. There are examples of use in intelligent transportation systems, mostly related to traffic and fleet management, in public transportation, and in autonomous vehicles. Let’s check what is available in those areas. . Intelligent Transportation Systems (ITS). While the concept of ITS has been around since the 1980s, it has so far been an umbrella for different initiatives that—once integrated—could actually create an “intelligent transportation system” for the geographical area of deployment. So far, no “system” is in place, but there are several potential building blocks for it. The recent popularity of AI/ML is generating a lot of expectations about what this technology can do once implemented. The reality is AI/ML could provide most of the benefits once the different building blocks are integrated. It is already a large umbrella, though: according to a 2022 report from ResearchAndMarkets, the estimated global value was over $28 billion in 2022, and it is expected to reach over $51 billion by 2030 (ResearchAndMarkets, 2022). Los Angeles, London, Singapore, and Beijing have all implemented some level of advanced traffic management systems to monitor and optimize traffic flow. The system in Singapore (Expressway Monitoring and Advisory System—EMAS) monitors traffic on expressways. It is part of the Land Transport Authority’s (LTA) plan. The project started in 1998 and has covered every expressway since 2000. It is an incident and traffic management system. Its main functions are: – Monitoring traffic conditions: using a network of electronic cameras, it monitors traffic conditions on Singapore’s expressways. The cameras capture images of traffic flow and send them to the LTA’s Traffic Management Center. – Detecting incidents: it detects accidents, vehicle breakdowns, or other incidents.
When an incident is detected, EMAS alerts the LTA’s Traffic Management Center. – Providing traffic information: it provides traffic information to motorists via LED signboards. The signboards display messages about traffic conditions, such as estimated travel times, recommended travel speeds, and road closures. – Rerouting traffic: EMAS is also used to reroute traffic around incidents. This is done by displaying messages on the LED signboards that direct motorists to alternative routes. . EMAS uses predictive analytics to analyze historical and real-time traffic data, as well as other relevant factors such as weather conditions and events, to make predictions about future traffic patterns and congestion. By identifying recurring patterns and trends, the system can anticipate potential bottlenecks or congested areas in advance. . As part of the Singapore AI Strategic Plan, they are also working on an “Intelligent Freight Planning system” with the goal of “optimizing the movement of freight to
improve productivity for businesses and traffic efficiency” (from their National Artificial Intelligence Strategy document). The AI should provide intelligent routing and scheduling of trucks. This is an early-stage project. . Another example is the Beijing Traffic Management Integrated Information Platform. The project started in 2005, targeting the 2008 Olympic Games, with an estimated cost of $1.5 billion. At that time, it was one of the most advanced traffic management systems in the world. The system is still evolving, with advanced analytics/AI helping to provide real-time traffic monitoring and a vast public transportation integration. The integration provides users with real-time information and a payment system. It also allows optimization of the service by analyzing usage, traffic, and the different transportation options available at a given moment. In 2018 China introduced the DiDi Smart Transportation Brain to more than 20 Chinese cities. DiDi is a global player in the mobile transportation platform business. It offers a full range of app-based transportation services for 550 million users across Asia, Latin America, and Australia, including taxi, express, premier, luxe, bus, designated driving, enterprise solutions, bike sharing, e-bike sharing, automobile solutions, and food delivery. Tens of millions of drivers who find flexible work opportunities on the DiDi platform provide over 10 billion passenger trips a year. DiDi uses AI in its operations for demand prediction, vehicle dispatching, and route optimization, as well as for back-office applications such as safety and fraud detection. DiDi collaborates with Chinese government agencies, sharing data. DiDi provides China ITS with ride requests, pickup and drop-off locations, and trip routes. It integrates real-time traffic data from China ITS into its platform and uses it for optimal route identification. . Autonomous vehicles.
The Society of Automotive Engineers (SAE) defines six levels of driving automation, ranging from 0 (fully manual) to 5 (fully autonomous) (International, 2019). The classification has been adopted by the U.S. Department of Transportation (see Fig. 8.2). Several of the most recent cars reach Level 2. The game changer is from Level 3 on. In January 2023, Mercedes-Benz became the first automaker with a Level 3 system approved for use in the US, limited to Nevada so far. The system—Drive Pilot—comes with some limitations: the driver must keep their face visible to the vehicle’s in-car cameras at all times, and the system can be engaged only at speeds up to 40 mph. Drive Pilot uses data from a lidar sensor to construct a 3D model of its surrounding environment, as well as microphones to detect approaching vehicles. Audi, BMW, Ford, and Volvo are working on their own Level 3 systems. Tesla is not yet offering a Level 3 driving automation system, but it is working on one. Level 4 vehicles only require humans to intervene if things go wrong or there is a system failure. That means these cars do not require human interaction in most circumstances. However, a human still has the option to manually override. There are some Level 4 vehicles in the testing stage, primarily focusing on ridesharing. Google/Alphabet’s
Fig. 8.2 Levels of driving automation. Source Wikimedia Commons; Description: A table summarizing SAE’s levels of driving automation for on-road vehicles; Source http://cyberlaw.stanford.edu/ blog/2013/12/sae-levels-driving-automation; Author: Bryant Walker Smith; Date: 8 December 2013; licensed under the terms of the Creative Commons Attribution 3.0 Unported license
Waymo is one, and Navya, a French company, is another. According to their website, Waymo was founded in 2009 as the Google Self-Driving Car Project and has traveled 20 million miles on public roads. They offer a limited commercial self-driving car service in Phoenix, Arizona. The service is only available to a small group of invited users. While the technology is always open to improvement and has never been proven at a larger scale, the most immediate issues are related to regulations and public acceptance. It is difficult to see, in the short to medium term, those vehicles being approved to travel in overcrowded cities with no infrastructure to support them, in a mixed traffic environment with all kinds and ages of vehicles. São Paulo, Brazil, is ranked as the most congested city in the world (according to the TomTom Traffic Index (TomTom, 2022)). It has a complex road network with irregular lane markings, potholes, and inadequate signage. It has many pedestrians and cyclists sharing the roads. City mapping is poor due—in part—to irregular urban development. Driving behavior may be on the aggressive side, with a not-so-great adherence to traffic rules. This is just one example. Several cities share most of those issues: Mumbai, India; Jakarta, Indonesia; Cairo, Egypt; Lagos, Nigeria; Manila, Philippines. Those cities host more than 130 million people in their metropolitan areas. This is a large segment of the population that most likely will not see autonomous vehicles in the near future.
8.4
Finance
8.4.1
Finance and Technology—Fintech
Finance is one of the sectors that has historically adopted technology at a rapid pace. In the past, technology was primarily used to improve the services provided by the industry. In 1836, pneumatic capsule transportation was invented; American banks would later use it to let customers withdraw money and make deposits without leaving their cars. In 1865 Giovanni Caselli developed the “pantelegraph” to verify signatures in banking transactions by sending and receiving transmissions over telegraph cables. In 1950, Diners Club introduced the first universal credit card; this is considered the beginning of modern financial technology. In 1960 Quotron, a Los Angeles-based company, was the first to offer stock quotes on an electronic screen, replacing printed ticker tape. In 1967 the UK bank Barclays installed the first ATM in a London suburb. In 1980 electronic cash counters were introduced in the UK. In 1982 E-Trade became the first online brokerage. In 1983 online banking was introduced in the UK by the Nottingham Building Society. In 1988 most of the banks in the United States set up their first transactional websites for Internet banking. Then came digital check clearing in 2004, blockchain in 2008, Bitcoin in 2009, and mobile banking in 2010. All of these are examples of how technology was used by the financial industry to do its job. I couldn’t find the name of whoever coined the term “fintech” (as in financial technology), but the term started popping up in the early 90s, coinciding with the creation of the Financial Services Technology Consortium, established by Citicorp. According to Mordor Intelligence, the United States fintech market reached a size of $4 trillion in 2022, with a forecasted CAGR of 11% over the period 2019–2028 (Intelligence, 2023). Fintech plays a role in all segments of Finance: financing, asset management, payments, insurance, and infrastructure. All of this is to say that Finance has always been on the front line of technology users. As a key technology, AI/ML is playing an increasing role in fintech.
8.4.2
Leveraging AI/ML in Finance
One of the most prominent applications of AI in fintech is the use of algorithms to analyze large amounts of data, including consumer behavior and market trends. Another area of application is digital payments and mobile banking. AI algorithms can be used to analyze transaction data to identify potential fraud and prevent money laundering. They can also be used to improve the customer experience by providing more relevant information and recommendations in real time.
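As a minimal illustration of the transaction screening mentioned above, the toy rule below flags a payment whose amount is far outside a customer's own spending history. Production systems use learned models over many features; the z-score rule, threshold, and sample amounts here are all assumptions for the sketch.

```python
# Hedged sketch of transaction anomaly screening: flag a payment when its
# amount deviates strongly from the customer's own history (a toy z-score
# rule; real fraud models learn from many features beyond amount alone).
import statistics

def is_anomalous(history, amount, z_threshold=3.0):
    """Flag amounts more than z_threshold standard deviations above the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return (amount - mean) / stdev > z_threshold

past = [42.0, 55.0, 38.0, 60.0, 47.0]  # customer's recent payment amounts ($)
```

A $500 charge against this history is flagged, while a $65 one passes; a real system would combine many such weak signals rather than rely on one rule.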
AI algorithms can be used to analyze financial data and make predictions about future trends and market movements. This can help financial institutions make informed investment decisions and improve their risk management strategies. AI can automate many tasks and processes in Finance, such as fraud detection and customer service. This can increase efficiency and reduce costs for financial institutions while also improving the customer experience. AI can be used to analyze and interpret large amounts of financial data in real time, providing insights that would not be possible with traditional methods. There are ethical and privacy concerns associated with the use of AI in Finance, such as the potential for misuse of sensitive financial data. The following is a recap of some of the applications of AI in Finance, with some of them applicable to other industries: . Fraud detection and anti-money laundering: AI algorithms can help detect potential fraud by analyzing large amounts of financial data and identifying anomalies. This can help financial institutions prevent money laundering and other financial crimes. . Risk management: AI can be used to analyze large amounts of data from various sources to help financial institutions better understand and manage financial risks. . Trading: AI algorithms can be used to analyze market trends, perform quantitative analysis, and make predictions about stock prices. Some financial institutions use AI to automate the trading process, reducing the time and effort required for human traders to make decisions. The use of artificial intelligence (AI) in trading has become increasingly widespread in recent years. The technology is used to analyze large amounts of data and identify patterns and trends that can inform investment decisions. AI-powered trading algorithms can perform sophisticated calculations and make decisions in milliseconds, which is much faster than a human trader. 
This allows them to make trades at the optimal time and potentially profit from market inefficiencies. AI-powered trading systems can also run 24/7, providing a continuous flow of trading information, whereas human traders need time to rest and recharge. The current status of the use of AI in trading varies depending on the type of financial market. For example, AI is widely used in the high-frequency trading of stocks and other securities, but its use in the foreign exchange market is still relatively limited. However, AI is also not without its challenges in Finance. One of the main challenges is that AI-powered trading algorithms can be susceptible to manipulation, and there is a risk of unethical or illegal practices such as insider trading. Additionally, AI systems can be vulnerable to hacking, which could compromise sensitive financial information. There are also concerns about the potential for AI-powered trading systems to contribute to market instability, such as flash crashes, by executing large numbers of trades in a short period of time.
. Customer service: AI chatbots and virtual assistants can provide quick and convenient customer service, helping financial institutions handle routine tasks and free up staff to focus on more complex issues. . Portfolio management: AI algorithms can help financial advisors and asset managers analyze large amounts of data to make investment decisions, providing recommendations to clients based on their risk tolerance and financial goals. . Insurance: AI algorithms can be used to analyze large amounts of data to help underwriters assess the risk of an insurance policy and set appropriate premiums. . Credit analysis: AI algorithms can be used to analyze large amounts of data about a potential borrower to help financial institutions assess their creditworthiness and make lending decisions. Another area that can benefit from AI/ML is private equity. Deal sourcing is one: algorithms using AI could help identify startups and undervalued companies faster than traditional research methods. Due diligence could benefit from AI both in terms of searching for data and creating reports. Risk management could benefit even more than other forms of investments due to the volatility of the sector. One of the most intriguing areas of application of AI in Finance is financial advising, an area generally referred to as “robo-advising.”
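The credit analysis bullet above can be made concrete with a toy version of ML-based scoring: a logistic model maps a few applicant features to a default probability. The coefficients, cutoff, and sample applicants below are invented for illustration, not taken from any real lender's model.

```python
# Toy sketch of ML-style credit scoring: a logistic model over a few
# applicant features. Weights are hypothetical; a lender would fit them
# to historical repayment data.
import math

def default_probability(income_k, debt_ratio, late_payments):
    # Assumed coefficients: higher income lowers risk; debt load and
    # late payments raise it.
    z = -1.0 - 0.02 * income_k + 3.0 * debt_ratio + 0.8 * late_payments
    return 1 / (1 + math.exp(-z))

def approve(applicant, cutoff=0.3):
    """Approve when estimated default probability is below the cutoff."""
    return default_probability(*applicant) < cutoff

good = (90, 0.15, 0)   # (income $k, debt-to-income ratio, late payments)
risky = (35, 0.6, 3)
```

Under these made-up weights the first applicant is approved and the second declined; the point is only the shape of the pipeline (features in, probability out, threshold decision).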
8.4.2.1 Robo-advising
Asset management is front and center, giving financial companies the possibility to automate the portfolio management process and provide more complex investment strategies and advice. This includes using algorithms to analyze market data, perform real-time risk assessments, and make trades based on market conditions and trends. An interesting application of AI/ML in this area is so-called “robo-advising”. Robo-advisors are online services providing customized portfolio management. My first application in AI was in this area: an “expert system” written in OPS-5 (a rule-based language) on a Digital Equipment (DEC) computer, then ported to an Apple Mac. It was the mid-80s. My system was never used in real cases due to its very limited “knowledge”. Robo-advising is expected to reach $35.6 billion in 2028, with a CAGR of about 28% over the period 2023–2028 (from Reportlinker.com—(ReportLinker, 2023)), with most of the traditional brokers offering robo-advising services, typically targeting individual investors, especially those who are younger, tech-savvy, and looking for low-cost, convenient, and efficient investment solutions. Among the traditional brokers offering a robo-advisor service there is Vanguard. Of the $8 trillion in assets they manage globally, about $200 billion are in robo-advising, across two services: one “pure” robo and one “hybrid”. Their platform uses algorithms to create and manage portfolios of low-cost index funds based on the specific goals, risk tolerance, and investment time horizon of each individual investor. They serve more than 1 million clients this way.
8 Impacts on Specific Industries
On the non-traditional broker side, there is Betterment, managing about $27 billion with the same dual offering of “pure” robo and “hybrid” services. Banks—such as JPMorgan Chase, Bank of America, and Wells Fargo—are also offering robo-advisory services. JPMorgan Chase’s robo-advisory service—for example—uses a combination of AI and human expertise to manage portfolios. The AI analyzes market data and uses its models to make investment decisions, while the human team provides oversight and ensures that the portfolios are in line with the client’s investment goals and risk tolerance. The service also offers personalized advice and guidance, with clients able to access a team of financial advisors if needed. Staying on robo-advising, it is easy to see a growing use of AI/ML. This area is very data-driven, with conditions changing rapidly and constantly, making it a great candidate for an ML type of AI. Most of traditional asset management relies on factors such as risk evaluation, data analysis, and pattern recognition. All of them could receive a major boost from ML. AI can automate many of the routine tasks associated with asset management and can process large amounts of data much faster than a human analyst, identifying patterns in data and making predictions. It is easy to see the growing capacity of those robo-advisors, integrating more diversified sources to make more informed predictions and evaluating scenarios with a combination of size and speed that humans could not match. The combination of an increase in the size of the assets managed this way and the speed of the decision making may lead to an increase in the volatility of the market, in particular when used in combination with algorithmic trading. On the positive side, this combination could lead to more efficient and effective portfolio management, as well as the ability to quickly respond to market changes and take advantage of investment opportunities right as they arise.
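To make the portfolio-construction step concrete, here is a minimal sketch of the kind of rule-based mapping from risk tolerance and time horizon to index-fund weights that such platforms automate. The risk buckets, fund names, and percentages below are illustrative assumptions, not any provider’s actual model; real robo-advisors use far richer optimization.

```python
def allocate_portfolio(risk_tolerance: str, years_to_goal: int) -> dict:
    """Toy robo-advisor allocation: map risk tolerance and time horizon
    to weights over low-cost index funds. All numbers are illustrative."""
    base_equity = {"conservative": 0.30, "moderate": 0.60, "aggressive": 0.85}
    if risk_tolerance not in base_equity:
        raise ValueError(f"unknown risk tolerance: {risk_tolerance}")
    equity = base_equity[risk_tolerance]
    # Simple glide path: as the goal gets close, shift toward bonds.
    if years_to_goal < 5:
        equity = max(0.10, equity - 0.20)
    bonds = round(1.0 - equity, 2)
    return {
        "stock_index_fund": round(equity * 0.7, 2),  # domestic equity slice
        "intl_index_fund": round(equity * 0.3, 2),   # international equity slice
        "bond_index_fund": bonds,
    }
```

For a hypothetical moderate investor 20 years from their goal, this toy allocator returns 42% domestic equity, 18% international equity, and 40% bonds; shortening the horizon shifts weight toward bonds.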
8.4.3
Potential Negative Impacts of AI/ML in Finance
Like in most applications of AI/ML, security, lack of transparency, and bias and discrimination are major risks in this case. Finance has always been a target for cyber-attacks, since the potential rewards for the attackers are tangible. Finance is a system of systems, with many sub-systems involved. The more those sub-systems rely on digital assets (and they all do), the more targets hackers can have. The deeper hackers can infiltrate the whole system, the more difficult it is to mount a counterattack. AI/ML have the potential to manage the entire process from start to finish, making high-level decisions that can impact multiple accounts and increasing the potential impact of an attack. ML is data-driven. In this case, systems like robo-advisors use data that their creators consider to represent an average population. That “average population” is what the robo-advisors consider “average”. Underrepresented parts of the population would most likely not be there. That means the advice may not be the best for them.
8.4
Finance
Like in all the other applications, and as with most disruptive technologies, there will be some job displacement in Finance due to AI/ML. According to the World Economic Forum, the adoption of AI and other advanced technologies in financial services could lead to the loss of over 14 million jobs by 2027 (Forum, 2023). The major impact will be on positions that involve data entry, analysis, and other repetitive tasks. Compliance and risk management are other areas potentially impacted. Finance still relies on humans for tasks requiring a high degree of judgment, creativity, and interpersonal skills. As a result, jobs such as customer service and leadership positions may be less at risk. Financial advising is still mostly a human task, given that robo-advisors account for less than 10% of the total and that this 10% is not a representative sample of the average population of investors.
8.4.4
Case Study 5: AI/ML in Finance
Source Base: Wikimedia Commons; Description (Hungarian, translated): Thomas Alva Edison, American electrical engineer, businessman, inventor; Source http://galeria.index.hu/tech/2011/10/13/115_eves_az_ujpesti_tungsram_gyar/14; Author: Unknown author; Date: circa 1878; Public domain because it was published in the US before 1928. Edited using Picsart
This is the story of Tom, a professional working in the technology industry. As an experienced technologist, he had a good understanding of the companies in his industry and he had a hunch that some of them were about to grow exponentially. But he didn’t have the time and the financial knowledge to properly monitor the stock market on a daily basis to make the most of his intuition.
That’s when he remembered the chatbot his financial institution had developed for stock trading. He logged in, opened up the chat window and started typing, “Hey, can you help me make some trades on a few companies I think are going to do well in the technology sector?”. The chatbot replied, “Sure thing, Tom! What companies are you thinking of?”. Tom listed off a few companies he had been following closely, explaining why he thought they were going to experience significant growth in the near future. The chatbot considered Tom’s investment goals, risk tolerance, and current financial situation and then analyzed millions of data points, including historical market trends, company financial reports and news articles, to identify investment opportunities that aligned with Tom’s preferences. The chatbot came back with a recommendation for which companies to invest in and when, also providing a detailed explanation of its investment recommendations, allowing Tom to make informed decisions. Once Tom decided on his investment portfolio, the AI chatbot was able to execute trades on his behalf, all while continuously monitoring the market for any potential risks or opportunities. The AI is able to learn and adapt over time, considering Tom’s investment performance and feedback to refine its recommendations. The financial institution behind the AI chatbot also uses AI for risk management and fraud detection. The AI is able to analyze large amounts of data to identify any unusual patterns or behaviors in transactions, helping to prevent fraudulent activity. It is also able to monitor and analyze market trends in real time, helping the financial institution to make informed decisions about its investments and manage risk more effectively.
8.4.5
Fact Check 4: AI in Finance
As mentioned before, Finance is one of the industries leveraging the most on technology in general and data-driven technology in particular. Going toward AI/ML is a natural next step, and, as we mentioned, it is already in use, even if in a still limited way. Since we already have several examples of applications, I’ll focus on papers to have both a formal context and a view of the trends. A paper from Cao (2021) shows how AI & Data Science and Economics & Finance (“EcoFin”) contribute to Fintech, with the following figure representing the convergence (see Fig. 8.3). From the same source, the following figure represents where AI-related elements are currently applied (see Fig. 8.4). In recent times, AI/ML has been moving faster than the industries it could potentially be applied to. This is true for Finance as well, leaving a gap between the AI/ML community (and the tools, techniques and systems they develop) and the EcoFin community. In order to get a system like the one Tom is using, the two universes need to converge better, like in the “smart EcoFin” in the figures above. Explainability is one: users may
Fig. 8.3 Smart FinTech: the convergence of AI and Data Science (cognitive modeling, conversational AI, data science, knowledge representation, machine learning, NLP, optimization, simulation, social network analysis, statistical modeling) with EcoFin (accounting, auditing, banking, compliance & regulation, insurance, investment, marketing, payment, trading, wealth). Source Author’s elaboration on Cao (2021). AI in Finance data
Fig. 8.4 AI in finance: mathematical/statistical models (e.g., time series); classic analytics and learning methods (e.g., optimization methods); modern analytics and learning methods (e.g., NLP, network analysis); computational intelligence methods (e.g., neural computing); deep financial modeling (e.g., deep cross-market analysis); hybrid methods (e.g., behavioral economics); theories of complex systems (e.g., game theory). Source Author’s elaboration on Cao (2021). AI in Finance data
want to know the details of the choices the system is making. Geopolitical and social analysis is another: we live in a complex and interconnected world, and Finance is deeply impacted by non-financial factors that are not directly accountable, with a reaction time that has never been so short. That means those factors need to be accounted for and evaluated in near real-time. The current Large Language Models operate in batch mode: ChatGPT from OpenAI, in 2023, is based on facts up to 2021, with the delay due in part to the complexity of the training. To make them work in real-time, they would have to be entirely revised. Deep customization is another: most of the investment strategies currently in use are based on quite wide categories of investors. This is due to the limited capabilities of the tools traditionally available to investors and brokers. In a “datafied” society, we all can potentially have way more data points to better profile us as investors. Risk evaluation: most of the access to financial instruments is evaluated via methods—such as the credit score—providing a one-dimensional view. With all the data we have now, we could create a more accurate and dynamic model. Cyber security and information warfare: cyberattacks are less and less purely technology-driven and more and more systemic. From social/online modeling to deep fakes, stealing identities is becoming increasingly easy to do and difficult to detect/prevent. Finance is obviously one of the main targets for those forms of hacking. There will be significant developments in this area.
8.5
Education
I have worked in higher education for about eight years, teaching about 150 students each year, managing research projects and taking part in the life of my school. My university is an Engineering school, but I still miss the level of technology I had in industry. We teach technology, we research technology, and we create technology, but I cannot say Education is the industry I have seen using technology the most. The pandemic gave a big boost, but the technology involved was pretty much a larger-scale application of something I used in industry 10 years ago. Yes, a larger scale means more complexity, but still. I have colleagues using paper for the exams (and not just as a reaction to the endemic cheating…). I feel the frustration of not having full workflow automation, and I am surprised by the number of processes with manual steps in them.
8.5.1
Education and Technology
That doesn’t mean technology hasn’t had an impact on Education: from the introduction of printing to computer-assisted learning, computer-managed instruction and computer-based training. The Internet created online learning, which was close to 100% of the Education we provided during the pandemic. Also, mobile is playing a significant role
in our teaching, with several of the students in my online classes able to join while on the go. Excluding the use of simulation in training, the technology in Education is primarily instrumental to traditional teaching. The area of technology used in Education is sometimes called “edtech”, a term used to describe the application of technology to support teaching and learning. This includes software and hardware products designed for the education sector as well as more general tools such as social media and online platforms that can be used to support educational goals. Edtech covers a wide range of products and services, from online learning platforms and educational software applications to digital textbooks, educational games, and interactive whiteboards. Some examples of edtech companies include Coursera, edX, Khan Academy, Quizlet, and Udacity. The following figure, based on data from Granić (2022), provides an idea of how technology is currently used (see Fig. 8.5). In their 2022 report “How Technology is shaping learning in higher education”, McKinsey & Company (2022) highlights the learning technologies that are enabling changes in Education. The COVID-19 pandemic was a major accelerator for the use of technology in many industries, including education. From the McKinsey report, the key technologies are virtual collaboration; interactive simulation; advanced course delivery; and student progress monitoring. All of them are already here, but they are in an evolutionary stage. AI and ML will play a central role in this evolution.
Fig. 8.5 Technology in education: e-Learning; m-Learning; Learning Management Systems; Social Media Services; Virtual Technology; Other. Source Author’s elaboration on Granić (2022). Educational Technology Adoption data
8.5.2
Leveraging AI/ML in Education
Partially leveraging what McKinsey & Company highlighted, AI/ML can have, and hopefully will have, a positive impact on Education. I see two categories of applications.
8.5.2.1 AI/ML to Better Do What We Already Do
AI/ML can help educators better understand students’ learning needs and preferences. By analyzing data on students’ performance and behavior, AI/ML algorithms can identify patterns and generate insights that can inform instructional design and help educators tailor their teaching to individual students. For example, AI/ML can help identify students who are struggling and suggest targeted interventions to help them catch up. This can help students learn at their own pace and according to their own needs and preferences. AI/ML can also help automate repetitive administrative tasks, such as grading and record-keeping. This can free up teachers’ time and allow them to focus on more meaningful aspects of their work, such as developing lesson plans and providing personalized feedback to students. AI/ML can help educators develop and deliver more effective and engaging content. For example, AI/ML algorithms can analyze students’ interactions with digital content and generate insights into what works and what doesn’t. This can inform the design of more engaging and effective digital learning materials.
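As a minimal illustration of the “identify struggling students” idea above, the following sketch flags students from their assessment scores. The 70-point threshold and the simple declining-trend rule are assumptions for illustration; a real system would model far richer behavioral and engagement data.

```python
def flag_struggling(scores_by_student: dict, threshold: float = 70.0) -> list:
    """Flag students whose recent average is below a threshold, or whose
    scores are trending down while near it. Rules are illustrative only."""
    flagged = []
    for student, scores in scores_by_student.items():
        if not scores:
            continue
        recent = scores[-3:]                       # last three assessments
        avg = sum(recent) / len(recent)
        declining = len(scores) >= 2 and scores[-1] < scores[0]
        if avg < threshold or (declining and avg < threshold + 10):
            flagged.append(student)
    return sorted(flagged)
```

An instructor could run this after each assessment and use the flagged list only as a prompt for human follow-up, keeping the intervention decision with the teacher.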
8.5.2.2 AI/ML to Provide a Better Education
Society is changing; we as individuals are changing. The way we communicate and get informed is changing. The way we learn is changing. When I started teaching Python—a computer language—I had a textbook for my students to read. They rarely read it. I now have a more on-the-job approach. I introduce the new topics using slides and examples, and I give them a group non-graded in-class assignment to practice and then an individual graded assignment for the next class. We recently revised most of the courses in one of my master’s programs with a more interactive and multimedia approach. AI/ML helps us move further, adding more interaction and some gamification with customized, adaptive approaches. Some of my classes are relatively large, with 50–60 students. Everybody is different, but my assignments are the same for all of them. Adaptive learning would be a great way to address the differences. AI algorithms could provide personalized learning experiences for individual students based on their abilities, learning styles, and interests. This could help to address the issue of students falling behind in class due to the pace of instruction being too slow or too fast for their individual needs. Assignments and related grading are essential parts of the learning experience for the students. Grading can be time-consuming and can be a source of frustration for the students. I have seen more and more cases of cheating over the years. There are websites that pay students to post the solutions for assignments and then sell them. I had students
creating WhatsApp groups with a “tutor” doing the assignments for them. I constantly have students “sharing” the solution in individual assignments. We do have tools—like MOSS for coding or Turnitin for essays—but all have major limitations, and none of them are “intelligent” enough to provide a definite answer to the question: did the student cheat or not? One way AI/ML can help detect plagiarism is by using natural language processing algorithms that can identify similar phrases and sentence structures between texts. It could also analyze the tone, vocabulary, and style of writing to identify when work is not the student’s own. It could identify patterns and trends in student work and flag potential cases of plagiarism. Those systems can incorporate data from multiple sources. For example, they can scan the Internet, online databases, and past assignments to compare student work with existing content. By comparing work with a large corpus of data, AI/ML can identify similarities that may not be detected by a human reviewer. Since their introduction to the general population, we have all been enjoying Large Language Models (LLMs) like ChatGPT. As mentioned before, so far, they are generalists: good for everything but—still—not great at anything. A more specialized LLM could play an essential role as a tutor, trained in the specific domain and able to adapt to individual students. Down the road, the LLM could provide feedback to the instructor, to be used in the overall grading: for example, the type of questions and the progression of the complexity of the questions could be metrics to better understand how the student is progressively mastering the topic.
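The similarity-based plagiarism detection described above can be sketched with a bag-of-words cosine similarity over a corpus of prior work. Production tools such as Turnitin or MOSS use far more sophisticated representations, so treat this purely as an illustration of the principle; the threshold is an arbitrary assumption.

```python
import math
import re
from collections import Counter

def _vectorize(text: str) -> Counter:
    """Lowercase bag-of-words term frequencies."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: str, b: str) -> float:
    """Cosine of the angle between the two term-frequency vectors."""
    va, vb = _vectorize(a), _vectorize(b)
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def flag_similar(submission: str, corpus: dict, threshold: float = 0.8) -> list:
    """Return (doc_id, score) pairs whose similarity exceeds the threshold."""
    hits = []
    for doc_id, text in corpus.items():
        score = cosine_similarity(submission, text)
        if score >= threshold:
            hits.append((doc_id, round(score, 2)))
    return hits
```

A human reviewer would then examine only the flagged pairs; as noted above, no such score by itself answers the question of whether the student cheated.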
8.5.2.3 Potential Negative Impacts of AI/ML on Education
Like in most applications of AI/ML, security, lack of transparency, and bias and discrimination are major risks in this case also. As long as we have AI/ML systems that are based on data—like the current LLMs such as ChatGPT—bias and discrimination will be an issue. I see a growing interest in developing “custom” LLMs, meaning LLMs trained on a defined set of data. The bias will still be there, but it will be defined upfront. Education is part of the culture. I don’t see the same behavior in students coming from different countries, for example. Worldwide culture is much more homogeneous today than 40 years ago, thanks to the pervasive diffusion of communication systems. Printing and television were some of the earlier enablers of the homogenization of cultures. The Internet and social media have moved it to a higher and more pervasive level. But still, there are differences. Creating population-specific systems would help address the bias issue. But then we should define criteria to compare the outcomes from the different systems to avoid unwanted discrimination due to different levels of complexity of the Education the different systems could provide. Nothing that cannot be addressed, but I don’t see it on the horizon yet. There will be job displacement in Education due to AI/ML. Grading assignments, providing personalized learning recommendations, and monitoring student progress are tasks currently relying pretty much 100% on humans. The introduction of AI/ML on
those tasks will reduce the demand for those human skills. As we mentioned, the nature of teaching and learning may change with the introduction of AI/ML-driven systems able to adapt the Education they provide. The delivery of Education could have a large AI/ML component providing more engaging and interactive ways of learning, which could reduce the need for traditional lectures and presentations. This could require educators to develop new skills in areas such as technology, content curation, and facilitation. There is also a potentially significant impact on so-called “edtech”: AI/ML could reduce the need for certain types of edtech jobs, such as software developers and user experience designers.
8.5.3
Case Study 6: AI/ML in Education
Source Base: Wikimedia Commons; Description: Alan Turing Sculpture at Bletchley Park; Source Geograph Britain and Ireland; Author: David Dixon; Date: 5 September 2016; licensed under the terms of the cc-by-2.0 Edited using Picsart
This is the story of Al, a STEM professor teaching and researching in a leading engineering college. In his teaching, Al uses an AI-powered learning management system (LMS) that is customized to his way of teaching. The system analyzed the courses Al taught in the past, correlated them to the feedback provided by the students and recommended adjustments in terms of content updating, presentation and distribution over the semester as well as
growth of complexity of the assignments. Using a chatbot, Al interacted with the system to revise, refine and apply the recommended changes. The LMS is also able to personalize learning experiences for each student based on their individual needs and progress. The LMS analyzes large amounts of data on student performance, including assessment results and engagement metrics, to identify areas where students are struggling and offer targeted support. The LMS is also equipped with an AI-powered chatbot acting as a student-specific tutor, letting Al and the TA focus on the course itself and their research activities. Al has recently implemented an AI-powered plagiarism detection system in his version of the LMS. This system is able to accurately detect and trace any instances of plagiarism in student work and recommend to Al the appropriate course of action for each case. This has helped to ensure academic integrity and uphold the standards of the college. For his research, Al is using a customized version of the AI-based system offered by his university. The system has access to a vast set of research and can interact via a chatbot with Al to refine his ideas based on what has already been done and what the research gaps are. The customization is essential to have recommendations based on semantic proximity to the corpus of research Al has done so far. It also helps Al with publishing papers. Using an NLP system and leveraging the vast number of research papers it has access to, it can compare the content of the paper Al is going to publish with existing knowledge in the field to determine its novelty and potential impact. Finally, it helps Al in determining journals and conferences to present his research by finding the best balance between acceptance rate and impact factor based on the estimated relevance of the paper. Al is particularly happy with the new computing capabilities provided by his university.
He is now using a hybrid quantum computer managed by an AI that is able to engage either regular/classic computing or the quantum one, based on the type of micro-task Al is asking to do.
8.5.4
Fact Check 5: AI in Education
Let’s start by comparing Al’s experience with what is available and/or coming to Education in terms of the use of AI, starting with the status of current Learning Management Systems (LMSs). The most popular ones, like Canvas or Moodle, do not yet have native AI in their products. There is the possibility to connect them to outside tools—that can have AI in them—but no integration. Some LMSs are a bit more advanced—like D2L Brightspace—but they all seem to offer primarily analytics tools with some level of sophistication. The chatbot Al is using in the case study above seems to be still a few years away.
Plagiarism detection is getting better, with Turnitin announcing in 2023 the launch of its AI writing detection capabilities to help instructors identify when AI writing tools may have been used to write any part of the content submitted in a student’s assignment. On the research side, support from AI is still in its infancy. There are tools—such as Elicit—providing more “intelligent” searches of papers in a given area, but no bot-like interactive search yet. Apart from more or less advanced analytical tools, the support for finding proper journals for publication is limited. In terms of computing capabilities, we mostly rely on cloud computing when we need more power. When we develop a specific project, some of the funds may be used to acquire hardware. In those cases, we may have significant computing power at our disposal, always limited to the specific project and always at the proof of concept/prototype level. In a more general view, the discussion on the pros and cons of using ChatGPT is still taking the stage in AI for Education as of mid-2023, with some specific applications of AI/ML already in place or on the horizon for this sector. As in all the other sectors, AI/ML is not structurally integrated with the core components of Education. The tools we have are addressing specific problems—with different rates of success—but they are not integrated into our education’s core infrastructures. Students may have early versions of AI-based tutors. The Khan Academy is working with OpenAI on a more general solution based on the generic ChatGPT data. My team is working on a prototype for my school based on our data. Administrations may have small injections of AI in their workflow management systems, and online classes have some “intelligent” features but no integration, no AI-driven customized education. Universities are quite formal enterprises with a reaction time to innovation that, most of the time, is slower than industry’s.
There are different reasons for that, most of them coming from the mission universities have to educate in a fair and measurable way, under the supervision of federal controlling entities providing accreditations. University accreditation is a process that evaluates and verifies the quality of educational programs and institutions. In the U.S., accreditation is awarded by independent accrediting agencies that periodically examine schools’ curricular offerings to confirm that accredited colleges are providing students with a quality education. Degrees issued by non-accredited schools or programs ultimately have less value for students. For example, they may not be able to get Federal aid or transfer credits to a different school. While accreditation serves an important role in ensuring educational standards and maintaining the credibility of degrees, it can also introduce certain limitations when it comes to customizing Education. One of the major limitations is the rigidity the process introduces in the curriculum, both in terms of content and evaluation/assessment. The process of customization itself would also have to be evaluated and vetted, including the ability of faculty to perform the customization properly. Customization using AI would have all of those issues. Universities, accrediting
bodies and policymakers should address these limitations and update accreditation standards to accommodate the use of AI technologies in Education. I don’t see evidence of this process yet. Also, universities do not have the same cash on hand as some large companies, like Apple with their $55B+, Microsoft with more than $100B, JPMorgan Chase with more than $560B, or Google and Amazon with their $6B invested in Anthropic alone. We rely on research projects to move up the bar of innovation, but the funds are far from what some companies may have. We rely on community-driven innovation, with the Academic community being worldwide. This may drive some pinnacle research, eventually focused more on its theoretical aspects, but not much in terms of using the innovation as part of specific university processes. The US Department of Education released in May 2023 a report titled “Artificial Intelligence and the Future of Teaching and Learning” (U.S. Department of Education, 2023). They collected teachers’ concerns related to the use of AI, and the following is a graph representing them (see Fig. 8.6). The key recommendation in the report related to the use of AI for teaching is that AI should be “Inspectable, Explainable, Overridable”. That means keeping the instructors as the only decision makers in the teaching process. This is a position that is difficult to argue with in the near future. No existing AI is capable enough to provide the same ability as humans in understanding all the variables in teaching: is the teaching in line with the curriculum? What are the specific needs and goals of the students? What is the optimal distribution of content for the specific class? How to leverage external contributions from
Fig. 8.6 AI relevance in education: teachers’ concerns include algorithmic transparency; user control of data; engaging stakeholders in diversity groups; evaluation for bias; a diverse product development team; and defining what “AI” means. Source Author’s elaboration on U.S. Department of Education (2023) data
colleagues or subject matter experts? How to engage each student in the class? How to detect early signs of student issues, either in terms of learning or their well-being? That doesn’t mean that there are no cases of AI for teaching, but they are confined cases and/or outside of formal academia. There are several companies delivering AI-based training/education. Duolingo—a language learning platform—for example, uses machine learning for adaptive lessons tailored to users’ progress and learning styles. Using Natural Language Processing, they provide instant feedback on user inputs, while AI-powered chatbots allow interactive language practice. AI is also used for predictive modeling to anticipate user behavior and optimize learning schedules, and to conduct A/B testing to maximize the effectiveness of teaching methods. In March 2023, they signed an agreement with OpenAI to use GPT-4 technology to implement an AI chatbot that allows users to practice conversation skills with characters in the app. The bot guides users through different scenarios. For example, you can pretend to have a gelato in Rome with a friend or discuss future vacation plans. After users are done with the conversation, they will get AI-powered feedback on the accuracy and complexity of their responses, as well as tips for future conversations. Many universities have labs with very advanced prototypes employing AI. Specific faculty or programs may have elements of advanced AI-driven applications, but not in the mainstream campus operations yet.
9
The Horizon for AI in Our Society
Source Author’s Picture (Faroe Islands) Edited with Picsart
In this section, I will check some of the current trends in technology and estimate how they can possibly impact society. The future cannot be predicted, but in science, there is a high level of consistency over time.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 C. Lipizzi, Societal Impacts of Artificial Intelligence and Machine Learning, Synthesis Lectures on Computer Science, https://doi.org/10.1007/978-3-031-53747-9_9
9.1
Technology Trends
It is always difficult to determine how a composite topic like technology evolves over time. The perception we all have is that technology is accelerating at an increasingly faster pace. I mentioned the rate of adoption of some key technologies and how this increased from electricity to television to the Internet to ChatGPT. Technology can be considered as moving in three stages: early stage, pre-commercial, and commercial. The early stage is when the scientific/academic papers are published; the pre-commercial is when patents are issued (not on the same item as the paper…); the commercial is when the technology is available to buy and use. Papers and patents are the precursors of innovation. In a recent paper published by Nature, the authors (Michael Park, Erin Leahey & Russell J. Funk) show how “Papers and patents are becoming less disruptive over time” (Park, 2023). The authors use data from 45 million papers and 3.9 million patents. The number of papers and patents is continuously growing at a fast pace. The following chart (based on the same paper) shows the growth of the number of papers over time. Patents were relatively flat till the late 90s, sharply growing after that (see Fig. 9.1). To determine the disruptiveness of patents and papers, the authors use a metric introduced in 2017 by Russell J. Funk et al.—the Consolidation/Destabilization, or CD, index (Funk, 2017)—that measures how papers and patents change networks of citations in science and technology. This metric has been used to identify the kind of technology that is a game changer. While this method to pinpoint game-changers in papers and patents has been criticized for several reasons—including its lack of context and its potential use of biased citation data—it gives an idea of the return in terms of disruptiveness of what is presented in papers/patents. The following chart (based on the
[Figure: number of papers per year, 1950–2010, broken out by field: life sciences and biomedicine, physical sciences, social sciences, and technology.]

Fig. 9.1 Papers over time. Source Author's elaboration on data from Park (2023), "Papers and patents are becoming less disruptive over time"
[Figure: CD-index values ("paper relevance") per year, 1950–2010, by field: life sciences and biomedicine, physical sciences, social sciences, and technology.]

Fig. 9.2 Paper relevance over time. Source Author's elaboration on data from Park (2023), "Papers and patents are becoming less disruptive over time"
same paper) summarizes the trend for papers, where the values are the CD index. Patents follow a very similar trend (see Fig. 9.2).

Basically, over the years, papers and patents have moved toward the consolidating side rather than the destabilizing one. This has more than one interpretation. The shallow one is that we publish more because publishing is one of the main currencies of academic careers: you need to publish to advance. We have a potentially innovative idea, and we make multiple papers out of it. Another interpretation is that we are experiencing a plateau of innovation, where what is new is an extrapolation of the past: perfecting it and integrating it with other proximal components.

One of the most innovative recent papers in AI is "Attention Is All You Need" (Vaswani, 2017), which introduced the category of algorithms called "transformers" that gives the "T" to the different "GPTs" we see in the news, including ChatGPT. As innovative as it is, it is an evolution of previously existing methods and algorithms for representing and processing text (such as text vectorization and the attention mechanism).

Long story short: what could we expect from AI in the visible future? The short answer could be a better, more efficient, more usable, more powerful, more "intelligent" version of the present AI. For more context, I bring into the picture a leading technology analyst, the Gartner group. I mentioned Gartner in a previous paragraph and introduced their "hype cycle" representation. In their "Hype Cycle for Emerging Technology 2022" (Gartner, 2022), they provide a graphical and conceptual presentation of the maturity of emerging technologies through five phases, from the technological breakthrough to mainstream adoption, if a given technology ever reaches it.
Some of the most intriguing technologies are estimated to be far from prime time. Digital Twin of a Customer: early stage of innovation, potentially reaching maturity in 5–10 years; Digital Humans: in more than 10 years. Placing a technology so far out in time raises significant doubts about whether it will actually reach the market. Let's consider some technologies that are still in their development phase but have potential impacts on future AI.
9.1.1 Quantum Computing
Quantum computing is inspired by the principles of quantum mechanics. While traditional computers represent data as 0s and 1s, quantum computers use quantum bits, or qubits, that can exist in multiple states simultaneously. When a quantum algorithm is run, the outcome is probabilistic rather than deterministic (like 0 or 1). As a result, quantum algorithms are executed multiple times to obtain reliable results, with the final output being a statistical distribution of possible outcomes.

Quantum computing would be much faster than traditional computing for some workloads because of its parallel way of processing. While this approach could give major advantages for complex problems, it may not do so for low-complexity problems, like web browsing, logic operations, or running most of my Python code. AI, an intrinsically complex problem, could benefit from quantum computing.

One of the main limitations of current AI systems is the long duration of training. ChatGPT required about a week to train. This makes it difficult to have a system "learning" in real time. Training is currently a sort of batch process: collect the data, train the system, and use it. More data, train again. A super-fast training speed could give systems the ability to train in real time, meaning a sort of continuous "learning". The algorithms specifically designed for quantum computers are still at an early stage, with no significant real-life applications yet.
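To make the probabilistic nature of measurement concrete, here is a toy classical simulation, in plain Python, of repeatedly measuring a single qubit held in an equal superposition. The amplitudes, seed, and shot count are illustrative choices, not tied to any real quantum hardware or library:

```python
import math
import random
from collections import Counter

random.seed(0)  # fixed seed so the sketch is reproducible

def measure(amp0, amp1):
    """Collapse the state amp0|0> + amp1|1> to a classical 0 or 1.

    Uses the Born rule for real amplitudes: P(0) = amp0^2 / (amp0^2 + amp1^2).
    """
    p0 = amp0 ** 2 / (amp0 ** 2 + amp1 ** 2)
    return 0 if random.random() < p0 else 1

# Equal superposition, as produced by a Hadamard gate on |0>
amp = 1 / math.sqrt(2)
shots = Counter(measure(amp, amp) for _ in range(10_000))
print(shots[0], shots[1])  # roughly 5000 / 5000
```

A single run of `measure` tells us almost nothing; only the distribution over many "shots" does, which is why quantum algorithms are executed repeatedly, as described above.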
9.1.2 Distributed Intelligence
This is about decentralizing computational resources, and "intelligence" in particular, across multiple interconnected devices. Distributed intelligence is a combination of physical distribution (via distributed devices, as in so-called "edge computing") and models to interconnect the different parts of a global intelligence. As of today, the vast majority of AI/ML systems are centralized, with the "intelligence" residing on remote servers. Alexa and ChatGPT are examples. Edge computing is not limited to AI/ML applications but is an architecture aiming to process data on edge servers or other local computing devices. The motivations for this
architecture are reducing response time, lowering bandwidth requirements and improving privacy. The so-called "Internet of Things" (IoT) is an example of edge computing. In IoT, interconnected devices and sensors collect and exchange data. IoT devices can have built-in processing capabilities, allowing them to perform computation and analysis on the data they generate. We are seeing some of this trend toward distributed computing with our phones or smart watches, whose fast processors make decisions on the spot. This allows faster responses even in cases of limited connectivity.

Overall, distributed intelligence has some potential advantages over centralized intelligence. Scalability is one: given a properly modular architecture, more devices can be added as needed to increase the "intelligence". Resilience is another major one: the failure of one element does not imply the failure of the system, which can also be designed so that the other elements compensate for the loss. Energy efficiency could also be in play: not all the elements need to stay active all the time.

A truly synergetic, interconnected use of distributed intelligence is still limited. One of the current applications is swarms of drones, where multiple drones work together to accomplish tasks or missions by sharing information, coordinating their actions, and making decisions collectively. They have been used primarily by the defense industry, but the applications could range from surveillance to agriculture to entertainment, where drones can create synchronized light shows or aerial displays for events. The main advantage of this form of distributed intelligence is its resilience. Swarms of drones are more resilient due to their decentralized and collaborative nature, meaning they can distribute tasks among multiple units.
If one drone experiences a technical issue or is taken out of action, others within the swarm can seamlessly take over, ensuring the mission continues without significant interruption. There is also the possible use of aggregated computing power to process tasks too complex to be solved locally.

Because of the cost of developing large AI systems, we are seeing those developments concentrated in a few entities. OpenAI, Google, Facebook, IBM, and Baidu are some of the current (2022–2023) key players. This concentration could lead to an oligopoly in AI. Federated, distributed computing power and, eventually, distributed intelligence could make the spikes of power required for specific tasks accessible to everybody. A version of blockchain could make the mechanism manageable and transparent. Blockchain is a distributed ledger, and it could provide a layer of security and trust. Distributed intelligence could be developed as a decentralized application (DApp) on top of a blockchain platform.
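A minimal sketch of the federated idea mentioned above, in plain Python: each device refines a shared one-parameter model on its own private data and ships back only the parameter, which a coordinator averages. The devices, data, and learning rate are invented for illustration; real systems operate on full neural networks with FedAvg-style training:

```python
def local_update(w, data, lr=0.05):
    """One gradient-descent step of the model y = w*x on a device's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, devices):
    """Each device trains locally; only the parameters travel, never the data."""
    local_ws = [local_update(global_w, data) for data in devices]
    return sum(local_ws) / len(local_ws)  # coordinator averages the models

# Three devices, each holding a few private samples of the same trend y ≈ 3x
devices = [
    [(1, 3.1), (2, 6.0)],
    [(1, 2.9), (3, 9.2)],
    [(2, 5.8), (4, 12.1)],
]
w = 0.0
for _ in range(50):
    w = federated_round(w, devices)
print(round(w, 2))  # converges near 3 without any device sharing raw data
```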
9.1.3 Neuromorphic Computing
Neuromorphic computing is a type of computing designed to mimic the way the human brain works, using networks of artificial neurons to process information. These systems are highly parallel and energy-efficient, making them well suited to complex tasks. Neuromorphic computing could be used to develop highly advanced AI systems capable of learning and adapting in ways similar to human intelligence. This could lead to more sophisticated robots and autonomous systems, as well as more effective medical diagnostic tools and personalized healthcare solutions. The approach could also be used to develop brain-computer interfaces, allowing individuals to control devices and interact with systems using their thoughts and brain signals. We are still far from working solutions, though, due both to the lack of algorithms efficient enough to process brain signals and to the lack of safe and reliable hardware connecting brain and machine. But there are promising prototypes.
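To illustrate the kind of unit these chips implement in silicon, here is a toy leaky integrate-and-fire neuron in plain Python: the membrane potential leaks over time, input current accumulates, and crossing a threshold emits a "spike". The constants are illustrative and not drawn from any specific chip:

```python
def simulate_lif(currents, leak=0.9, threshold=1.0):
    """Return the spike train produced by a stream of input currents."""
    v = 0.0          # membrane potential
    spikes = []
    for i in currents:
        v = leak * v + i          # leaky integration of the input
        if v >= threshold:        # fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # → [0, 0, 0, 1, 0, 0, 1]
```

Note that the neuron communicates only through the timing of its spikes; there is no clocked swap of matrices between memory and processor, which is exactly the property neuromorphic hardware exploits.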
9.2 Case Study 7: Using New Technologies

Image: portrait of the American mathematician Gertrude Blanch, age 38. Source: Wikimedia Commons; author: Momoman7; date: 17 March 2016; licensed under the Creative Commons Attribution-Share Alike 4.0 license; edited using Picsart.
This is the story of Gert, a data scientist living and working in New York City. It's a typical Tuesday morning for Gert. She wakes up to the sound of her AI personal assistant notifying her of her day's schedule. As she gets ready, the chatbot asks her if she wants to hear about the biometric data collected overnight by her new wearable and its trends, highlighting potential critical issues. Sleep patterns, heart rate, and stress levels, but also data collected by the recent sweat sensors, can reveal abnormalities in glucose levels, electrolyte balance and hormone levels.
She also asks her chatbot to check the status of the distributed computing system she set up to help process large datasets for her work. Using this system, she is able to run complex machine-learning algorithms and models much faster than she ever could before. The system uses spare computing power from thousands of devices, including smartphones and IoT devices, aggregated into a decentralized network managed by an AI.

Before breakfast, Gert exercises in the gym on the top floor of her building. At the gym, she wears a neuromorphic headset that adapts to her brain waves to provide a more personalized workout routine. She uses the headset to monitor her brain activity and optimize her workout performance. The headset also interacts with the AI-driven virtual reality capabilities of the training machine, providing Gert with a customized, realistic training session. In her workout, Gert is testing a new brain-computer interface. The interface detects Gert's intentions while she moves in the virtual reality, challenging her with additional training tasks. The interface is also helping Gert recover from a recent injury, giving her real-time feedback on the proposed challenges.

While drinking her coffee before heading out to work, she asks her chatbot to connect to the city transportation system to schedule her commute. Once at her office location, she enters the lobby, which is equipped with facial recognition technology for security. Gert takes the elevator to her floor; it has been programmed to recognize her voice and takes her directly to her office. Gert's computer is a quantum computer she uses to run simulations and test various algorithms, enabling her to explore new and more complex models that were previously impossible to develop on classical computers. She also collaborates with colleagues from around the world through a distributed intelligence system that connects researchers and allows them to share data and ideas.
9.3 Fact Check 6: Using New Technologies
Let's talk about what Gert could get today.

Personal assistants. We have several applications available today. Let's talk about two of the most popular: Siri from Apple and Alexa from Amazon. They both use AI for some functions, like speech recognition, understanding (some of) the context of our requests, and some level of customization. They are both meant to work within their brand ecosystems, with Siri processing most of the data on the device and anonymizing the data sent to Apple's servers for processing, to provide increased levels of privacy. Alexa has a higher level of conversational capability, but in both cases this is far from what we can get from current Large Language Models. Following the launch of ChatGPT in November 2022, OpenAI created a version of their LLM for mobile phones. As of today, this is the closest application we have to an AI-based personal assistant. So far, it is not integrated with our personal data, such
as contacts, agenda, and to-do lists. We can ask it how to prepare a meal, but not how to optimize our time. Amazon, Apple and Google are all working on enhancing their assistants with LLMs.

AI to optimize computing power. All the major providers of cloud computing are using AI to better deliver their services. Amazon Compute Optimizer, for example, is a service offered by Amazon Web Services (AWS) that recommends optimal AWS resources for workloads, using machine learning to analyze historical utilization metrics in order to reduce costs and improve performance. Google has "Google Cloud Recommender". This service uses machine learning to provide personalized recommendations to optimize resources and policies in Google Cloud.

Wearables. We all know the capabilities of the "smart" devices available today on the market, from Apple/Android watches to our phones. What we see is a growing use of ML in those devices. ML is used to improve the accuracy and functionality of wearables in a number of ways, such as identifying and classifying the different types of data their sensors can collect, like heart rate, steps taken and sleep patterns. This can help them detect and predict patterns and identify, for example, potential health problems early on. It can also be used to provide personalized experiences, for example recommending specific exercises or activities based on a user's individual goals.

When technology is applied to people's health, there are, rightfully so, regulations to make sure the application is safe and effective. In the US, the FDA plays a significant role in the development and application of wearable technology in healthcare as part of its regulatory mandate. The FDA plays its role before and after the approval of the device, covering the whole life cycle of the product. This is to say that, regardless of whether the technology is mature, getting advanced, ML-enabled devices to market is a relatively long process. Depending on the features to be implemented on the device, the whole process could take several years.

Another delaying factor is privacy. When privacy is related to healthcare, it rapidly becomes a problem with multiple aspects. Regulation, for sure: the Health Insurance Portability and Accountability Act (HIPAA) in the US requires strict compliance in how this data is collected, stored and shared, and ensuring compliance can be a complex and time-consuming process. Data ownership is another: who owns the data collected by wearables, and who has access to it? User trust is a third: if users don't trust that their data will be kept private and secure, they may be reluctant to use wearables in healthcare.

We are seeing a growing use of edge computing principles in wearables. This means providing more computational power to the devices, making them able to perform tasks locally without connecting to remote resources. This would help the device's acceptance and efficiency in different ways: faster responses, increased security and privacy, and higher resilience are the main advantages of this approach. For example, the Apple Watch Series 7 has a processor that includes a Neural Engine for running ML tasks, providing features like fall detection, irregular heart rhythm notifications and sleep tracking.
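As a sketch of the kind of on-device pattern detection described above, the following plain-Python routine flags heart-rate readings that deviate sharply from the wearer's recent baseline. The window, threshold, and readings are invented for illustration; a real device would rely on clinically validated models, not this toy rule:

```python
import statistics

def flag_anomalies(readings, window=5, z_limit=3.0):
    """Return indices of readings more than z_limit standard deviations
    away from the rolling baseline built from the previous `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        sd = statistics.pstdev(baseline) or 1.0  # avoid division by zero
        if abs(readings[i] - mean) / sd > z_limit:
            flagged.append(i)
    return flagged

# Resting heart rate in bpm, with one obviously abnormal spike (invented data)
bpm = [62, 64, 63, 61, 65, 64, 63, 118, 64, 62]
print(flag_anomalies(bpm))  # → [7]
```

Because the whole computation is a handful of arithmetic operations over a short window, it can run locally on the watch, in line with the edge computing argument above.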
Many severe acute health conditions are highly dependent on timely detection. The earlier these conditions are identified, the better the prospects for survival and successful recovery. Wearables are going to play an increasing role in facilitating early detection of these conditions. This is a key area of focus for developers of wearable devices, and it is a topic that has been extensively explored in recent academic research.

Neuromorphic headsets. Neuromorphic computing in general is still in its early stages, with several major companies, including IBM, Intel, NVIDIA and Qualcomm, investing in neuromorphic computing research. There are different elements involved in this technology, from hardware/chips (the focus for companies like Intel, NVIDIA and Qualcomm) to software. So far, the two elements go side by side.

Neuromorphic chips are different from traditional ones. Traditional chips, such as Central Processing Units (CPUs) and Graphics Processing Units (GPUs), are designed on the Von Neumann architecture, where the memory and processing units are separate. Neuromorphic chips are designed to mimic the architecture of the human brain, with processing and memory integrated together, just like neurons and synapses. This gives them a high degree of parallelism in their processing, making them particularly suited for machine learning tasks such as pattern recognition. They also have a major advantage over traditional computing: they use a fraction of the power, because they do not have to transfer data from memory to processing. They are in line with the idea of Edge AI, that is, decentralizing AI. Neuromorphic chips also aim to mimic the ability of the human brain to learn and adapt.

The novelty of their architecture generates issues in terms of software. Traditional computers use a sequential programming model where instructions are executed one after the other, while neuromorphic systems function more like a network of neurons, with information processed in a distributed, parallel manner. This requires a shift in the programming paradigm. Vendors are starting to create a sort of operating system, interfacing their neuromorphic chips with existing algorithms and programming languages, such as Python, currently the most commonly used language for AI/ML. The algorithms we use in traditional computing do not work as-is on neuromorphic hardware. Artificial neural networks/deep learning, on the other hand, would work in a rather natural way compared to what happens on current architectures, where we model neural frameworks by swapping large matrices between memory and processing units. Nevertheless, companies and academia are developing this technology, bringing it closer to commercial availability.

Based on trends in Google search, the topic has been picking up in popularity since the end of 2021, when Intel announced their new "neuromorphic" chip, the Loihi 2, along with an open-source software framework for the neuromorphic community called Lava. Loihi has more than 1 million artificial neurons and 120 million synapses. As of 2023, Loihi is not a commercial product and is only available for research through Intel. How "neuromorphic" is this chip? Mimicking the brain doesn't mean being just like a brain but using a model of the brain. Artificial neural networks were an early attempt to do so, but they focused on the software part only,
running on traditional chips. Their learning mechanism is pretty basic, working primarily on the weights of the connections in the network. Loihi uses a biologically inspired mechanism named Spike-Timing-Dependent Plasticity (STDP), through which connections between neurons (the synapses) are strengthened or weakened based on the precise timing of their activations.

BrainChip's Akida is another neuromorphic chip, with general availability set for the end of 2023. Akida has a similar number of artificial neurons and synapses to Loihi. Akida has been designed for edge computing, meaning it could reside in local devices, such as future wearables. Considering the potential low energy consumption, this seems a reasonable target. Most of the existing prototypes of neuromorphic computers are focused on traditional neural network applications, such as image recognition. I'm working with my lab to use them on Large Language Models.

All of this concerns "generic" neuromorphic computing. We are quite far from the commercial availability of neuromorphic headsets. How could headsets benefit from neuromorphic chips/technology? Neuromorphic technology is best applied to cases that are parallel, streaming/evolving in time, requiring adaptive learning, and computationally intense. Handling sensors, providing real-time feedback, analyzing augmented reality, and eventually leveraging brain-computer interfaces: all of this would fit well with future generations of headsets. In June 2023, Apple introduced their "Apple Vision Pro". This mixed/augmented reality device, described by Apple as a "spatial computer", uses advanced but traditional chips. A future version of this device would benefit a lot from being neuromorphic. It would need less energy, be faster, be connected to more sensors, and provide a higher degree of interaction with the user.

The next step would be to make similar devices evolve into cyber-physical systems. Cyber-physical systems are systems that integrate computation, networking and physical processes. Embedded computers and networks monitor and control the physical processes, with feedback loops in which physical processes affect computations and vice versa. Examples include robotic systems, medical monitoring systems, and eventually a future generation of headsets becoming the next "spatial computers".

So far, brain-computer interfaces (BCIs) are at a relatively early stage. Neuralink experimented with a solution based on implanting tiny threads into the brain. Apart from people's acceptance of this type of solution, the results are mixed at best. There are non-invasive BCI devices, but they have lower resolution than invasive methods. These non-invasive BCIs use the same approach as EEG (electroencephalography) to measure electrical activity in the brain. There are some existing applications, such as IpsiHand by Neurolutions, where an EEG sensor headset detects impulses from the brain and communicates them to a robotic brace. Emotiv, Dreem, Muse, Mendi and NeuroSky have been selling consumers their EEG-based devices for a while. Most if not all of them have a phone app for better interaction. They provide different forms of neurofeedback, and they have been applied to different purposes such as increasing focus, sports training, meditation, neurostimulation, and sleep improvement. There are also "open" devices designed for
programmers who want to use the interface to interact with other devices via brain control. Neurosity Crown is a $2,000 device with a relatively large community of developers providing vertical applications using the software development kit (SDK) the company provides. A major player in the "open" environment is OpenBCI, which provides open-source tools for neuroscience and biosensing, including an SDK for developers. They offer a range of hardware for brain-computer interfacing, including headsets, boards, sensors, and electrodes. Some devices are more on the medical side of the spectrum, even if they are still more on the research side; BrainGate, Natus NeuroWorks and Openwater are some of the players. What most of them have in common is a base of machine learning to detect and interpret the signals.

It is reasonable to say that these devices will evolve over time, with the key element being the quality and the right positioning of the sensors. In research and clinical settings, EEG-based BCIs can achieve high levels of accuracy. However, in real-world conditions, and especially with consumer-grade devices, the accuracy is still lower in most cases. ML can and will definitely help augment the quality of the signals and interpret the results. Pretty much all the vendors mentioned above are going in that direction.

AI-based fitness. As mentioned in a previous section, this is an area where AI will have a major impact. Let's focus on the user side. Training customization and virtual/augmented reality workouts are the two main segments. Analytics and performance tracking are other segments, but I see them as part of training customization. The COVID-19 pandemic accelerated the digital transformation of the fitness industry, with some early applications of AI, certainly paving the way for more extensive future use of AI. Let's start with online training. There are several applications available for mobile devices claiming to use AI to customize their workouts.
Fitbod is one of the most popular, with more than 5 million downloads. According to fitness reviewers like Fitness Drum, Fitbod uses AI to customize workouts in terms of progression, recovery and programming. Another popular one is FitnessAI, aiming for the same goal of customized workouts. In both cases, the level of customization is not quite what our Gert is getting. Right now, the customization is more like clustering users, without a truly granular level of personalization. One of the issues is the limited integration with fitness trackers/wearables. While wearables are becoming more and more accurate in monitoring their users, with an increasing number of sensors, fitness apps still make limited use of the input they provide.

Distributed intelligence systems. The one used by Gert is a sort of AI-driven collaboration/workflow management system. This is another area where the COVID-19 pandemic had a significant impact, giving the entire sector a strong acceleration. We all became used to working and collaborating remotely, using platforms like Slack, Teams and Zoom. All of them now use some degree of AI, or at least ML algorithms. Slack uses AI to improve its search functionality, manage permissions, ensure secure communication between different organizations, and improve security by detecting
anomalous behavior and protecting against threats. Zoom's AI assistant helps users draft emails and chat messages and summarize meetings and chat threads. Zoom IQ for Sales is a conversation intelligence solution that analyzes data from sales conversations and provides critical insights via an AI engine. This feature could be used, for example, for AI-powered coaching, identifying areas of improvement for a sales rep in the sales process. Microsoft Teams is probably the one with the most advertised use of AI. Following Microsoft's investment in OpenAI, they are percolating OpenAI's technology into many of their products, including Teams. Using GPT provided by OpenAI, Teams can provide live captions for meetings, generate summaries of meetings, and translate conversations into different languages. GPT is a language model, and while it could be a valuable solution for improving conversation, it would not make a direct difference in collaboration/workflow management.
10 Application Trends

10.1 Method
I would divide the potential/future applications of AI into a few categories:

- AI for improving global living conditions
- AI for cost reduction
- AI for better services
- AI for services humans may not do
- The "nice to have" AI
For all of the above, the key is the combination of more available data, better and more available processing, and more powerful, high-level "AI-oriented" tools. As mentioned before, there is a historical trend toward tools that are more powerful and generalized. In the early stage of AI, we wrote the individual algorithms from scratch. Today we use libraries with macro functions that we aggregate to develop models, and we are starting to have pre-developed models that we can adapt and/or integrate into our cases. Tools like ChatGPT are the first examples of more generalized and powerful tools. Having the possibility to use them for complex and generalized tasks, such as a representation of "common sense" or a conversational layer to add to our applications, would make our applications far more powerful.
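The shift in abstraction described above can be felt by writing even a trivial model "from scratch". The sketch below fits a line by hand-coded gradient descent, the kind of code early practitioners wrote routinely; today a single library call such as scikit-learn's `LinearRegression().fit(X, y)` replaces all of it, and pre-trained models push the abstraction one level higher still. The data and hyperparameters here are made up for the example:

```python
def fit_line(xs, ys, lr=0.05, epochs=2000):
    """Fit y = w*x + b by batch gradient descent, written 'from scratch'."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # exactly y = 2x + 1
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```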
10.2 AI for Improving Global Living Conditions
We still have far too large a share of the population living in plain survival mode. According to the World Bank, as of 2020, 9.3% of the world's population lived in extreme poverty (Bank, 2022), struggling to meet basic physiological needs such as food, water and shelter. According to the United Nations, in 2019 about 10% of the global population was undernourished (Nation, 2021), with roughly the same percentage lacking access to safe drinking water. 10% is around 785 million people.

Following the classification from Muneton-Santa et al. in the 2022 paper "Classification of Poverty Condition Using Natural Language Processing" (Orozco-Arroyave, 2022), the above is a measure of monetary poverty. There are also non-monetary aspects of poverty. Non-monetary poverty refers to a lack of access to basic goods and services that are necessary for human well-being, such as adequate nutrition, healthcare, education and safe housing. Non-monetary poverty is measured using a multidimensional approach based on different indicators of well-being, such as access to basic services, quality of housing, social support, and personal safety. This approach recognizes that poverty is not "just" an issue of income but is also influenced by a range of social, economic, and cultural factors that affect people's ability to meet their basic needs and participate fully in society.

AI can help policymakers and service providers better understand the needs of the population in need and develop targeted interventions to address them by discovering patterns of needs and concerns. Analyses can be done at the individual level as well as at the regional or county level. The authors cited above mention the use of AI to analyze nighttime satellite images showing differences in how much nightlight is used in an area and how intense that light is: relatively poor areas show less nightlight. This is just one of the possible sources of data. Social media is definitely another.
A few years ago, we wrote a paper on the increase in domestic violence and child abuse during the pandemic. The paper, in collaboration with UNICEF, used Twitter and Reddit as sources. A similar approach could be part of a more general analysis of the elements contributing to poverty and/or the effects of poverty.

Food waste. According to the United Nations, approximately one-third of all food produced globally is either lost or wasted, about 1.3 billion tons per year (Programme, 2020). The World Bank estimates that 32 billion cubic meters of treated water are lost globally each year due to leakage (Blogs, 2016). The same World Bank estimates that only a fraction of the over 2 billion tons of solid waste generated worldwide each year is recycled or repurposed.
Causes are primarily in the supply chain. From production—unharvesting crops if market prices are too low—to improper storage, inefficient processing, packaging and transportation, retail overstocking, and consumer overbuying. Addressing pretty much all of those causes requires a redefinition of the physical supply chain, with the addition of new nodes for lap storage, transportation, and redistribution. There are also “soft” changes required: safety and quality control, regulation and policies, collaborations and partnerships. AI can help in most of the steps. Optimize agriculture with a “precision” approach to irrigation, fertilization, and pest control. Optimizing the demand with better forecasting systems for both producers and distributors. Identifying patterns and trends in food waste generation, helping businesses and households better understand their waste behaviors and develop targeted strategies for waste reduction. Optimizing the redistribution of food to food-banks and soup kitchens. Identifying food that can be repurposed as animal feed, composting, or renewable energy via anaerobic digestion. Unemployment is another of today’s real drama for many: the International Labor Organization estimates that around 190 million people were unemployed globally in 2021, with countries like Honduras reaching 50% of labor underutilization (Organization, 2023). Unemployment has a huge impact on individuals and on society. The impact on individuals is quite obvious. The impact on society can be devastating as well. To name some of the consequences of the increased income inequality: increased social tensions, reduced tax revenues, reduced consumer spending, reduced social securing income. I discussed some of the impacts of AI on employment, but here I want to focus on what AI could do for the job market in general, not in a specific area. . 
. Skill development and training: personalized learning systems can adapt to individual needs, making it easier for people to gain the skills necessary for employment in a rapidly changing job market.
. Job matching: AI can analyze job postings, applicant resumes, and online applicant information to identify relevant skills and experiences, making it easier for employers to identify qualified candidates and for job seekers to find suitable positions.
. Labor market analysis: AI can analyze labor market data to identify trends, skill gaps, and areas of growth. Policymakers, educators, and employers could use these insights to develop targeted workforce development programs, create more relevant job training initiatives, and fine-tune economic development strategies.
. Remote working: during COVID, most of us benefited from remote working, and the pandemic was a real booster to this way of working. There are still wide margins for improvement that AI could help to fill, with customized and specialized tools such as virtual assistants, collaboration platforms with advanced features like instant language translation and automated summarization, and project and time management support.
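To make the job-matching point more concrete, here is a minimal sketch that ranks candidate resumes against a posting by cosine similarity over word counts. This is an illustration, not a production matcher: real systems would use skill taxonomies and semantic embeddings, and the names and texts below are made up.

```python
# Toy job-matching sketch: rank candidate resumes against a job posting
# by cosine similarity of their word-count vectors.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Lowercase bag-of-words vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_candidates(posting: str, resumes: dict) -> list:
    """Return (candidate, score) pairs, best match first."""
    job_vec = vectorize(posting)
    scores = {name: cosine(job_vec, vectorize(text)) for name, text in resumes.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

posting = "data analyst with python and sql experience"
resumes = {
    "ana": "python developer, sql databases, data pipelines",
    "bo": "graphic designer, branding, illustration",
}
ranking = rank_candidates(posting, resumes)
print(ranking[0][0])  # ana ranks first: she shares python, sql, data with the posting
```

The same idea scales up by swapping word counts for learned embeddings; the ranking interface stays the same.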
10.3
AI for Cost Reduction
In the constant search for reducing costs to improve margins and/or expand the market, AI will play a significant role in optimizing processes, reducing the cost of labor, and optimizing the use of resources. On the processes side, there are relevant margins for improvement in the way we operate. Workflow automation is one. We already have tools automating the flow, but they work in a static way on predefined flows. An AI-enhanced platform could learn from the behavior of the nodes of the flow (us), take specific exceptions into consideration, and adjust the flow accordingly. We already use predictive analytics to better plan scheduled activities. But this is not a yes-or-no matter: even in the areas where we do use them, those analytics could benefit a lot from injections of AI, for example by better modeling equipment failure or customer demand. Going into industry-specific applications, the advantages are even more evident. Let me provide three examples: healthcare, human resources, and sports. I mentioned the possible applications of AI to healthcare in a previous paragraph, but in preventive healthcare in particular, AI could provide a major boost. By detecting medical conditions early, we could dramatically reduce the cost to society, both in human lives and in economic terms. By using a combination of increasingly available health records and data from wearables, AI could determine patterns for early detection of potential diseases or at-risk conditions. Doctors could be alerted by the AI to the patient’s condition, read the details presented in a conversational way by the AI, and act as needed. The doctor could then interact with the AI to reach the patient in the most appropriate way, based on the patient’s current location and activity, and/or schedule an appointment by checking the agendas of both doctor and patient. A widely available video calling tool could make the appointment easier to set.
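Returning to the predictive-analytics point above, demand forecasting in its simplest form can be sketched with exponential smoothing, where the forecast blends each new observation with the running level. This is a toy sketch under an assumed smoothing factor and made-up demand numbers; real planning systems would add seasonality, covariates, and dedicated failure models.

```python
# Minimal demand-forecasting sketch: simple exponential smoothing.
# The level is updated as level = alpha * observation + (1 - alpha) * level,
# and the final level serves as the one-step-ahead forecast.
def exp_smooth_forecast(history, alpha=0.5):
    level = history[0]
    for obs in history[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

weekly_demand = [100, 120, 110, 130]  # illustrative units sold per week
print(exp_smooth_forecast(weekly_demand))  # → 120.0
```

A higher alpha reacts faster to recent swings; a lower alpha smooths noise more aggressively. The same update rule, applied to sensor readings instead of sales, is a common baseline for flagging drifting equipment.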
In human resources, there are two aspects where AI could play a major role: improving productivity and reducing attrition. On productivity, AI could become a 24/7 tutor in a specific field, helping employees do their jobs better. This could also shorten the training period for new recruits. We are experimenting with a similar solution for our students. On attrition reduction, AI could detect patterns in employee behavior that are indications of potential issues, which HR could then address. In sports, and team sports in particular, having the best team available at a given moment is key to playing each game at the best. Most team sports now collect a very large amount of data in every stage of the players’ training and games. AI will be used to refine the criteria to select players before the season, a sort of advanced “Moneyball” (a 2011 movie based on a 2003 book of the same name by Michael Lewis) powered by better models. Again, keep in mind that “models” are a combination of data and algorithms: more data and better algorithms make better models. During the season, AI will help coaches predict player performance and injury risk, to make better decisions on player management and game strategies. Players are assets for their teams, and
increasing their value and making them play at their best increases the value of the team.
10.4
AI for Better Services
AI can be pervasive in any industry. I mentioned some examples in previous paragraphs. Let me now go to more daily impacts on our lives. Customer support is the first application that comes to mind. We all have bad stories with Interactive Voice Response (IVR) systems. Those systems have been around for a few decades, but they can really, finally, become less frustrating with the use of AI. Generic systems like ChatGPT could be “fine-tuned” by providing question-answer pairs. The combination of the large amount of data the AI could use for training, the possibility of fine-tuning it, and the embedded conversational layer can really make IVR finally successful in interacting with customers. The AI we see today is text-based, with no voice. Going from text to voice is not an issue, with a lot of valuable solutions already available. Voice-to-text can be trickier due, for example, to customers’ accents and to the context of the words. This is another area where AI will make it work. Travel and hospitality is another sector that can benefit a lot. Travel operators could provide personalized recommendations to their clients, analyzing patterns in their travels, but also other behaviors that can be collected from other sources (privacy permitting). They can also provide virtual concierges, assisting travelers with information, bookings, and personalized suggestions for dining, shopping, and local experiences, improving customer satisfaction and reducing the need for human staff. Using AI, travel operators can also analyze market demand, competition, and other factors to optimize pricing for their travel services, ensuring the best prices for customers and maximizing their profitability. Retail can also benefit a lot from AI. On top of what was already said for travel and hospitality, like recommendations, pricing, and virtual shoppers, visual search and augmented reality would make a significant impact.
AI-powered visual search and augmented reality technologies can help customers find products more easily and visualize how they might look on them or fit in their homes, improving the shopping experience and increasing the likelihood of purchase. The in-store experience is another one. AI could provide real-time information on product availability, personalized offers, or promotions. It could also analyze customer traffic patterns, time spent in each zone, and other data to optimize store layouts and product placements, leading to a more enjoyable shopping experience and possibly increased sales.
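Going back to the IVR idea above: “fine-tuning with question-answer pairs” usually means preparing a file of example dialogues. The sketch below writes such pairs in the JSONL “messages” layout used by common chat fine-tuning APIs. The file name, the pairs, and the system prompt are illustrative, not from any real deployment.

```python
# Sketch: turn support question-answer pairs into JSONL fine-tuning records.
# Each line holds one example conversation in the "messages" chat layout.
import json

qa_pairs = [
    ("How do I reset my password?", "Open Settings > Security and choose 'Reset password'."),
    ("What are your support hours?", "Support is available 9am-6pm, Monday to Friday."),
]

def to_finetune_records(pairs, system_prompt="You are a helpful support agent."):
    """Build one training record per (question, answer) pair."""
    records = []
    for question, answer in pairs:
        records.append({"messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]})
    return records

# Write one JSON record per line (the JSONL convention).
with open("support_finetune.jsonl", "w") as f:
    for rec in to_finetune_records(qa_pairs):
        f.write(json.dumps(rec) + "\n")
```

The IVR call flow then reduces to speech-to-text, a query to the fine-tuned model, and text-to-speech on the reply.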
10.5
AI for Services Humans May not Do
This is a wide area where AI-driven systems can make a big difference, both in safety and in stretching human capabilities. Let me go through some of the possible applications. Working in unsafe situations: the combination of robotics and AI could open new possibilities for deep space and underwater exploration, for example. Those systems could reach destinations too risky for humans, explore, and collect information, laying the groundwork for commercial settlements. Disaster response and search and rescue is another one. AI-driven robots and drones can be deployed in disaster-stricken areas to locate and rescue survivors, assess the damage, and deliver emergency items and food. This would minimize the risk to human responders. Similarly, for handling hazardous material, AI-driven security systems and drones can do the actual transportation and also monitor and patrol high-risk areas, such as military installations, nuclear facilities, and borders, providing continuous surveillance and reducing the risks to human personnel. We humans have five senses and structural limitations due to the nature of our body. In our time on this planet, we have developed tools and technologies to enhance our capabilities. We can communicate with people across the globe instantly with a device in our hand. We can reach places thousands of miles away in hours or go to the moon. We can build very large and heavy objects in a relatively short time. We can see inside our body and, somehow, across the universe. Still, we cannot directly communicate with people speaking a different language or get a complete sense of what a large group of people may think. We still need to manage our transportation systems in terms of arranging availability, scheduling and providing maintenance, and handling dangerous situations that may occur along the way.
Our manufacturing capabilities are still limited by the availability of humans handling the supply chain, managing multiple robots in manufacturing environments, and controlling quality. Our vision of the universe could benefit from systems enhancing the images, adapting the optics to the effects of turbulence, and creating predictive models to determine the behavior of astronomical objects. Similarly, medical imaging could be improved by better image reconstruction and enhancement. The equivalent of a predictive model could help humans detect subtle changes or patterns indicative of specific diseases or conditions. In each of those possible enhancements, AI can be the key. Which type of AI is difficult to say. A pure Machine Learning one, entirely based on data and pattern matching, could help, but a domain-specific system with a form of common sense would be far more appropriate to help humans perform specialized tasks.
10.6
The “Nice to Have” AI
While I can see what is “really essential” in our lives, I struggle to define what is “not really essential”. My Dad fought for his life in WW2, and I’m spending days finding a way to contain cheating in my classes or to include common sense in my NLP systems. Over time, society has changed, revising priorities and adapting them to what the environment, in a broad sense, is demanding. Abraham Maslow, in his 1943 paper “A Theory of Human Motivation” (Maslow, 1943), proposed a hierarchy of needs, with each level representing a different category of human needs. From the bottom of the hierarchy upwards, the needs are physiological, safety, love and belonging, esteem, and self-actualization. A pyramid is the right representation here (see Fig. 10.1).
Fig. 10.1 Hierarchy of needs. Source Wikimedia Commons; Description: Maslow’s hierarchy of needs; Source: https://en.wikiversity.org/wiki/File:Maslows_hierarchy.png; Author: User: Tigeralee; Date: 23 October 2015; This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license
In his younger years (he passed away at the age of 94), my Dad didn’t have the luxury to move to the top levels, barely touching the “love and belonging” one. While I addressed the “need to have” part above, let me now try to define the current and possible coming areas of the “nice to have” part where AI could help. “Nice to have” needs are things that are not essential for survival but can enhance our quality of life and our place in society and provide additional comfort or convenience. They would occupy the top spots of the Maslow pyramid: entertainment, luxury items, convenience services, high-end electronics, and non-essential services like spa treatments, personal assistants, and gourmet food. All of those are things that make us enjoy life more and, on the other side, they are a market for those providing those items/services. AI could help on both sides. On the consumer side, AI could power personal shopping assistants to help users find the exact luxury items they’re looking for. Chatbots and virtual assistants can help schedule appointments for non-essential services like spa treatments. AI can also be used to optimize and personalize non-essential services, such as recommending customized menus at high-end restaurants or tailoring workout plans (assuming we consider this a nice to have). On the providers’ side, AI can help businesses and marketers better understand the consumers who are looking for these “nice to have” items and services. By analyzing vast amounts of data about consumer preferences and behavior, AI can provide insights into what luxury goods and services are in demand at a given moment and forecast trends: what features and attributes are most appealing, and how to effectively market and sell these products and services to potential customers.
10.7
Case Study 8: Promoting and Supervising AI
The international organizations most equipped to coordinate worldwide efforts on AI joined forces to make this technology as beneficial as possible for society. Under the United Nations umbrella, the Partnership on AI (PAI), the International Telecommunication Union (ITU) and the Global Partnership on Artificial Intelligence (GPAI) created MAXAI, a worldwide group to maximize the benefits of AI. MAXAI is well funded, thanks to a combination of public and private money. The government contributions are managed by the UN and based on the countries’ level of AI development. Companies and organizations involved in AI research, development, and implementation contribute to MAXAI through donations, sponsorships, and in-kind support. In exchange, they benefit from access to research, resources, and collaborations facilitated by the organization. MAXAI also receives donations from individuals interested in supporting the organization’s mission. A good portion of the individual donations comes from crowdfunding, with individuals funding specific initiatives and receiving customized versions of the funded project. The crowdfunding is managed by an AI. It handles campaign creation and optimization; personalized outreach, analyzing donors’ interests, social
media profiles, and previous donation history; promotion; feedback, adjusting project goals in real time; and fraud detection/prevention. It also has a chatbot available to donors, answering questions and providing customized updates. MAXAI also receives research grants from national and international agencies to fund specific projects. Very large projects with worldwide impact are financed by the World Bank. The main goal of MAXAI is to provide individuals, companies, and research institutions with an alternative to the technology oligopoly that was in place. Another major issue MAXAI is addressing is bias in AI models. They developed a system to determine the provenance of the data and to let users know upfront the type of bias the data could have. They also provide an NLP-based solution to analyze the data used to train a system and extract its “cultural DNA” to assess and prevent bias. This is done by evaluating the underlying patterns, assumptions, and values that are implicitly encoded in the data. MAXAI services are along four lines:
. Technology. They develop AI components, data and algorithms, and make them available for a fee that varies depending on the user. This is a sort of open-source approach, but it is managed by an AI that is able to trace users and update the components they acquired, as well as recommend other components that could make the ones they already have more valuable. The system has a chatbot helping users determine the exact components they need and provides tutoring on their use. The system uses an advanced form of blockchain that is able to trace the components over their lifetime and allocate compensation to contributors based on the value of their contributions. The value is calculated by an AI that uses a combination of historical data and a semantic evaluator of the relevance/disruptiveness of the contribution.
This blockchain system makes MAXAI a platform that some developers use as their marketplace to sell their components.
. Computing resources. Developing large models requires so many resources that, in the past, only a few companies could afford it. MAXAI provides those resources using a chatbot that is able to calculate the cost from the user’s description of the problem. MAXAI uses a combination of its own resources and distributed computing, where the nodes of the distributed system are a mix of institutions and individuals. The P&L of the system is managed by the same AI-supported blockchain system, implementing a combination of power bartering and actual money exchange. When no bartering option is available, the cost is calculated by an AI that considers who the user is (nonprofits get the lowest rates/nominal fees) and what the goal of the development is, with the possibility to pay using credits from future revenues. Those are calculated with a mechanism similar to factoring, with the risk determined by an AI that analyzes historical data on the type of application, the user, and the overall market conditions.
. Policy recommendations and guidelines: MAXAI develops policy recommendations, guidelines, and best practices for responsible AI development and deployment. These
resources help governments and organizations create AI policies and regulations that align with international norms and standards. MAXAI uses an AI that started with a corpus of past policies, monitored the evolution of the use of AI, created possible scenarios, revised them using a chatbot with human subject matter experts, and refined the recommendations/guidelines.
. Support for startups, entrepreneurs, and researchers. MAXAI provides resources to support new initiatives. Support includes education and training, technology components and computing resources, collaboration and networking, tutoring, and funding opportunities. The support is managed by an AI that interacts with users, determines their needs, and assesses the risk of providing the different levels of support. Recently, MAXAI started a service to support users in acquiring resources, both human and not. The service is supported by an AI handling group buying, resource sharing, optimal provider search, and contracting. MAXAI is also playing an institutional role along the following lines:
. Guiding the development and implementation of AI policies and guidelines that promote responsible AI practices.
. Monitoring and evaluating the progress and impact of AI initiatives.
. Supporting international organizations, governments, and civil society groups to promote global collaboration on AI policy.
. Advising MAXAI’s stakeholders on emerging AI technologies, trends, and potential risks, providing strategic guidance.
They recently started working on a certification process for upcoming AIs that could guarantee that a “MAXAI certified” AI has been developed according to the ethical, social, and legal requirements defined by MAXAI’s stakeholders.
10.8
Fact Check 8: Promoting and Supervising AI
As mentioned in a previous paragraph, there are international organizations moving in the direction of overseeing AI. In the USA, the National Artificial Intelligence Research and Development Strategic Plan was developed by the White House Office of Science and Technology Policy and the National Science and Technology Council. The AI for American Act was introduced in Congress in 2019. The National Institute of Standards and Technology (NIST) developed a series of guidelines for the responsible use of AI. The discussion about regulating AI escalated after the introduction of ChatGPT by OpenAI, culminating with a one-sentence statement released by the Center for AI Safety, a nonprofit organization, stating, “Mitigating the risk of extinction from AI should be a
global priority alongside other societal-scale risks, such as pandemics and nuclear war”. The statement was signed by more than 350 executives, researchers, and engineers working in AI. As previously discussed, ChatGPT-like systems are far from being a potential threat to humanity, and many, including myself, do not believe that a generalized AI is on the horizon. Some people believe that AI could become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down. A few months before that statement, more than 1,000 technologists and researchers had signed another document calling for a six-month pause on the development of larger AI models, citing concerns about “an out-of-control race to develop and deploy ever more powerful digital minds”. Beyond the alleged existential risks of AI, there are real issues, such as bias, the formation of technology oligopolies, and privacy, that would need some form of regulation. Industry wants to maximize profits, politicians have political agendas, countries may want geopolitical leverage, and individuals have their own interests. To make all of them respectful of all the others’ needs and interests, technologies like AI should be regulated. This is the theory. There are several elements impacting social life in the world that would benefit from global regulation: climate change, human rights, finance, cybersecurity, and disarmament, to name some. Lack of global consensus, national interests, limited enforcement, and limited international cooperation are some of the reasons why we are failing to achieve goals that would make the lives of a large part of the world’s population much better. These same reasons make the regulation of AI problematic. The closest approach to international regulation so far is the European Union Artificial Intelligence Act (AIA), which would regulate the development and use of AI in the European Union.
The AIA would classify AI systems according to a perceived level of risk, from low to unacceptable. The regulation would set requirements for high-risk AI systems, such as risk assessments, the use of high-quality data sets, and the release of information to users. Examples of unacceptable risks would be cognitive behavioral manipulation of people or of specific vulnerable groups (for example, voice-activated toys that encourage dangerous behavior in children) and social scoring, such as classifying people based on behavior, socioeconomic status, or personal characteristics. The AIA also proposes limitations for large language models like ChatGPT: disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training. OpenAI is questioning the restrictions. The AIA was proposed by the European Commission in April 2021. The regulation is still under negotiation between the three branches of the European Union: the European Commission, the Council, and the Parliament. Even if it is approved, will it be enforceable? What if an unacceptable AI is developed in a country with no agreement with the EU? As a term of comparison, UNICEF estimates that around 152 million children are engaged in child labor globally. According to the United Nations, 1 in 3 women worldwide has experienced physical or sexual violence in their lifetime. According to the Uppsala Conflict Data Program, there
were about 50 active armed conflicts worldwide in 2020, resulting in significant human rights abuses and violations. The wait for MAXAI may be long.
11
Social Trends
11.1
Overview
Social trends play a critical role in influencing the decisions that individuals, organizations, and governments make. Social trends are the changes that occur in the attitudes, beliefs, behaviors, and values of people in a society. These trends are driven by various factors such as economic, political, technological, and cultural developments, and they can have a profound impact on our daily lives. The increasing role of technology in our lives is one of the most significant social trends. Technology has been crucial for society for centuries. Humans have been using technology to make their lives easier and more efficient since prehistoric times, when early humans used tools made from stones, bones, and wood for hunting and gathering food. The invention of the wheel and the use of fire also revolutionized the way humans lived and interacted with their environment. The development of agriculture and the use of domesticated animals represented further significant technological advancements that allowed humans to settle in one place and build civilizations. Technological advancements have continued to shape and transform society, from the invention of the printing press, to the steam engine during the Industrial Revolution, to the development of computers and the internet in more recent times. Technology has had a profound impact on nearly every aspect of human life, including communication, transportation, healthcare, education, and entertainment. Technology has become increasingly important in recent years due to several factors. The world is now more interconnected than ever before, with people, businesses, and governments all relying on technology to communicate and conduct business across borders. This has led to a need for more efficient and effective ways to process and share information, as well as to develop new technologies to solve problems and drive innovation.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 C. Lipizzi, Societal Impacts of Artificial Intelligence and Machine Learning, Synthesis Lectures on Computer Science, https://doi.org/10.1007/978-3-031-53747-9_11
Let me recall some of the social trends that are shaping our world.
11.2
Aging Population
Description: As people live longer, the proportion of elderly individuals in the population is increasing. It is a result of improvements in healthcare and living conditions, as well as declining birth rates. As the elderly population grows, there are fewer young people to support retirement programs through taxes and contributions. This can lead to budget shortfalls and the need for policy changes. This trend is having significant impacts on healthcare, retirement planning, and social programs. AI impact: This trend can have a high correlation with AI, which can be a way for society to better meet the needs of an aging population while also improving the overall quality of life for seniors.
11.3
Polarization
Description: Polarization is considered one of the major social trends in recent years. Polarization refers to the increasing division of society into distinct and often opposing groups, often along political or ideological lines. In the United States and many other countries, political polarization has been on the rise, with people becoming increasingly divided over issues such as race, immigration, and social values. This has been fueled by a number of factors, including the rise of social media, the decline of traditional media, and the increasing complexity of modern life. Polarization has a number of negative consequences for society, including the erosion of trust in institutions and the breakdown of social cohesion. It can also make it more difficult for governments to address important issues and solve problems as people become more entrenched in their positions and less willing to compromise. AI impact: It is difficult to say whether AI will have a more positive or negative impact on polarization. Algorithms can reinforce bias and discrimination, and echo chambers can be created on social media platforms, leading to more extreme views and less exposure to diverse perspectives. AI can also be used to target and manipulate individuals and groups for political or commercial gain, furthering polarization. On the other hand, it can be used to analyze and better understand the underlying social and economic factors that contribute to polarization. It can also be used to facilitate communication and understanding between different groups by providing new channels for dialogue and information sharing.
11.4
Income Inequality
Description: The gap between the rich and poor is widening in many countries, creating social and economic tensions. The top 1% of earners in the United States now earn a larger share of the country’s income than at any time since the 1920s (Wikipedia, 2023), and the gap between the rich and poor has been widening. Societies with higher levels of income inequality tend to have higher rates of social problems, including higher rates of crime and lower levels of social mobility. Income inequality can also lead to political instability and undermine the legitimacy of democratic institutions. AI impact: AI can have both a positive and negative impact on this social issue. On one hand, AI could increase income inequality by replacing low-skilled jobs and concentrating wealth in the hands of a small group of companies and individuals who develop and own the technology. This is what we saw in the past when potentially disruptive technologies came into play: the Industrial Revolution with steam power and new machines, and the digital revolution with digital devices. The automation of tasks that were previously performed by humans could lead to job displacement and unemployment, as we discussed in previous chapters. This could worsen income inequality by reducing the number of available jobs and lowering wages for those who are still employed. It is likely that this may impact lower-level or more repetitive jobs that are often associated with lower-income individuals, who may not have the resources to change their skill set. On the other hand, AI could help reduce income inequality by improving access to education, healthcare, and other essential services. For example, AI could be used to develop personalized learning platforms that help students from disadvantaged backgrounds catch up to their peers, or to develop diagnostic tools that help doctors provide more accurate and efficient care, potentially expanding it to currently underserved segments of the population.
AI could also help reduce bias and discrimination in hiring, lending, and other areas, which could help reduce the marginalization of currently underrepresented groups.
11.5
Urbanization
Description: More and more people are moving to cities, creating crowded and diverse urban environments. This trend is driving changes in transportation, housing, and social dynamics. The COVID pandemic had an impact on this trend, with people migrating away from large cities to reduce contagion risk and/or to stay in a different, sometimes more relaxing environment. The pandemic certainly had an impact on patterns of urbanization, but it is too early to say whether this represents a long-term trend or a temporary blip. AI impact: Urbanization may not have a direct correlation with AI, but the concentration of people and resources will facilitate the growth and adoption of AI, increasing the divide with non-urban populations, potentially widening income inequality and increasing polarization.
11.6
Demographic Shifts
Description: The demographics of many countries are changing, with increasing diversity in terms of race, ethnicity, and religion. This trend is raising questions about identity, tolerance, and social cohesion. This trend is in the news every day in Europe, with the large migration from the Northern regions of Africa, as well as from war-affected Ukraine. If properly managed, this may counterbalance the reduction of the active workforce due to population aging. AI impact: While this change may not have a direct correlation with AI, AI can help organizations and governments better understand and serve these diverse populations through language translation, sentiment analysis, and cultural understanding. It can also help to better deal with a changing workforce demographic via customized training programs and tutoring.
11.7
Globalization
Description: The world is increasingly interconnected, with more trade, migration, and cultural exchange. This trend is creating both opportunities and challenges for businesses, governments, and individuals. Whether this trend will remain as relevant in the near future is not clear. Rising nationalism and protectionism, together with concerns about a possibly over-applied just-in-time global supply chain, may lead to a rethinking of the whole approach to globalization. The COVID-19 pandemic exposed the risk of having a single main location for most of the elements of the supply chain for a large share of our products. AI impact: A more distributed supply chain would be more resilient but more complex to manage, and this is where AI could play a relevant role, managing those more complex systems by considering very large amounts of data and possible scenarios.
11.8
Collaboration
Description: Powered by increasingly powerful technology, people have more opportunities to collaborate than ever before. Collaboration can take many forms, from informal networks of like-minded individuals to formal partnerships between organizations. Some examples of collaborative efforts include open-source software development, crowdsourced funding for creative projects, and collaborative research initiatives. People may prefer to spend more time contributing to shared initiatives than streaming TV or playing games. AI impact: AI can be deeply involved in collaboration. On the one hand, AI itself could benefit from collaboration in terms of expanding development. The increasing capabilities of personal
devices could boost the diffusion of shared computational power and of the metrics collected by individual devices. While forms of distributed computing such as volunteer computing were more popular in the past—with examples like SETI@home and Folding@home—they could become popular again due to the high costs of developing AI systems, which currently lead to a concentration of development in a very limited number of actors. Sharing resources could give access to development to individuals and organizations currently unable to play a role for lack of resources. Resources in AI can also be data. Pooling data from different organizations and individuals could create larger and more diverse datasets, which can help improve the accuracy and effectiveness of AI algorithms. This may seem a not-so-relevant element, but it could be a way to reduce the bias that affects the majority of large AI models. On the other hand, AI could help provide a better way to communicate and collaborate across languages and cultures. It could also provide personalized support to virtual team members.
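As a toy illustration of the data- and resource-pooling idea above, the sketch below mimics "federated averaging": several hypothetical participants each fit a simple model on their own private data and share only the fitted parameters, which are then averaged. All names, numbers, and data here are invented for illustration; this is a minimal sketch, not a production recipe.

```python
import numpy as np

def local_fit(x, y):
    """Least-squares slope/intercept fitted on one participant's private data."""
    A = np.vstack([x, np.ones_like(x)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # (slope, intercept)

rng = np.random.default_rng(0)
participants = []
for _ in range(5):
    # Each participant holds its own noisy sample of a shared underlying trend.
    x = rng.uniform(0, 10, 50)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 50)
    participants.append(local_fit(x, y))

# The pooled model is simply the average of the locally trained parameters:
# raw data never leaves any participant, only the fitted coefficients do.
pooled = np.mean(participants, axis=0)
print(pooled)  # close to (2.0, 1.0)
```

The point is that the pooled parameters approximate the shared underlying trend without any participant exposing its raw data, which is one way the "pooling" described above could reduce concentration of development.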
11.9
Personalization
Description: With advances in technology and data analytics, people are increasingly seeking out personalized products and experiences. This can also be read as an increasing tendency of people to think in terms of "all about me". Individualism is a cultural shift in which individuals prioritize their personal needs and desires over the needs and desires of others or the collective group. It can also lead to a sense of isolation and disconnection from others. AI impact: AI can play different roles. On the one hand, it can serve individualism, providing more and more customized products and services. On the other hand, it could help social platforms drive individualists into more manageable clusters of individuals. This is what some of them are already doing, driving the polarization we are seeing as a growing trend in society. On the same side, but with a positive connotation, it can help individuals reach out to like-minded people in a sort of peer-to-peer approach. Yes, it would be a sort of bubble/echo chamber again, but without big brothers/big sisters orchestrating it.
11.9.1 Political Correctness
Description: It refers to the use of language, policies, and behavior that aim to promote inclusivity, sensitivity, and tolerance toward historically marginalized groups. It has been a controversial issue in recent times, with some people seeing it as a necessary step toward creating a more just and equitable society, while others view it as a form of censorship and an infringement on free speech. This is one of the topics fueling polarization.
AI impact: AI can be used to recognize language that is not politically correct and flag it. On the other hand, AI can itself carry biases, amplifying elements that are not politically correct. For example, if an AI system is trained on data that reflects existing stereotypes, it may produce biased or stereotypical outcomes. This could have negative implications for areas such as hiring and recruitment, where biased AI systems could perpetuate discrimination.
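To make the flagging idea concrete, here is a deliberately naive, rule-based sketch. The word list and suggested alternatives are purely illustrative assumptions; real systems rely on trained classifiers and context, precisely because hand-written rule lists inherit the biases of whoever writes them.

```python
import re

# Illustrative (hypothetical) term-to-suggestion mapping.
SUGGESTIONS = {
    "chairman": "chairperson",
    "manpower": "workforce",
    "blacklist": "blocklist",
}

def flag_terms(text):
    """Return (flagged term, suggested alternative) pairs found in the text."""
    hits = []
    for term, alt in SUGGESTIONS.items():
        # Whole-word, case-insensitive match.
        if re.search(rf"\b{term}\b", text, flags=re.IGNORECASE):
            hits.append((term, alt))
    return hits

print(flag_terms("The chairman approved more manpower."))
# → [('chairman', 'chairperson'), ('manpower', 'workforce')]
```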
12
Conclusions
12.1
Overview
Base source: Wikimedia Commons; Description: NAO robot, hand-waving gesture; Source: Own work; Author: Anonimski; Date: 22 May 2014, 14:28:06; the file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. Edited using Picsart.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 C. Lipizzi, Societal Impacts of Artificial Intelligence and Machine Learning, Synthesis Lectures on Computer Science, https://doi.org/10.1007/978-3-031-53747-9_12
AI is here to stay. It is the natural extrapolation of all the digitalization and computing we have seen for more than 50 years. It is unlikely to be a “revolution” we can put a date on; many sectors will benefit from the technology in an incremental way.
AI is not only about technology: it has implications running deep in our society. We are talking about systems driven by knowledge, and different knowledge drives different behaviors. The way we understand the limitations of this knowledge will be essential for an organic use of the technology. What we know for sure is that people easily trade convenience for many of their assets: money, privacy, personal growth. AI can make our lives easier. Why read the news when you have a two-sentence summary read by an AI-based chatbot? But what are the criteria for the summary? How biased is the summary? Which type of bias is there? Who defines the bias in my summary?
Of the estimated 45 TB of text used to train ChatGPT, more than 90% is in English. Yes, English is widely used in open sources, but according to Ethnologue, a database of world languages, only about 380 million out of 8 billion—that is the world’s population—speak English as their native language (Ethnologue, 2023). That means the vast majority of the population is not really represented in one of the current largest “AI” language models.
Language is not “just” language. It is a form of representation of concepts. Wikipedia published a list of the number of words appearing in the main dictionaries of different languages: Korean has more than 1 million words, Japanese 500,000, Italian 260,000, English 170,000. Each word represents a concept. Words are assigned meaning through cultural and societal agreements, and they serve as a way for people to communicate and convey complex thoughts, emotions, and experiences. The lack of words to represent a concept can make it difficult to represent that concept accurately and effectively.
For example, if there is no word in a language to describe a particular feeling or experience, it becomes challenging for people to communicate and understand that experience with others. This can lead to misunderstandings, confusion, and a general lack of shared understanding. The lack of words can also limit the ability to think and reason about certain concepts: words serve as mental shortcuts that help us categorize and fully understand information. The bottom line is that, with models like the current GPTs, we are missing a significant part of the knowledge embedded in non-English sources. Filling the gap is not easy, though. How do you put together different languages? Translating all languages into a single one could be a possibility, but, again, some information can be lost in translation. It would also require a machine-based system: if we want equal representativeness, there would be many millions of documents to translate. Translation algorithms are improving, but they still lack accuracy when the goal is to cover a large number of languages. Languages are formal representations of concepts, and a symbolic representation of language could be independent of any specific language. The study of formal languages started in the late nineteenth and early twentieth centuries. The concept of a formal language
as a set of symbols and rules for combining them was introduced by Giuseppe Peano and further developed by David Hilbert and others. Over the years, a lot has been done, but not much after the explosion of Machine Learning. As mentioned in previous chapters, the symbolic and the data-driven approaches are quite separate today. The convergence of those two sides of the language—and knowledge—representation is essential if we want to develop AIs that can be used organically in our society with the proper industrial strength.
12.2
It is not Only About Technology
AI has the potential to have a major impact on our society. There are ethical, political, and legal variables in the equation determining the future of this technology. Depending on how we use it, AI can take completely different shapes. We can discuss the direction it is taking, but technology is moving fast. The non-technology elements that can make or break AI seem to move much more slowly. This may lead to dysfunctional deployments of the technology, potentially creating more problems than it solves. This has been true for every technology developed so far: the printing press, for example. It played a crucial role in disseminating information and knowledge, making it more accessible to a wider audience. On the negative side, it was used to disseminate misinformation, fueling conflicts and divisions. We could say the same for the web, television, or, in part, social media. The more complex a technology is, the more people focus on its outer layer: what it seems to be, what other people or the media say, without asking questions about what is inside, which is too complex for the majority. Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic”. This is what some people thought about the first computer applications, and this, to a certain degree, is what some people are thinking about AI. Wikipedia lists sci-fi movies that included artificial intelligence either as a protagonist or as an essential part of the plot. From this list, there were 12 movies in the 40-year period 1934–1974, 84 in 1975–2015, and 60 in the following seven years to 2023, going from roughly 0.3 movies/year to the current almost 9 (Wikipedia, List of artificial intelligence films, 2023). ChatGPT reached 1 million users in just five days after it was launched in November 2022.
Based on Google Trends, the term “Artificial Intelligence” reached its highest number of searches ever in May 2023, both in the US and worldwide. This is to say the hype is at a historically high level. Expectations are at their peak. The “intelligence” in the name may make a large number of people think that we can get a (non-humanoid) version of Ava in Ex Machina. Ava passed the Turing test, meaning it (she?) had full human-like cognitive capabilities. As mentioned a few times in the chapters above, we are far from that. Even if we are working hard on this, we do not yet have a reasonable representation of “intelligence,” and the current leading model is
based on an advanced correlation approach. The system couples our queries with the best-matching elements it has in its “memory”. It is not generating new knowledge but generating answers or scenarios composed of elements that appeared in the past and are in its training dataset. For some tasks, this could be plenty: tasks that are “standard” are pretty much the same as they have always been, and they can be automated using the large language model approach. When we have to deal with exceptions that cannot be handled by a statistically determined “best effort” but require an “intelligent” approach, then we need “intelligence”: a symbolic representation of what a human expert in that field would do. We are not there yet. The “faith” people may have in these systems may create real problems, with decisions and actions determined by an agent, the “bot”, that is intrinsically incapable of making them. We do not drive a car thinking it could fly; before using it, we learn to use it. You do not use your phone to get a coffee. Here things are more complicated. There is no instruction manual for ChatGPT, and even if there were one (and in a sense, there is), people would just use the bot, getting the wow effect and building up expectations. In Academia, we are teaching how to use these tools, describing advantages and limitations, but we are reaching a small portion of the population. People will make decisions based on these tools, and we are starting to read about cases of misuse. AI can be a revolution or “just” an important step in the natural evolution of the tools humanity uses. Time will tell. Looking back at past technologies, humanity added complexity at each step. At the basic communication level, we have technologies like printing, telecommunication, radio, television, and the internet. On the computational level: paper and pencil, slide rule, mechanical calculators, electronic calculators, and computers.
On automation: manual work, mechanical devices, electromechanical control systems, computerized automation, and robotics. AI could be an enhancer for all levels, making all of them easier to use and more powerful, addressing some level of intellectual needs. It could disrupt lower-level businesses, those that are more repetitive and more predictable in terms of outcome. What we know is that AI is not just about technology. There are several scenarios for the future, based on the answers to questions like: What is going to be the model driving the innovation? Many of the key components of the current leading systems are based on the generous contributions of companies like Google, making models like the Transformer (the “T” in GPT) freely available to the community. But then companies like OpenAI and Microsoft built commercial solutions on top of it. This “open” approach will not continue, and companies will build paywalls around their solutions. Considering that the investments required to develop the technology are substantial, Academia may not be the leader. The government could do it. Governments did it for the aerospace industry, for example, due to its strategic importance and the high costs of aerospace research, development, and production. AI may be a similar case. An industry-driven AI would be quite different from a government-supported AI. In the first case, there will be an oligopoly, with a few managing what “intelligence”
people will use. It is difficult to think of something more controlling than that: the behavior in potentially key decisions across the world would be defined by a few commercial models sold as black boxes, with the developers deciding the decision processes upfront. There could be regulating bodies but, as we discussed in a previous chapter, regulations make sense only if enforceable, and these may not be, given the complexity of making the rules actionable worldwide. Then there are the “non-technical” aspects of AI: what could be a better model to represent knowledge? Knowledge graphs, neuromorphic computing, or the next “room theory” (my framework to represent knowledge). No matter what the computational actuator will be, we need a better model for the knowledge, the current advanced statistical one not being an option.
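The “best matching” behavior described earlier, coupling a query with the closest elements in the system’s “memory”, can be illustrated with a toy retrieval sketch using bag-of-words vectors and cosine similarity. The documents and query below are invented for illustration; real models use learned embeddings rather than raw word counts, but the matching intuition is similar.

```python
import math
from collections import Counter

# A tiny, hypothetical "memory" of previously seen texts.
MEMORY = [
    "how to reset a password",
    "steps to bake sourdough bread",
    "fixing a flat bicycle tire",
]

def cosine(a, b):
    """Cosine similarity between two texts as bag-of-words vectors."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

def best_match(query):
    """Couple the query with the best-matching element in memory."""
    return max(MEMORY, key=lambda doc: cosine(query, doc))

print(best_match("reset my password"))  # → "how to reset a password"
```

Note what the sketch does not do: it never produces anything outside its memory, which is the point made above about generating answers composed only of elements seen in the past.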
12.3
Now What
Given all the above, how can “I” benefit the most from this “revolution”? Let’s see the options for the different “I”. For all the different “I”, there is a common denominator: if you want to take advantage of AI tools, you need to understand—to the best of your technical knowledge—what those tools can do and what they cannot. Blind use of the tools is a recipe for failure. Let’s analyze some of the “I”.
12.3.1 I’m a Student
The short story is that AI can help you be a better student. Or a worse one. Like we are all using Google—or an equivalent—for our searches, we will use AI as a support for what we do. I’m using it in the process of writing this book. It is my sidekick, helping me get past writer’s block, start a background search, and act as a sort of “super Google”. Then I drill down using the actual Google for more, and then I write the story and the context. This is what a student could do. It is—and it will be more and more—a way to expand the possibilities we have as students, researchers, or writers. It could be bad if you use it more or less verbatim. There are a very large number of cases where the tool is either wrong, inaccurate, or very limited in its answers. If you just copy and paste, you are not learning. Yes, you are somehow solving the problem, but you don’t know how. You will never know if what the tool is telling you is right or wrong. This is particularly true for coding. Most easy coding is handled acceptably by the tool, as it is now. For more complex code, the tool generates incorrect results in a significant number of cases. If you know how to code, you can take the parts that are good and discard the wrong ones. If you don’t, you fail. If you are a student learning how to code and you use the tool from the very beginning, you
will never know if the code generated for a more complex problem is right or not. And you fail.
12.3.2 I’m a Content Creator
The category of content creators has evolved dramatically over the last several years. In the age of print, it meant creating content for books, newspapers, and magazines: content creators were writers. In the broadcast era, it was all about creating content for a mass audience: radio hosts, TV show producers, and filmmakers. In the Internet-digital era there were—and in part still are—websites, blogs, social media, and online videos. This was, and is, a democratization of content deployment, paving the way for the rise of influencers, who create content driven by the feedback of their followers. Video games and, in general, digital interactive entertainment are a large and growing market, stepping into traditional filmmaking. More than 3 billion people played games in 2022, spending a combined total of almost $200 billion (Newzoo, 2022). Then there is the market for virtual reality (VR) and augmented reality (AR). VR has a projected global value of about $21 billion by 2025 (Markets and Markets, 2023). The global AR market was valued at more than $38 billion in 2022 and is expected to grow by about 40% per year from 2023 to 2030, reaching more than $595 billion in 2030 (Grand View Research, 2023). In video gaming—with or without AR/VR—the big names will continue to have a leading role, in particular for high-budget games with high-end interactivity (including virtual or augmented reality). But there is a growing market for independent developers, mostly using phones and apps as platforms. Indie games went from 13% of the market in 2021 to 17% in 2022 (YouGov, 2022). Still, development costs can be relatively high. AI tools can help reduce costs and expand the offering by automating tasks, detailing characters’ behavior, and optimizing performance.
Moving beyond indie developers, AI tools can help the industry a lot: generating content, making characters “learn” and improve over time, enabling natural-language interaction, and giving characters a sort of computer vision to analyze situations in real time from a subjective viewpoint. All of these are somehow already available, but quality and availability are going to grow substantially. If you are an influencer, AI tools could be a good sidekick. We are already seeing some content generated by large language models, but the outcome is still basic and too one-size-fits-all. The coming generations of AI tools will help curate content, providing more customization. They can assist in making better use of metrics to understand the audience, track performance, and then propose ways to improve it. AI tools can analyze large amounts of data—such as user preferences, browsing history, demographics and behavior patterns—to provide customized content recommendations to individual users, hopefully keeping a good balance between personalization and privacy.
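As a minimal sketch of the recommendation idea just described, the snippet below suggests items liked by the user whose history overlaps yours the most, using Jaccard similarity over sets of interests. The users, interests, and similarity measure are illustrative assumptions; real recommenders combine far richer signals (browsing history, demographics, behavior patterns) and stronger privacy safeguards.

```python
# Hypothetical interest histories for three users.
HISTORY = {
    "ana":  {"cooking", "travel", "tech"},
    "ben":  {"cooking", "travel", "music"},
    "carl": {"sports", "finance"},
}

def jaccard(a, b):
    """Overlap of two interest sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def recommend(user):
    """Suggest interests of the most similar other user that this user lacks."""
    others = {u: h for u, h in HISTORY.items() if u != user}
    nearest = max(others, key=lambda u: jaccard(HISTORY[user], others[u]))
    return sorted(others[nearest] - HISTORY[user])

print(recommend("ana"))  # → ['music']
```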
12.3.3 I’m a Professional
It obviously depends on what your profession is, but AI tools can bootstrap your activities. We are still at an early stage with tools like ChatGPT, with a too-basic representation of knowledge and no domain-specific data/“knowledge”. But within a few generations of tools—which at the current rate of growth of the technology could be 1–2 years—they can potentially replace most lower-level tasks. In the legal domain: research, document analysis, compliance monitoring, sentencing, and parole recommendations. All the more “standard” tasks could be automated, leaving legal professionals free to concentrate on the strategy of the case, running different scenarios in a short time, and expanding the search. Pretty much the same holds for finance: risk assessment and management, basic trading, and compliance monitoring. In mergers & acquisitions, AI could provide support in deal sourcing, due diligence, valuation, and post-merger scenario analysis. The tool may not provide an unsupervised output, but it will run scenarios for the professional to work on. The professional, again, can then focus on higher-level tasks, integrating the scenarios proposed by the tool and proposing solutions with a higher level of complexity and value for the client.
12.3.4 I’m an Executive
It obviously depends on the industry and the size of the company, but the uses of AI tools could fall under categories such as specialized virtual assistants, data analysis, personalized news and information, and decision support systems. The more critical and domain-specific the needs are, the more the tool should be domain “cognizant”. The current tools are not, but it is reasonable to think they will be. My lab is working on that, as are many other labs around the world. For example, personalized news and information could be great for an executive, but the output has to be specific to the industry, the market segments, the topics of most interest at a given time, and the strategy and preferences of the executive. We developed a prototype of something similar, but it would require a non-academic environment to become a product. There will be a tool doing this job soon, at a growing level of accuracy.
12.3.5 I’m an Investor
I’m no expert in investing, and emerging technologies can be volatile, with fast growth and sometimes faster demise. Apart from companies directly delivering tools or solutions based on AI, there are companies that are an essential part of the AI pipeline. NVIDIA grew a lot after the announcements around ChatGPT, because the growth of large language models, as we build them now, requires a lot of the type of parallel computing that the NVIDIA
chips are the market leader for. I’m in no position to recommend companies to invest in; I’m just saying that the companies fueling the AI pipeline will see growth. On the other hand, companies based on lower-level intellectual human jobs may be negatively impacted. This is unless those companies are functional and essential to the AI pipeline, as with data labeling/annotation. But even they are at risk of being replaced: we are experimenting with the use of symbolic knowledge—such as taxonomies or knowledge graphs—as a base to train AI tools.
12.3.6 I’m a Consumer
Overall, AI tools will make consumers’ lives easier, with some nightmares. Easier, thanks to more customized shopping assistants, personalized recommendations, and more customized products and services. Companies like Walmart and CALA are already working on using ML-powered augmented reality for virtually trying on clothing. The nightmares come from the usual issues discussed in previous chapters. Privacy is one: AI tools will collect our data “better”, and the “enriched” data can be misused and/or be subject to data breaches, leading to privacy violations or identity theft. AI tools have the capability to generate precise and detailed profiles by analyzing vast amounts of data. These profiles can be leveraged to infer additional information beyond what was originally provided, leading to the creation of fictitious scenarios that carry intrinsic risks. AI algorithms can draw connections and make predictions based on patterns and correlations found in the data. This means that even with limited information, AI tools can potentially fill in the gaps and generate highly plausible but inaccurate narratives. Such narratives can be used to fabricate situations to mislead individuals, organizations, or automated systems. Once more, it would be great to have international regulations for AI, but I have expressed my skepticism about this actually happening. There will be regulations, much like the European Union’s privacy regulation known as the General Data Protection Regulation (GDPR), but there will always be workarounds and lobbyists making sure to tone down the enforcement or create loopholes. With the GDPR, we have seen several workarounds: consent manipulation, for example; fake data anonymization, with an AI tool recreating the full picture; data delocalization, where companies store data in countries with more relaxed policies; and third-party data sharing, where companies work with partners to obtain data without needing to ask for consent.
Closing Remarks
In the journey through the world of AI, we have explored its advancements, its transformative impact on various industries, and the societal implications it brings. From healthcare to transportation, finance to education, AI is demonstrating its potential to significantly change the way we live, work, and interact with the world around us. While AI presents incredible opportunities, it also poses significant challenges. The ethical considerations surrounding AI’s deployment, the potential job displacement, the need for regulation and accountability, and the importance of addressing biases and privacy concerns are all critical areas that require a great deal of attention. As we move forward, it is crucial to strike a balance between innovation and responsibility. Collaboration between governments, industries, and society is essential to maximize the benefits of AI for the collective good. Ensuring that AI systems are transparent, explainable, and accountable will foster trust and confidence in their use, creating a virtuous circle for the improvement of the technology and its use. The future of AI holds great promise, but it is up to us to shape it responsibly. By fostering a culture of ethics, inclusivity, and continuous learning, we can steer AI toward a future that benefits humanity as a whole. In this rapidly evolving landscape, the future is promising and the possibilities are numerous. We can navigate the challenges, seize the opportunities, and unlock the full potential of AI to create a better and more prosperous world for all.
Notes on the Images
Adding the images has been a major task, taking a significant portion of the overall time needed to complete the book. I initially wanted to use AI-generated images, but the copyright on this type of content is still not well defined. After talking with the editor, we agreed to use traditional rights-free images. Having AI in my images had a semantic meaning for this book, though, and I didn’t want to lose it. As a result, I used rights-free images but edited them using a tool from the latest generation of image editors, Picsart, with AI capabilities to modify pictures. The images in this book have semantic roles, visually representing the content of the chapter they are in. Getting semantically specific rights-free images can be challenging. I primarily used Wikimedia Commons for the base images, but they have an embedded meaning. In particular:
- The retro robot in the first image is the one from “Metropolis”, Fritz Lang’s iconic 1927 sci-fi movie.
- The Rodin thinker in waves of data represents humans puzzled by the massive amount of data we are immersed in.
- The benevolent AI “Adrian” has the face of the 14th Roman emperor, Hadrian, considered one of, if not the most, positively influential emperors in ancient Roman history (yes, I was born in Rome, Italy …).
- The malevolent AI “Kal” has the face of the third Roman emperor, Caligula, considered the worst emperor in ancient Roman history.
- The professional working in technology, “Tom”, has the face of Thomas Edison, one of the best-known and most prolific inventors from the New York area.
- The STEM professor “Al” has the face of Alan Turing, a British mathematician and computer scientist often considered one of the fathers of Artificial Intelligence.
- The data scientist “Gert” has the face of Gertrude Blanch, an American mathematician who did pioneering work in numerical analysis and computation and was a leader of the Mathematical Tables Project in New York.
Underneath each picture I placed the source I used for it.
Bibliography
Administration, U. E. (2022, October). How much electricity does an American home use? Retrieved from https://www.eia.gov/tools/faqs/faq.php?id=97&t=3
Affairs, U.-D. (2018, May). 68% of the world population projected to live in urban areas by 2050. Retrieved from https://www.un.org/development/desa/en/news/population/2018-revision-of-world-urbanization-prospects.html
Bank, T. W. (2022, November). Poverty. Retrieved from https://www.worldbank.org/en/topic/poverty/overview
Blogs, W. B. (2016, August). What is non-revenue water? Retrieved from https://blogs.worldbank.org/water/what-non-revenue-water-how-can-we-reduce-it-better-water-service
Brzezinski, M. M. (2022). Sharing the gains of transition: Evaluating changes in income inequality and redistribution in Poland using combined survey and tax return data. European Journal of Political Economy, 73, 1–14.
Cao, L. (2021). AI in finance: Challenges, techniques and opportunities. arXiv:2107.09051
Center, P. R. (2023, February). 60% of Americans would be uncomfortable with provider relying on AI in their own health care. Retrieved from https://www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care/
Center, P. R. (2023, February). Public awareness of artificial intelligence in everyday activities. Retrieved from https://www.pewresearch.org/science/2023/02/15/public-awareness-of-artificial-intelligence-in-everyday-activities/
Channel, H. (2018, August). The world’s first web site. Retrieved from https://www.history.com/news/the-worlds-first-web-site
Channel, H. (2019, October). Who invented the internet? Retrieved from https://www.history.com/news/who-invented-the-internet
CNN. (2023, January). Buzzfeed says it will use AI. Retrieved from https://www.cnn.com/2023/01/26/media/buzzfeed-ai-content-creation/index.html
Consulting, A. R. (2022, December). Big data market size - global industry, share, analysis, trends and forecast 2022-2030. Retrieved from https://www.acumenresearchandconsulting.com/big-data-market
Consulting, A. R. (2022, December). Data analytics market size. Retrieved from https://www.acumenresearchandconsulting.com/data-analytics-market
Consulting, N. M. (2023, July). Explainable AI market. Retrieved from https://www.nextmsc.com/report/explainable-ai-market
Data, O. W. (2022). Annual scholarly publications on artificial intelligence. Retrieved from https://ourworldindata.org/grapher/number-artificial-intelligence-publications
Wikipedia. (2023). List of languages by number of native speakers. Retrieved from https://en.wikipedia.org/wiki/List_of_languages_by_number_of_native_speakers
FBI. (2022). Internet crime report 2021. Internet Crime Complaint Center.
World Economic Forum. (2023). Future of jobs report 2023.
Gartner. (2022, August). What’s new in the 2022 Gartner hype cycle for emerging technologies. Retrieved from https://www.gartner.com/en/articles/what-s-new-in-the-2022-gartner-hype-cycle-for-emerging-technologies
Grand View Research. (2023). Augmented reality market size, share & trends analysis report. Retrieved from https://www.grandviewresearch.com/industry-analysis/augmented-reality-market
Granic, A. (2022). Educational technology adoption: A systematic review. Education and Information Technologies, 9725–9744.
The Guardian. (2015, November 5). Adele’s new single breaks record for first week download sales. Retrieved from https://www.theguardian.com/music/2015/nov/05/adele-single-hello-breaks-first-week-download-record
Herath, H. G. (2022). Adoption of artificial intelligence in smart cities: A comprehensive review. International Journal of Information Management Data Insights.
Queensland Brain Institute. (n.d.). History of artificial intelligence. Retrieved from https://qbi.uq.edu.au/brain/intelligent-machines/history-artificial-intelligence
Mordor Intelligence. (2023). Global fintech market size. Retrieved from https://www.mordorintelligence.com/industry-reports/global-fintech-market
SAE International. (2019, January). Levels of driving automation. Retrieved from https://www.sae.org/news/2019/01/sae-updates-j3016-automated-driving-graphic
Javed, A. R. (2022). Future smart cities: Requirements, emerging technologies, applications, challenges, and future aspects. Cities.
Allied Analytics LLP. (2023, February). Smart city platform market. Retrieved from https://www.einnews.com/pr_news/618819570/smart-city-platform-market-expected-to-reach-usd-708-8-billion-by-2031-top-players-such-as-aws-bosch-and-quantela
Cybercrime Magazine. (2020, November). Cybercrime to cost the world $10.5 trillion annually by 2025. Retrieved from https://cybersecurityventures.com/cybercrime-damage-costs-10-trillion-by-2025/
Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50, 370–396.
McKinsey & Company. (2022, June). How technology is shaping learning in higher education. Retrieved from https://www.mckinsey.com/industries/education/our-insights/how-technology-is-shaping-learning-in-higher-education
National Library of Medicine. (2023, January). Download PubMed data. Retrieved from https://pubmed.ncbi.nlm.nih.gov/download/
The United Nations. (2021). Food. Retrieved from https://www.un.org/en/global-issues/food
Newzoo. (2022, July). Newzoo global games market report 2022. Retrieved from https://newzoo.com/resources/trend-reports/newzoo-global-games-market-report-2022-free-version
OpenSecrets. (2023, April). Lobbying data summary. Retrieved from https://www.opensecrets.org/federal-lobbying
International Labour Organization. (2023, January). Statistics on unemployment and labour underutilization. Retrieved from https://ilostat.ilo.org/topics/unemployment-and-labour-underutilization/
Orozco-Arroyave, G. M.-S.-G.-P.-T. (2022). Classification of poverty condition using natural language processing. Interdisciplinary Journal for Quality-of-Life Measurement, 1413–1435.
Park, M., Leahey, E., & Funk, R. J. (2023). Papers and patents are becoming less disruptive over time. Nature, 613, 138–144.
World Food Programme. (2020, June). 5 facts about food waste and hunger. Retrieved from https://www.wfp.org/stories/5-facts-about-food-waste-and-hunger
ReportLinker. (2023, June). Robo advisory market. Retrieved from https://www.reportlinker.com/p06269568/Robo-Advisory-Market-Global-Industry-Trends-Share-Size-Growth-Opportunity-and-Forecast.html
ResearchAndMarkets. (2022, December). Intelligent transportation system market size. Retrieved from https://www.researchandmarkets.com/reports/3972828/intelligent-transportation-system-market-size
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65, 386–408.
Funk, R. J., & Owen-Smith, J. (2017). A dynamic network measure of technological change. Management Science, 63(3), 791–817.
SimplyPsychology. (2023, July). Maslow’s hierarchy of needs. Retrieved from https://www.simplypsychology.org/maslow.html
Statista. (2022, September). Volume of data/information created. Retrieved from https://www.statista.com/statistics/871513/worldwide-data-created/
Internet World Stats. (n.d.). Internet growth statistics. Retrieved from https://www.internetworldstats.com/emarketing.htm
The Physics Factbook. (n.d.). Power of a human brain. Retrieved from https://hypertextbook.com/facts/2001/JacquelineLing.shtml
Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98.
TomTom. (2022). TomTom traffic index. Retrieved from https://www.tomtom.com/traffic-index/
U.S. Department of Education, Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning. Washington, DC.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 6000–6010).
Wikipedia. (2023, July). Causes of income inequality in the United States. Retrieved from https://en.wikipedia.org/wiki/Causes_of_income_inequality_in_the_United_States
Wikipedia. (2023). Information Age. Retrieved from https://en.wikipedia.org/wiki/Information_Age
Wikipedia. (2023). List of artificial intelligence films. Retrieved from https://en.wikipedia.org/wiki/List_of_artificial_intelligence_films
Yahoo. (2022, December). ChatGPT gained 1 million users in under a week. Retrieved from https://www.yahoo.com/video/chatgpt-gained-1-million-followers-224523258.html
YouGov. (2022, March). US: Charting the rise of indie video games. Retrieved from https://business.yougov.com/content/41600-us-charting-rise-indie-video-games