Author: Mr. Tolga Akcay
Contact:
Mail Germany: Königsallee 2b, 40212 Düsseldorf, Germany
Mail USA: 848 Brickell Ave. PH5, Miami, FL 33131, United States of America
E-Mail: [email protected]
Instagram: tolga.akcay07
Facebook: akcay.tolga07
LinkedIn: tolga-akcay-683aa9146
Publishing year: 2021
Note: The work, including all its parts, is protected by copyright. Any use without the consent of the author is not permitted.
INTRODUCTION

Are you ready for the “automation revolution” that we are about to experience? Regardless of what you think about new digital technologies like the Internet of Things, blockchain technology, and artificial intelligence, one thing is certain: they can no longer be ignored. These emerging technologies are fast transforming the way we live, communicate and work. If you have not noticed, you will after reading this book, because artificial intelligence is already present in most aspects of your life without your knowledge.

Machines and intelligent systems are increasingly taking over routine tasks that humans perform. Presently, the majority of these AI machines and solutions are designed to assist us with different tasks in certain situations. They aid our intelligence and boost our skills. For instance, in the automotive industry, automatic braking systems, blind-spot notification, collision warning, lane departure warning and several other features are now standard in recent cars. They are all possible courtesy of AI, and these applications are not just increasing in number but are also getting more accurate. Did you know that when it comes to reviewing and annotating standardized contracts, AI already performs better than human lawyers? When you consider other AI applications such as data analytics in agriculture, chatbots in healthcare and customer service, robo-advisors in banking and finance, and blockchain technology in the energy sector, you will agree with me that there is really no going back on the digital transformation we have started experiencing.

As with everything else in life, things will keep improving and advancing. Now is the time to get used to a new kind of life where we live and work with smart, intelligent systems and machines. Trust me, machines will keep getting smarter and more independent, relying less and less on human support. So, how exactly should we all prepare for this revolution? How do we take advantage of the emerging opportunities these technologies will create? Many will also be affected by the revolution. For instance, a Brookings Institution study revealed that 36 million individuals perform tasks that have “high exposure” to automation. So, in what ways can we deal with the challenges associated with artificial intelligence? Will advancement in autonomous weapons lead to a possible AI arms race? What will be the fate of humans if this happens? How can we mitigate the threats posed by the use of AI for criminal activities?
Find the answers to these questions and many more as you go through this book. The automation trend will undoubtedly accelerate, and AI will soon turn out to be the new normal.

Why will AI become the new normal? We can no longer view the existing digital technologies in isolation. The increase in the number of IoT devices and sensors will only result in more data, and the more data we have, the smarter and more autonomous AI systems will become. If we are to make sense of the available data, we require AI, because analyzing such volumes of data manually requires a lot of time and energy. The AI space is increasingly getting the attention of top investors around the world, and the rising number of acquisitions of digital technology startups is an indication that global firms are betting on these emerging technologies. Governments are already trying to let go of paper and adopt a cashless society. The expectations of consumers regarding digital technologies are changing fast, as many of us now enjoy several AI-enabled devices and services without even being aware of them. But consumers are also quick to express their disappointment whenever new tech is introduced – a sign that their expectations keep rising as they crave faster, smarter and safer services.

I have tried to provide balanced content while exploring the possible benefits and challenges that we are likely to experience as we continue to witness increased integration of AI applications. As machines continue to get more advanced, autonomous and connected, the need for us to be prepared will become ever more essential. So, how ready are you for this exciting AI world? Find out in the next 20 chapters. Here is your compass. Let's begin!
SECTION I: WELCOME TO THE WORLD OF ARTIFICIAL INTELLIGENCE
CHAPTER 1: UNDERSTANDING ARTIFICIAL INTELLIGENCE
Key Takeaway
Alan Turing contributed to the development of AI.
The Dartmouth Summer Research Project on Artificial Intelligence is believed to be the birthplace of AI.
AI has to do with “the study of agents that receive percepts from the environment and take actions.”
The different types of AI include reactive machines, limited memory, theory of mind and self-awareness.
Factors such as advanced computing architecture, availability of historical data sets, and deep neural networks are behind the resurgence of AI.

Our world was introduced to the concept of artificially intelligent robots by science fiction in the first half of the 20th century. Perhaps the first was the “heartless” Tin Man from The Wizard of Oz, and then we also witnessed the humanoid robot that impersonated Maria in Metropolis. Our minds are always exploring new information, and it was not long before a generation of philosophers, mathematicians and scientists started exploring the concept of artificial intelligence.
HOW IT ALL BEGAN: A HISTORY OF AI
Alan Turing indeed changed history on two occasions, and many in the AI space still refer to his works. Apart from helping to break the Nazi encryption machine Enigma and assisting the Allied Forces to triumph in World War II, the renowned mathematician also made a significant impact on the computing world when he asked the question: “Can machines think?” Efforts to find out whether machines can think commenced in 1950, when Alan Turing published his paper “Computing Machinery and Intelligence,” in which he posed that very question. His Turing test established the primary goal and vision of artificial intelligence. He suggested that if we can make decisions with the available information and even reason our way to solutions to problems, then why can machines not perform the same tasks? To test his hypothesis, Turing set up a simple heuristic: is it possible for a computer to engage in a conversation and provide answers to questions in a manner that can convince a suspicious individual into believing that the computer is indeed a human? You will be surprised to know that many people are still making use of the resulting “Turing test” all these years later.

So far, we have witnessed several decades of research and made tremendous advancements in AI and robotics. Although Turing could not create the first machines that reason like humans, his test still sets the standard that researchers in the space are working toward, while showing how far we remain from creating machines that can really think like humans. Several factors were responsible for his inability to accomplish his mission, and one such issue was that computers needed to fundamentally change. The generation of computers created back in 1949 lacked a key prerequisite for intelligence: they were unable to store commands. All they could do at the time was execute them. So, what this simply means is that we could instruct the computers that existed in the days of Turing on what to do, but they lacked the resources to recall what they had done previously. Another factor that affected his research was the cost of obtaining a computer, which was very expensive at the time. To lease a computer back in the early 1950s, you would require as much as $200,000 every month. In those days, those who had access to computers were big technology firms and prestigious universities.

In 1950, another individual, Claude Shannon, joined the group of early AI enthusiasts when he proposed building a machine that could be taught how to play chess. The machine proposed by Shannon could be taught to play either by evaluating a small selection of an opponent's strategic moves or by brute force.
The Dartmouth Summer Research Project on Artificial Intelligence
This is what the majority of stakeholders believe to be artificial intelligence's birthplace. The event took place in 1956, and the initial proof of concept came through a program known as “Logic Theorist,” created by Allen Newell, Herbert Simon and Cliff Shaw. The program, which was funded by the Research and Development (RAND) Corporation, was designed to imitate the human ability to solve problems. It was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky. The conference was a landmark event, and stakeholders in this space regard it as the birthplace of AI because it catalyzed AI research for about two decades. It was during this event that a group of individuals successfully conceptualized the principles of AI. However, even though research efforts witnessed steady growth after 1957, the promises made by the majority of AI's early promoters proved too optimistic. This resulted in a season of “AI winter,” in which researchers received little funding, and in the 1970s interest in AI research dropped significantly. However, factors such as advances in computational power, renewed interest in AI and new funding resulted in a revival of activities in the AI space. Take a look at the timeline of the development of AI below.
Figure 1: Timeline for AI
The first major AI winter ended in the 1990s, mainly because of significant advancements in data storage and computational power that could handle complex tasks. In 1995, Richard Wallace introduced the Artificial Linguistic Internet Computer Entity (ALICE), which could handle basic conversation. This marked a remarkable step forward in AI. IBM was not left out of the race, as the company developed Deep Blue, a computer that leveraged a brute-force strategy to play chess against Garry Kasparov, the world champion at the time. Records show that IBM's Deep Blue was able to look ahead more than six steps and was capable of calculating 330 million positions per second. Although Deep Blue was unable to beat Kasparov in the first match, it eventually won a rematch one year later.

One of the key players in the AI field is DeepMind, a subsidiary of Alphabet Inc. In 2015, the company developed software capable of playing the ancient game of Go against the world's best players. The software was trained using an artificial neural network that had learned to play from thousands of human professional and amateur games. The software, which the developers named AlphaGo, beat Lee Sedol, the best player in the world at the time, four games to one. The developers did not stop there; they took things a step further by allowing the program to play against itself via trial and error. The outcome of this phase was a program known as AlphaGo Zero. The new program was able to train itself faster and ended up beating the first program (AlphaGo) by 100 games to zero. Records show that AlphaGo Zero played without any form of human intervention and without making use of historical data. Within 40 days, AlphaGo Zero surpassed every other version of AlphaGo.
The Present State of AI
Figure 2: Subsets of AI
So, what is the current state of artificial intelligence? We have already witnessed remarkable progress in cloud computing, the availability of big data, and computational and storage capacity. Machine learning (ML) is another breakthrough in AI technology, and it has significantly enhanced the availability, growth, power and impact of AI. In chapter two, we will examine what machine learning is and how it impacts AI. The increasing rate of technological progress is providing cheaper and more efficient sensors that are capable of capturing more reliable data for AI systems. We are also witnessing dramatic growth in the amount of data available to AI systems, even as new sensors become smaller and cheaper to deploy. This has led to remarkable progress in the core AI research areas shown in the diagram below.
Figure 3: Core AI research areas
The truth is that some of the most amazing AI developments are not even within the field of computer science but in fields like medicine, finance, health and biology. We can conclude that the AI transition we are witnessing right now shares much similarity with how computers went from being utilized by a few specialized businesses to the broader economy and, eventually, the average person. It also reminds us of how internet access moved from a few multinational companies and organizations to individuals around the world in the 2000s. We can conclude that the primary reason why AI is progressing at a remarkable pace is that the fundamental limit of computer storage, which hindered progress in the field 30 years ago, is no longer an issue. What we are now experiencing is a fulfillment of Moore's Law, which estimates that the memory and speed of computers double roughly every two years. Computer memory and speed have not only met our needs but surpassed them, and this explains why Deep Blue successfully defeated Kasparov in 1997. It is also the reason why Google's AlphaGo succeeded in defeating Ke Jie, the Chinese Go champion, and a justification for the increase in the amount of AI research.
WHAT EXACTLY IS ARTIFICIAL INTELLIGENCE?
The answer to the question “what exactly is AI” depends on whom you ask. For instance, Minsky and McCarthy, the fathers of the field, presented artificial intelligence as simply “a task executed by a machine that we would have initially believed would require human intelligence.” Although this definition is broad, its broadness explains why people argue over whether something is truly AI or not. However, recent definitions of AI are becoming more specific. For instance, Francois Chollet, an AI researcher at Google and the creator of the machine-learning library Keras, provided a definition of what intelligence means. According to him, “Intelligence is tied to the ability of a system to adapt and improvise in a new environment, to generalize its knowledge and apply it to unfamiliar scenarios.” Artificial intelligence does not have a universally accepted definition, but several efforts have been made to come up with a detailed description of an AI system. The focus of such a description is to be technologically neutral, understandable, applicable to both short- and long-term horizons, and technically accurate. This description is elaborate and covers several definitions, including the ones we have already seen that are often utilized by policy, scientific and business groups. However, AI's expansive growth has resulted in so many questions and debates that no specific definition is universally accepted. Defining AI simply as building “intelligent machines” has one major drawback: this definition fails to explain the real meaning of artificial intelligence and what precisely makes a machine intelligent. The truth is that AI is an interdisciplinary science with several approaches. Among the disciplines that contribute to AI are mathematics, engineering, computer science, linguistics, sociology, biology, philosophy, neuroscience and psychology.
Figure 4: AI as an interdisciplinary science
Developments in fields such as machine learning and deep learning have created a paradigm shift in almost all sectors of the technology industry. For the purpose of this book, we need a definition that closely relates to our discussion, and I think the definition provided by authors Stuart Russell and Peter Norvig in their book, “Artificial Intelligence: A Modern Approach,” will suffice. They defined AI by unifying their work around the theme of intelligent agents in machines. According to the authors, AI has to do with “the study of agents that receive percepts from the environment and take actions.” In an attempt to define AI, there are four unique approaches that stakeholders and researchers have adopted over the years. The field of AI has been defined as:
Thinking rationally
Thinking humanly
Acting rationally
Acting humanly
Take a look at the first two approaches and you will agree with me that they have to do with thought processes and reasoning. The last two approaches to the definition of AI focus on behavior. An AI system comprises three key elements:
Figure 5: Key elements of AI
The function of sensors is to collect raw data from the environment. Actuators, on the other hand, act on that data to alter the state of the environment. The primary power of an artificial intelligence system rests mainly on its operational logic: for a given set of goals, the operational logic delivers an output to the actuators based on the data provided by the sensors. Examples of this output include decisions, recommendations or predictions that can significantly influence the environment's state. So, in summary, we can conclude that AI has to do with the design and development of computer systems that can execute tasks that would ordinarily require human intelligence. Such tasks include translation between two or more languages, speech recognition, decision making, visual perception and several others. Some common examples of real-life AI systems include:
Navigation systems
Self-driving cars
Human vs. computer games
Boston Dynamics
Chatbots

Three Major Ways AI Can Help Humans
Basically, AI involves a set of technologies that can help humans solve several challenges by supplementing our competencies or even replacing them in some cases. The three elements of AI which we earlier discussed can further be simplified as three areas of impact (a minimal sketch of the resulting sense-think-act loop follows after this list):
Sensing: Artificial intelligence can either augment or even replace human sensory capabilities, which will, in turn, speed up simple tasks like visual detection. A good example is the use of an AI system to automatically analyze street and traffic cameras in real time. This use case, if well utilized by the right government agencies, can help manage the flow of traffic, cut down on pollution and optimize public transport.
Thinking: The second way AI can help humans is in thinking. AI and related technologies like deep learning, machine learning and natural language processing are useful for analyzing and processing large volumes of data at a speed that exceeds that of humans, and in some cases more effectively.
Acting: When it comes to acting, AI as well as other related technologies like intelligent automation (chatbots and virtual assistants) are capable of relieving humans of simple decision-making jobs. Consequently, frontline employees can direct their time and skills toward other tasks that can significantly improve services.
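To make the sensor–logic–actuator structure concrete, here is a minimal sketch of a sense-think-act loop. Every name in it (Sensor, Actuator, ThermostatLogic) and the thermostat scenario itself are hypothetical, invented purely for illustration:

```python
# A minimal sense-think-act agent loop. All class names here are
# hypothetical, invented for illustration only.

class Sensor:
    """Collects raw data from the environment (here: a fake temperature)."""
    def read(self):
        return 19.5  # stand-in for a real measurement

class Actuator:
    """Acts on the environment based on the agent's decision."""
    def apply(self, action):
        print(f"Actuator executing: {action}")

class ThermostatLogic:
    """Operational logic: maps sensor data to an action for a given goal."""
    def __init__(self, target):
        self.target = target  # the goal the agent pursues

    def decide(self, temperature):
        return "heat on" if temperature < self.target else "heat off"

def agent_step(sensor, logic, actuator):
    percept = sensor.read()          # sense
    action = logic.decide(percept)   # think
    actuator.apply(action)           # act

agent_step(Sensor(), ThermostatLogic(target=21.0), Actuator())
```

However simple, the loop mirrors the three elements above: the sensor gathers data, the operational logic turns it into a decision for a given goal, and the actuator changes the state of the environment.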
Different Types of Artificial Intelligence
There are basically four types of AI, and each of them represents a different stage of development we have attained in this space. So far, the AI models available belong to the first two types. We have yet to reach the third and fourth types, and key players believe we may get there within a few decades.
Figure 6: Types of Artificial Intelligence
Reactive Machines
This class of machine works based on the most basic principles of artificial intelligence. As the name implies, reactive machines can only use their intelligence to perceive and react to their environment. They cannot store memories, which also means they are unable to draw on previous experiences when making real-time decisions. Since reactive machines perceive the world directly, they are designed to accomplish only a limited set of specialized tasks. Limiting the worldview of reactive machines is not really some kind of cost-saving measure. Instead, it simply implies that this kind of AI will be more reliable: it reacts in exactly the same way to a particular stimulus every time. IBM's Deep Blue is an excellent example of a reactive machine. It was only able to identify the various pieces placed on a chessboard, understand their moves in line with the rules of the game, acknowledge the present position of each piece and ascertain the next most logical move at that point. It lacked the capability to explore future potential moves its opponent might make or to place its pieces in the best positions. Deep Blue viewed every turn as its own reality, completely separate from any previous movement. Google's AlphaGo is another example of a reactive machine for playing games.

Limited Memory
The next type of machine is classified as limited memory artificial intelligence. These machines can store previous data and predictions as they acquire information and weigh potential decisions. So, limited memory AI essentially examines the past to get an idea of what may happen next. Apart from being more complex, limited memory artificial intelligence also opens the door to greater possibilities than reactive machines.
Figure 7: Steps for using limited memory
This type of AI is created when a team of developers either trains a model to analyze and use new data or builds an AI environment that facilitates the automatic training and renewal of AI models. There are six crucial steps that must be followed when utilizing limited memory AI in machine learning. The three main machine learning models that use limited memory AI include:
Long Short-Term Memory (LSTM)
Reinforcement learning
Evolutionary Generative Adversarial Networks (E-GAN)

Theory of Mind
This type of AI is still theoretical, which means we have yet to acquire the scientific and technological capabilities required to reach this level of AI. The concept rests on the psychological premise that living things have emotions and thoughts that influence their behavior. Applied to AI machines, this implies that AI would have the ability to comprehend how other machines, animals and humans feel and make decisions via self-reflection and determination, and would then use the acquired information in making its own decisions. What this means is that this type of machine could grasp and process the concept of “mind,” other psychological concepts in real time and emotional fluctuations in decision making, eventually establishing a two-way relationship between AI and humans.

Self-Awareness
As soon as we are able to establish theory of mind in AI, the final stage of AI development is for it to become self-aware. AIs that fall under this category would have human-level consciousness and be fully aware of their own existence. A self-aware AI is conscious of the emotional states of other people and machines and can understand the needs of others not just from the information they provide but from the way they communicate it. This level of AI depends first on human researchers understanding consciousness itself and learning how to replicate it so that it can be built into machines.
MAJOR CATEGORIES OF AI
At a very high level, artificial intelligence is also separated into three broad categories.
Figure 8: Categories of AI
Narrow AI
This is the type of AI that most of us know and see daily around us, especially in computers, and it is often regarded as “weak AI.” Narrow AI systems are intelligent systems that have learned how to execute specific tasks without being explicitly programmed to carry them out. You can see them in speech and language recognition AI like some of the examples I shared earlier. Examples of narrow AI include the Siri virtual assistant found on the Apple iPhone, the machine intelligence in the recommendation engines that suggest products we may like based on what we have previously purchased, and the vision-recognition systems on self-driving cars. The reason why these systems are regarded as narrow AI is that, unlike humans, they can only learn or be taught how to carry out specific tasks. The list of emerging applications for narrow AI has been growing over the years, even as interest in the space continues to increase. Narrow AI can be used for:
Organizing business and personal calendars.
Flagging inappropriate online content.
Detecting wear and tear in elevators based on data obtained from Internet of Things (IoT) devices.
Helping to interpret video feeds from drones that perform visual inspections of all kinds of infrastructure, like oil pipelines.
Generating a 3D model of our world from satellite imagery.
Promptly responding to simple customer service inquiries.
Assisting radiologists in locating potential tumors in X-rays.
Coordinating with other intelligent systems to execute tasks such as booking a hotel at a suitable location and time.

There has been rapid development of new applications for some of these learning systems. For instance, Nvidia, a graphics card designer, recently came out with Maxine, an AI-based system designed to provide users with excellent-quality video calls that are not affected by the speed of the internet connection. What the system does is cut down on the bandwidth required for making such calls. Well, I must point out that even though these AI systems have much untapped potential, our ambitions for the technology often tend to exceed reality. One striking example is that of self-driving cars, which require AI-enabled systems like computer vision to function. A look at the original timeline offered by Elon Musk, the CEO of the electric car company Tesla, shows that the company is presently lagging behind in its quest to upgrade from its system's limited assisted-driving capabilities to “full self-driving” cars. So far, the company has only managed to release a Full Self-Driving option that is available to a limited number of expert drivers while it remains in beta testing.
General AI
This is the second type of artificial intelligence, and it is completely different from the first. Also known as artificial general intelligence, this is a type of AI system with the kind of adaptable intellect found in humans. This type of intelligence is flexible and can learn to execute a wide range of tasks, from reasoning about several topics in line with its accumulated experience to haircutting and even creating spreadsheets. One easy way to understand this type of AI is to think about the movies. Most of the AI we have seen in movies, such as “Skynet” in The Terminator or “HAL” in 2001: A Space Odyssey, are excellent examples of general AI. However, you will also agree with me that such systems do not yet exist. There is strong debate among AI experts on how soon we can create this kind of AI.

Things Artificial General Intelligence (AGI) Can Do
When it comes to AGI, autonomous machines would eventually have the ability to take general intelligent action. They could act like humans, generalizing and engaging in abstract learning across various cognitive functions. Here are some other things an AGI system could do:
Possess strong associative memory and be capable of decision making.
Handle or solve multifaceted problems.
Create concepts.
Learn through experience or reading.
Perceive the world as well as itself.
Anticipate and react to the unexpected in complex environments.
Invent and be creative.
Generally, experts agree that while artificial narrow intelligence (ANI) will lead to new opportunities, risks and challenges, these consequences will be significantly amplified with the emergence of AGI. Based on the results of a 2012/13 survey carried out among four groups of experts by Vincent C. Müller and Nick Bostrom, there is a 50 percent chance that we will develop Artificial General Intelligence (AGI) between 2040 and 2050, and this figure rises to 90 percent by 2075.
Evolutionary Computation
There is one more area of AI research that we have not talked about, and that is evolutionary computation. This type of AI leverages Darwin's theory of natural selection: “genetic algorithms” go through random mutations and combinations between generations to evolve toward an optimal solution to a particular problem (a toy sketch of one is given below). Interestingly, some experts have leveraged this approach to develop AI models – effectively building AI with AI. The term neuro-evolution refers to the process of optimizing neural networks by using evolutionary algorithms. This approach may eventually make a vital contribution to the design of highly efficient AI systems, even as we witness an increase in the use of intelligent systems and as the supply of data scientists fails to meet the existing demand. One excellent example of this technique was displayed by Uber AI Labs. The company came out with papers showing the use of genetic algorithms to train deep neural networks for reinforcement learning problems. We also have several expert systems, where developers program computers with certain rules that enable them to make a variety of decisions based on a wide range of inputs. This enables such machines to mimic human behavior in certain domains; an excellent example is an autopilot system that operates a plane.
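To make the idea concrete, here is a toy genetic algorithm sketched in Python. Everything in it – the all-ones bit-string target, the population size, the mutation rate – is an assumption chosen purely for illustration; a real application would replace the fitness function with a measure of the actual problem being solved:

```python
# A toy genetic algorithm sketch: it evolves random bit strings toward an
# all-ones target through selection, crossover and mutation.
import random

TARGET_LEN = 20   # length of each individual (bit string)
POP_SIZE = 30     # number of individuals per generation
GENERATIONS = 50

def fitness(individual):
    return sum(individual)  # number of 1s; higher is better

def mutate(individual, rate=0.05):
    # Flip each bit with a small probability (random mutation).
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def crossover(a, b):
    # Combine genes from two parents at a random cut point.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Reproduction: crossover between random parents, plus mutation.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    if fitness(population[0]) == TARGET_LEN:
        break  # optimal solution found

print(f"Best individual after {gen + 1} generations: {population[0]}")
```

Over successive generations, fitter individuals dominate the population, which is exactly the "transform into an optimal solution" behavior described above.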
AI RESEARCH: FACTORS PROPELLING THE RESURGENCE OF AI
It is interesting to note that AI has been witnessing a significant resurgence, and some experts in the field who witnessed the emergence of the AI industry back in the 1980s are getting excited about this renewed interest. AI has undoubtedly experienced several cycles of promise resulting in investments that failed to deliver. The under-delivery of previous investments in AI research was largely due to high expectations, and the failure to meet those expectations led to a reduction in funding for the field, which also translated into a reduction in the growth of the AI space. But recent trends reveal that the attention AI is getting now is far more remarkable than before. In fact, in the latter half of 2014 alone, over half a billion dollars was pumped into the AI industry. So, the question is, why the sudden renewed interest in AI? What are the factors driving this increasing activity in the industry? I earlier mentioned some factors responsible for the growth in the space. We are now seeing several companies like IBM, Apple, Google, Microsoft, Amazon and Facebook all investing large sums in research to create AI that companies can easily access. Let's go through some key factors that have played a role in accelerating the rate of innovation in this space.

Advanced Computing Architecture
The availability of the right infrastructure, with appropriate speed and scale, has given rise to bolder algorithms that can handle more challenging issues. We now have faster hardware backed by multiple processors. Cloud services are also available, and what researchers and developers previously did in specialized labs with access to specialized computers can now easily be deployed to the cloud at a significantly lower cost and with less stress. We can conclude that the proliferation of start-ups in the AI industry was facilitated by democratized access to the essential hardware platforms on which they can run AI. Of course, even the fastest CPUs are not capable of handling machine learning workloads on their own. To enable training and inference of machine learning (ML) models, CPUs need to be complemented with a new class of chips. This also explains the steady increase in demand for Graphics Processing Units (GPUs). They were formerly designed for high-end gaming PCs and workstations, but GPUs are now in high demand since they accelerate the ML training process.

Access to Large Historical Data Sets
Previously, it was quite expensive to store and access information, but that changed with the establishment of cloud-based firms. Now, authorities, organizations and academia are unlocking the information once confined to the old magnetic discs and cassette cartridges of the 1980s. In a bid to train ML models, data scientists require access to massive historical datasets, as this enables them to make predictions with improved precision. You should understand that one factor that directly determines the efficacy of any given machine learning model is the size and quality of the available dataset. Researchers require massive datasets covering a variety of data issues to enable them to deal with complicated challenges such as detecting cancer, treating AIDS and other complex problems we face. Now we see healthcare institutions, government agencies and other institutions creating unstructured information that researchers can easily access, because information storage and retrieval is becoming ever more economical. Researchers from different fields can now gain access to rich datasets, and this is one outstanding factor that has tremendously influenced AI research. We cannot simply conclude that data was a key player in the resurgence of AI; rather, it is large amounts of data with sufficient semantics added to render it useful enough to make a significant impact. Did you know that researchers only gained access to vast stores of lightly curated data in the last 5-10 years? Have you ever wondered how IBM Watson could have won Jeopardy without access to the content found in Wikipedia or some information repository rich in bigram and trigram key phrases? How could deep learning identify cats or recognize several objects within images without access to vast amounts of data – a training set of labeled images? Big data is indeed one key factor that changed the game for AI, since it offered researchers sufficient semantics to enable them not only to detect complex patterns but also to meaningfully generalize such patterns by using labels. The combination of large amounts of data and the increased performance of computer components will boost the next generation of AI activities. Information scientists and researchers are increasingly empowered to innovate at an amazing pace in the AI industry thanks to their access to rich datasets along with new and enhanced computing architectures.

Deep Neural Networks
One of the factors that has played the most remarkable role in the resurgence of AI is progress in deep learning and artificial neural networks. Artificial Neural Networks (ANNs) have made it possible to replace older machine learning models with far more accurate ones.
New Trends in AI Research There have also been rapid changes in the nature of research that fuels the AI revolution. Of course, as I earlier mentioned, the maturation of machine learning is at the top of these factors as it was partly stimulated by the booming digital economy that not only provides large amounts of data but also leverages data. The increase in the development of cloud computing resources as well as consumer demand for access to various services like navigation support and speech recognition are all other factors influencing AI research. The remarkable increase in the performance of information processing algorithms also combines with the progress so far recorded in hardware technology for object recognition, sensing and perception. Other factors that have tremendously stimulated advances in AI research include the emergence of new platforms and markets for products that are data-driven as well as the economic benefits associated with discovering new markets and products. As AI gradually becomes a central force in our lives, we are now seeing a shift in AI toward creating intelligent systems that are capable of effectively collaborating with people and are even more human-conscious. This includes the creative strategies for developing scalable and interactive ways for us to educate robots. Let's take a look at some trends that are behind the present “hot” areas of AI research into application
areas and fundamental methods:
Deep Learning: Although we shall look at deep learning in chapter three, it is crucial to mention that this class of learning procedures has enabled video labeling, object recognition in images as well as activity recognition. This aspect of AI is making an outstanding impact on areas like speech, audio and natural language processing.
Robotics: This field is focused mainly on ways to train a robot to properly interact (in both predictive and generalizable ways) with the world around it. It is also focused on facilitating the manipulation of objects in interactive settings and human interaction.
Large-Scale Machine Learning: This is focused on designing new learning algorithms, in addition to enhancing available ones, to enable them to function with very large datasets.
Computer Vision: You can see this as the leading form of machine perception and the sub-area of AI that has been transformed the most by the rise of deep learning. Now, computers can execute some vision tasks even better than humans. When it comes to computer vision, the focus of most existing research is on automatic image and video captioning.
Internet of Things (IoT): Research in this field is focused on the idea of interconnecting several objects such as cameras, appliances, cars, etc., to enable them to acquire and share large volumes of sensory information, mainly for intelligent purposes.

Is AI a Black Box?
While reading about AI and its features or benefits, one term you are likely to come across is “black box.” It simply implies that what is happening is so complicated that most individuals may not grasp how the system gets its results. The phrase describes a system that takes a set of inputs from any available source and produces an output. The output may be a Boolean value (yes or no), a probability distribution or a projected price trajectory. Well, some people consider AI models black boxes, but not really because they are complicated. So, why are they regarded as black boxes? A possible explanation for this belief is that they are not easily visualized or interpreted, because they are extremely recursive, just like deep learning models. It could also be due to the fact that they exist in a large number of dimensions, just like support-vector machines (SVMs). One more reason why they are seen as black boxes is that their input signals are unknown. Well, in a bid to engender accountability and further promote the auditing of decisions, efforts are now being made to deal with this issue, and this is the reason for the new move toward explainable AI (XAI) – AI that non-experts can easily understand.
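The black-box notion above can be illustrated with a few lines of code. The hidden rule here is invented purely for illustration; the point is that an outside observer can only probe inputs and outputs, never the internals:

```python
# A "black box" in the sense described above: we can query the system
# with inputs and observe outputs, but its internals are hidden from us.
def black_box(x):
    # Imagine this body is an opaque trained model we cannot inspect.
    return 0.73 * x * x - 1.2 * x + 4.0

# All an outside observer can do is probe it and study input/output pairs.
for x in [0, 1, 2, 3]:
    print(x, "->", black_box(x))
```

Explainable AI (XAI) is, in essence, the effort to open such boxes enough for non-experts to understand why a given input produced a given output.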
Chapter 2: Machine Learning & Recent Developments in AI
Key Takeaway
Machine learning is simply a subfield of artificial intelligence.
Types of machine learning include supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning.
The increased volume of data and parallel-processing power are two key factors behind the achievements of machine learning.
Some common machine learning use cases are email filtering, Google Search and Google Translate.
Services for machine learning are already being provided by top cloud platforms like Google Cloud, Amazon Web Services and Microsoft Azure.
WHAT IS MACHINE LEARNING?
One of the most promising subfields of artificial intelligence is machine learning. It involves the process by which systems “learn” through statistics, trial and error, and data, enabling them to optimize processes and innovate much faster. Machine learning empowers computers to develop human-like capabilities that make it possible for them to tackle various challenges facing the world, like climate change, cancer, HIV/AIDS and several others. So, in what ways is machine learning empowering computer systems with human-like capabilities? The process of machine learning is automated, and throughout the learning process it is fine-tuned based on the machine's experiences. The machines are fed high-quality data, and machine learning models are developed with different algorithms, which we shall look at shortly. The type of algorithm used depends on the available data as well as the kind of activity being automated. One question that comes to mind at this point is: how exactly does machine learning differ from traditional programming? The answer is simple. In traditional programming, we feed input data and a well-developed, tested program into a machine to generate an output. In machine learning, input and output data are both fed into the machine during the learning phase, and the machine works out a program for itself. Take a look at the illustration below.
Figure 9: Machine learning process
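To make this contrast concrete, here is a minimal sketch; the data and the doubling rule are invented for illustration. The traditional program encodes the rule by hand, while the learning version infers the same rule from input/output examples:

```python
# Traditional programming: the rule (multiply by 2) is written by hand.
def program(x):
    return 2 * x

# Machine learning: only input/output pairs are given; the machine
# estimates the rule itself (here, a least-squares fit of y = w * x).
inputs = [1.0, 2.0, 3.0, 4.0]
outputs = [2.1, 3.9, 6.2, 7.8]  # noisy observations of y = 2x

w = sum(x * y for x, y in zip(inputs, outputs)) / sum(x * x for x in inputs)

print(program(5.0))  # 10.0, from the hand-written rule
print(w * 5.0)       # close to 10.0, from the learned "program"
```

Real machine learning models estimate millions of such parameters rather than one, but the inversion is the same: data plus outputs in, program out.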
Generally, computer programs depend on code to tell them what to do or what information to store. This is also regarded as “explicit knowledge,” which encompasses things that can easily be recorded or written down, such as videos, manuals or textbooks. Presently, courtesy of machine learning, computers are acquiring tacit knowledge – knowledge gained from context and personal experience. It is hard to transfer this kind of knowledge from one individual to another through verbal communication or text. An excellent example of tacit knowledge is facial recognition. Have you noticed that when we recognize people's faces, it is not always easy to explain how or why we recognize them so accurately? What happens is that when we see a person, we draw on our personal knowledge database to tacitly reach conclusions and recognize the individual by their face. Have you ever tried explaining how to ride a bike to a friend or family member? You will agree with me that it is usually easier to just show them how to ride a bike than to explain how it is done. This is also what machine learning is all about. It is no longer compulsory for computers to depend on billions of lines of code before they can execute calculations. With machine learning, they now have the power of tacit knowledge, and this enables them to easily make connections, identify patterns and leverage what they have already learned in making predictions. The use of tacit knowledge by machine learning has undoubtedly made it extremely useful for virtually all industries – government, fintech, weather, healthcare, etc. We will look at how AI and machine learning are being used in different industries in section II.

Deep Learning
One subfield of machine learning that is increasingly gaining traction is deep learning. Deep learning is getting more useful because of its unique ability to accurately extract features from data. To extract higher-level features from raw data, deep learning leverages Artificial Neural Networks (ANNs). More on deep learning later in this chapter.
Common Types of Machine Learning
Figure 10: Types of machine learning
Like all AI systems, machine learning requires algorithms to establish parameters, actions and end values. The purpose of these algorithms is to serve as a guide for machine-learning-enabled programs as they go through several options and evaluate various factors. Computers use hundreds of algorithms, chosen based on factors such as data diversity and data size. I will not go through all the available machine learning algorithms, because that is beyond the scope of this book, but I will briefly discuss the most common types.

Supervised Learning
These algorithms help create mathematical models of data containing both input and output information. Supervised learning algorithms work with what is known as training data, so called because the programs know both the beginning and the end results of the data. What the algorithm needs to do is determine the most efficient way to achieve the result. To enable machine learning programs to predict outputs from new sets of inputs, they are continually provided with these sets of supervised training examples. Two types of supervised learning algorithms that are more popular than the others are classification and regression algorithms. The most common form of regression analysis is linear regression, and this algorithm is used to discover and predict relationships between an outcome variable and at least one independent variable. It is also used in training to enhance the ability of systems to predict and forecast. The second most popular type in this family is classification algorithms, whose purpose is to train systems to identify objects and place them in the right sub-category. An excellent example is the use of machine learning by email filters to sort incoming email into spam, promotions and primary inboxes. Systems are usually exposed to a wide range of labeled data; this could be images of handwritten figures annotated to signify the number they correspond to. When a supervised-learning system is provided with sufficient examples, it learns to identify the clusters of shapes and pixels linked with each number and can finally identify handwritten examples, reliably distinguishing between a 9 and a 4, or an 8 and a 6. It is important to know that training these systems often demands vast amounts of labeled data; some systems need exposure to millions of examples before they can master a task.

Unsupervised Learning Algorithms
Unlike the first category of machine learning, unsupervised learning requires algorithms to identify patterns in data on their own, attempting to discover similarities that separate the data into categories. Airbnb clustering together homes available to rent by neighborhood is a good example of unsupervised learning, as is Google News categorizing stories on similar topics each day. These learning algorithms are not designed to separate specific types of data. Instead, they search for data that can be grouped by similarity, or for anomalies that stand out.

Semi-Supervised Learning
The rise of semi-supervised learning may eventually lower the importance of the vast sets of labeled data required for training machine learning systems. The name already explains what it means: training that is both supervised and unsupervised. It is a method that trains systems on a large amount of unlabeled data together with a small amount of labeled data. A machine learning model is partially trained with the labeled data, and the partially trained model is, in turn, used to label the unlabeled data. This process is often referred to as pseudo-labeling. The resulting mix of pseudo-labeled and labeled data is then used to train the model. Recently, the viability of semi-supervised learning has been enhanced by Generative Adversarial Networks (GANs) – machine learning systems that can generate entirely new data from labeled data, which can assist in training a machine learning model. Once semi-supervised learning becomes as effective as supervised learning, access to large labeled datasets will no longer be as important for successfully training machine learning systems as access to vast amounts of computing power.

Reinforcement Learning
An easy way to understand this type of machine learning is to consider the process of learning an old-school computer game for the very first time, when the player knows neither the controls nor the rules. Although they start as a complete novice, with time their performance improves as they understand the relationship between the buttons they press, the actions that take place on the screen and the points they score. Google DeepMind's Deep Q-network is perhaps the best example of reinforcement learning; interestingly, the system has already defeated humans in different vintage video games. During the training process, the system is supplied with the pixels from each game and then determines various information regarding the state of the game, including the distance between objects on the screen. Next, the system examines how the state of the game and the actions it performs affect the score it gets. Over the course of playing the game repeatedly, the system builds a model learned from many cycles that will maximize the score (a toy tabular version of this idea is sketched below).
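The sketch below is a toy version of that learning loop, under an invented one-dimensional environment: states 0 to 4 sit on a line, and reaching state 4 yields a reward. A Deep Q-network replaces the lookup table used here with a neural network, but the update rule is the same idea:

```python
# A toy tabular Q-learning sketch. The environment is invented for
# illustration: an agent on a line of 5 states earns a reward of 1
# for reaching the rightmost state.
import random

N_STATES, ACTIONS = 5, (-1, +1)          # states 0..4; move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy should point right (+1) in every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

Just like the novice game player above, the agent starts with no idea what the "buttons" do and gradually links actions to score through repeated play.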
FACTORS BEHIND THE SUCCESS OF MACHINE LEARNING
First, you should know that machine learning is not a new technique; rather, interest in the field has increased dramatically in recent years. Some of the factors responsible for this resurgence include a series of breakthroughs in which deep learning set new accuracy records in fields such as computer vision and speech and language recognition. Experts agree that two key factors were mainly responsible for the successes achieved by machine learning:
The increased volume of data – video, speech, text and images – available for training machine learning systems.
The second, and perhaps most important, factor is the emergence of vast amounts of parallel-processing power, made possible by modern graphics processing units (GPUs), as I mentioned in chapter one. These GPUs can be assembled into machine-learning powerhouses.
Anyone can train machine learning models today on such GPU clusters, as long as they have an internet connection that enables them to use cloud services such as those of Microsoft, Google and Amazon. One of the new areas that has also emerged as a result of the increased popularity of machine learning is the provision of specialized hardware designed specifically for running and training machine learning models. Google's Tensor Processing Unit (TPU) is a great example of such a custom chip: it accelerates both the rate at which machine learning models built with its software library can infer information from data and the rate at which the models can be trained. These chips are increasingly becoming useful for training machine learning models. They have been used to train things like:
Google Brain
Google DeepMind
Models that underpin image recognition in Google Photos
Google Translate
They are also used to train services that permit other individuals to create machine learning models with Google's TensorFlow Research Cloud. The third generation of these chips was unveiled at Google's I/O conference back in 2018. They have already been arranged into machine-learning powerhouses known as pods, which are capable of executing over a hundred thousand trillion floating-point operations per second (100 petaflops). What is even more interesting is the fact that Google's fourth-generation TPUs were 2.7 times faster (in MLPerf) than the generation that preceded them. Just to clear things up: MLPerf is a benchmark that measures how fast a system can execute inference using a trained machine learning model. Google has also been able to significantly enhance the services built on its machine learning models courtesy of the constant upgrades of its TPUs. The upgrades have helped cut the time required to train the models used in Google Translate by half. Presently, machine learning tasks are easily executed even on consumer-grade computers and smartphones instead of in cloud datacenters, and this is because of two factors: first, hardware is becoming increasingly specialized; second, machine learning software frameworks have been refined.
Machine Learning Use Cases
Figure 11: Common use cases
As you go through this book, you will discover that machine learning is involved in most AI-related systems – from driverless cars to web services, and even in smartphones. I will be looking at machine learning use cases in web services and later in the next
section, we shall go through other AI use cases. On a daily basis, we interact with several applications, but most of us never stop to consider what makes such programs function optimally. Well, you will be surprised to know that machine learning plays a vital role in the smooth functioning of most applications we use online. Let's examine a few of the applications that most of us are familiar with.

Email Filtering
Most email services now leverage machine learning to help organize our inboxes and ensure each mail is sorted based on its content. In the early days of email services like Yahoo Mail and Gmail, there were not as many options as we have now. Most likely, your inbox was full of unread emails, a mixture of useful ones and those you could have ignored, and it was hard to separate them so you could focus on each category. Machine learning made our lives much easier by helping to filter our emails by subject. Gmail is an excellent example, since most of us use it. Google's machine learning algorithm now functions seamlessly for end users after being trained on millions of emails. We have the option of selecting different categories beyond the default labels – primary, social and promotions. The algorithm can instantly identify and categorize our emails into each of the three default labels whenever we receive an email. If Gmail's machine learning algorithms believe that an email belongs in the “Primary” category, we instantly receive an alert. I have also noticed that my phone does not alert me when I receive emails in my promotions, spam and social inboxes. This is indeed one feature most of us are grateful for, and over the years these algorithms have continued to get smarter at making decisions about the emails we receive. Of course, one of the things that made this possible is the availability of data, which Google has in abundance (a toy sketch of such a filter appears at the end of this section).

Google Search
This, in my opinion, is the most popular use case of machine learning, because most of us have at some point used Google Search. We tend to take it for granted that Google will always provide the best results even before we open our computers. But have you ever considered how the popular search engine functions? The only people who understand perfectly how Google Search works are those who were involved in its design, but none of it would have been possible without machine learning. Perhaps one major factor that has helped Google design its search engine to serve us so well over the years is the amount of data at its disposal. I doubt any calculator could estimate the number of queries the search engine has received in the past two decades. That data is indeed a treasure for data scientists.

Google Translate
Some people have actually succeeded in learning foreign languages courtesy of this application. Researchers also find it extremely useful, especially when they come across a text in a foreign language that Google Translate can instantly translate. Interestingly, Google Translate understands each sentence a user sends and then converts it into whatever language they want by leveraging machine learning.
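Returning to the email-filtering use case above: the sketch below shows, on invented toy data, how a classifier of the kind described can be trained to separate spam from primary mail. It uses the widely available scikit-learn library; Gmail's actual pipeline is far more sophisticated and is not public.

```python
# Minimal spam-vs-primary classifier sketch using scikit-learn.
# The five example emails are invented; a real filter trains on millions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",           # spam
    "claim your free lottery money",  # spam
    "meeting agenda for tomorrow",    # primary
    "project update and next steps",  # primary
    "lunch on thursday?",             # primary
]
labels = ["spam", "spam", "primary", "primary", "primary"]

# Turn each email into word-count features, then fit a Naive Bayes model.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(features, labels)

new_email = ["free money prize"]
print(model.predict(vectorizer.transform(new_email)))  # likely ['spam']
```

This is the supervised classification pattern from earlier in the chapter: labeled examples in, a model that sorts new inputs into categories out.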
Available Services for Machine Learning
There are many machine learning services in existence now. In fact, the major cloud platforms – Google Cloud Platform, Microsoft Azure, and Amazon Web Services – all give users access to whatever hardware they require to train and run machine learning models. Cloud Platform users can also test Google's Tensor Processing Units, chips designed specifically for training and running machine learning models. Among the things users enjoy in cloud-based infrastructure are, first of all, data stores that hold the large amounts of data required for training; services that prepare the data for analysis; and visualization tools that present the results properly.

Some recent services have taken things a step further by streamlining the creation of custom machine learning models. Google offers Cloud AutoML, a service that automates the entire process of creating AI models. It is a drag-and-drop service that can build custom image-recognition models without requiring the user to have machine learning skills, much like Microsoft's Azure Machine Learning Studio. Amazon is not left out: its AWS SageMaker service is likewise designed to streamline the process of training machine learning models.

Data scientists can make use of Google Cloud's AI Platform, a managed machine learning service that provides the right tools for training, deploying and exporting custom models built on the open neural network framework Keras or on Google's open-sourced TensorFlow framework. The best option for database admins who have no background in data science is Google's BigQuery ML, a beta service that lets admins call trained machine learning models with SQL commands. This allows predictions to be made inside the database – a more straightforward alternative to exporting data to a separate machine learning and analytics environment.

Organizations that are not interested in building their own machine learning models are not left out either, as these cloud platforms provide on-demand, AI-powered services such as language, vision and voice recognition. In addition to its general on-demand services, IBM is making efforts to sell sector-specific AI services ranging from retail to healthcare, grouping them all under the IBM Watson umbrella. NVIDIA, in September 2018, came out with a combined software and hardware platform for datacenters that accelerates the rate at which trained machine learning models can execute video, image and voice recognition, alongside other machine learning services.
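As a rough illustration of the BigQuery ML workflow just described, the sketch below creates and queries a model from Python using the official google-cloud-bigquery client. The project, dataset and table names are placeholders I invented; the CREATE MODEL and ML.PREDICT statements follow BigQuery ML's documented SQL form.

```python
# Sketch: training and using a BigQuery ML model with plain SQL,
# driven from Python via the google-cloud-bigquery client.
# Project/dataset/table names here are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project id

# Train a logistic-regression model directly inside the warehouse.
client.query("""
    CREATE OR REPLACE MODEL `my_dataset.churn_model`
    OPTIONS (model_type = 'logistic_reg',
             input_label_cols = ['churned']) AS
    SELECT age, tenure_months, monthly_spend, churned
    FROM `my_dataset.customers`
""").result()

# Predict without exporting any data out of the database.
rows = client.query("""
    SELECT *
    FROM ML.PREDICT(MODEL `my_dataset.churn_model`,
                    (SELECT age, tenure_months, monthly_spend
                     FROM `my_dataset.new_customers`))
""").result()

for row in rows:
    print(dict(row))
```

The appeal for database admins is exactly what the text describes: the data never leaves the warehouse, and the entire train-and-predict loop is ordinary SQL.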
Chapter 3: Advances in Machine Learning & Deep Learning
Key Takeaway
- The AI space lacks sufficient machine learning experts and data scientists.
- Developers are creating new programs that use machine learning to develop better machine learning.
- Deep learning is a subset of machine learning, essentially neural networks that attempt to mimic how the human brain behaves.
- Deep learning algorithms are capable of ingesting and processing unstructured data such as images and text.
- Deep learning applications are found in a variety of real-world applications that we use daily.
Our discussion of machine learning and AI would be incomplete without talking about algorithms. You must have come across the term algorithm in this paper. So, what are they? You can see them as sets of well-defined instructions that tell a computer how to manipulate, interact with and transform data. An algorithm can be as complex as detecting a person's face in a picture or as simple as adding a column of numbers. Before an algorithm can become operational, it must be written as a program that a computer can understand. Languages used for writing machine learning algorithms include Java, Python and R, all of which have libraries supporting a range of machine learning algorithms. There is, however, a significant difference between machine learning algorithms and other algorithms: a conventional algorithm follows rules a programmer spells out explicitly, while a machine learning algorithm derives its behavior from the data it is trained on.
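To make that difference concrete, here is a minimal sketch contrasting the two: a hand-coded rule versus a model that learns essentially the same relationship from examples. The data and the scikit-learn model choice are mine for illustration.

```python
# A conventional algorithm: the rule is written out explicitly.
def fahrenheit_to_celsius(f: float) -> float:
    return (f - 32) * 5.0 / 9.0

# A machine learning algorithm: the rule is learned from example data.
from sklearn.linear_model import LinearRegression

fahrenheit = [[32], [50], [68], [86], [104]]   # inputs
celsius = [0.0, 10.0, 20.0, 30.0, 40.0]        # observed outputs

model = LinearRegression().fit(fahrenheit, celsius)
print(model.predict([[212]]))   # ~100.0, inferred rather than hand-coded
```

The first function will never do anything its author did not anticipate; the second can generalize to inputs it has never seen, for better or worse, because its behavior comes from the training data.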
Machine Learning Algorithms Designed for Writing ML Algorithms
The AI space currently lacks a sufficient number of machine learning experts and data scientists. However, developers are coming up with new programs that use machine learning to develop better machine learning. This is commonly known as automated machine learning (AutoML). Also described as "democratizing machine learning," it enables organizations to tackle complicated business challenges with machine learning. The level of automation in the process enables individuals who lack the technical expertise of a data scientist to use machine learning techniques and models. The ideal machine learning pipeline involves the following steps:
Figure 12: Machine learning pipeline
While all the operations above demand a reasonable level of expertise and a fair amount of time from an experienced practitioner, AutoML can make things easier. It automates the entire process and cuts down on the time required to complete the job. Besides being easier, it is also faster, though not necessarily better. But since the system is automated, it helps prevent human errors and, in some cases, can even produce models that perform better than those built by humans.
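The sketch below shows the flavor of what AutoML automates, using scikit-learn's built-in grid search over a small pipeline; real AutoML services search far larger spaces of models, features and hyperparameters. The dataset and search grid here are my own illustrative choices.

```python
# A hand-rolled miniature of what AutoML automates: searching over
# preprocessing and model hyperparameters instead of picking them by hand.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", SVC()),
])

# The search space a human expert would otherwise explore manually.
param_grid = {
    "clf__C": [0.1, 1, 10],
    "clf__kernel": ["linear", "rbf"],
}

search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X, y)

print(search.best_params_, round(search.best_score_, 3))
```

An AutoML service extends this same idea to feature engineering, model families and deployment, which is why someone without a data science background can still end up with a usable model.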
THE EMERGENCE OF DEEP LEARNING (NEURAL NETWORKS)
As promised in chapter one, let's dive deeper into deep learning. This is a subset of machine learning that is essentially a neural network comprising three or more layers. Such neural networks try to simulate the human brain's behavior (though they remain far from matching its ability) so they can learn from large volumes of data. A single-layer neural network can make approximate predictions, but its accuracy can be refined and optimized with the addition of hidden layers. Deep learning is used in a good number of AI applications and services that enhance automation, executing analytical and physical tasks without human assistance. You can find deep learning in many products and services you use – voice-enabled TV remotes, digital assistants, credit card fraud detection, etc. It is also used in emerging technologies like self-driving cars.
Deep Learning Vs. Machine Learning
Earlier, I mentioned that deep learning is a subset of machine learning. But do they differ in any way? One way deep learning differs from the rest of machine learning is in the kind of data it requires and in how it learns. Classical machine learning makes predictions by leveraging structured, labeled data: specific features are defined from the input data and arranged into tables for the model. Machine learning can still make use of unstructured data, but that data must first pass through pre-processing that organizes it into a structured format. Deep learning eliminates much of this pre-processing. Deep learning algorithms can ingest and process unstructured data such as images and text directly, and they can automate feature extraction, which lowers their dependency on human experts – a contrast illustrated in the sketch below. Both machine learning and deep learning are capable of the various types of learning, too – supervised, unsupervised and reinforcement learning.
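As a hedged illustration of that contrast, the sketch below feeds raw image pixels straight into a small convolutional network, letting the layers learn their own features instead of relying on hand-engineered ones. It uses the Keras API bundled with TensorFlow and the MNIST digits dataset; the tiny architecture is an arbitrary choice for illustration, not a recommendation.

```python
# Deep learning on unstructured data: raw pixels in, no manual
# feature engineering. A deliberately tiny CNN for illustration.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add channel dim, scale to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128)
```

Notice there is no feature table anywhere: the convolutional layers discover the useful patterns in the pixels on their own, which is exactly the dependency on human experts that deep learning reduces.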
How Deep Learning Works
Primarily, deep learning neural networks try to mimic our brain through a combination of data inputs, weights and biases. Combined, these elements work to accurately recognize, classify and describe objects found in the data. Deep learning neural networks comprise several layers of interconnected nodes, each layer building on the previous one to refine and optimize the categorization or prediction. This progression of computations through the network is known as forward propagation. The input and output layers of a deep neural network are called its visible layers: the model ingests the data for processing at the input layer, and the final classification or prediction is made at the output layer. Backpropagation is the complementary process: it calculates the error in the predictions using algorithms such as gradient descent and then adjusts the weights and biases of the function by moving backward through the layers, thereby training the model (a worked numeric sketch follows after the list of applications below). Forward propagation and backpropagation together describe only the simplest kind of deep neural network; deep learning models can be far more complex, with a variety of network types focused on specific datasets or problems. Other types of neural networks include:
- Recurrent neural networks (RNNs)
- Convolutional neural networks (CNNs)
Generally, deep learning applications are found in a variety of real-world products that we use daily, though users are often unaware of the complex data processing going on in the background because it is well integrated into those products and services. Some common applications are in:
- Law enforcement, where it helps in analyzing and learning from transactional data so that agents can detect patterns that signal possible criminal or fraudulent activity.
- Financial services, where predictive analytics is used by some institutions for algorithmic trading of stocks, fraud detection, assessing business risks before approving loans, and managing credit and investment portfolios for customers.
- Customer service, where many firms incorporate deep learning into their support processes.
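Here is the promised sketch: a two-layer network written with nothing but NumPy, showing one forward pass and one gradient-descent backpropagation step on made-up data. The dimensions, learning rate and toy task are all arbitrary illustrative choices.

```python
# Forward propagation and one backpropagation step, from scratch.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))            # 8 toy samples, 3 input features
y = rng.integers(0, 2, size=(8, 1))    # toy binary labels

# Weights and biases of a 3 -> 4 -> 1 network (one hidden layer).
W1, b1 = rng.normal(size=(3, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# --- Forward propagation: input layer -> hidden layer -> output layer ---
h = sigmoid(X @ W1 + b1)               # hidden activations
p = sigmoid(h @ W2 + b2)               # predicted probabilities

# --- Backpropagation: push the error backward and adjust the weights ---
lr = 0.1
dp = (p - y) / len(X)                  # cross-entropy gradient at the output
dW2, db2 = h.T @ dp, dp.sum(axis=0, keepdims=True)
dh = dp @ W2.T * h * (1 - h)           # chain rule through the sigmoid
dW1, db1 = X.T @ dh, dh.sum(axis=0, keepdims=True)

W2 -= lr * dW2; b2 -= lr * db2         # gradient descent update
W1 -= lr * dW1; b1 -= lr * db1
```

Repeating the forward pass and the backward update many times over real data is, at its core, all that "training" a deep network means; frameworks like TensorFlow simply do this at enormous scale.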
SECTION II: STRIKING AI USE CASES IN DIFFERENT INDUSTRIES
Chapter 4: AI Optimized Healthcare: No More Science Fiction
Key Takeaway
- The United States still maintains the top spot on the list of countries with firms that have the highest VC funding related to the use of AI in healthcare.
- AI in healthcare has to do with the use of complex algorithms designed to execute healthcare-related tasks without human control.
- The three health conditions often believed to provide big opportunities for new healthcare trends like precision medicine and pop health are cancer, diabetes and heart disease.
- IBM's Watson for Health is proving extremely helpful, assisting healthcare firms to apply cognitive technology to access large amounts of health data and power diagnosis.
- AI can also help in pregnancy management by monitoring both mother and fetus.
Presently, governments, healthcare decision-makers, innovators, investors and even the European Union are all interested in AI. This also explains why the number of governments setting out aspirations for AI in healthcare is steadily increasing. Countries such as the United States, Finland, Israel, the United Kingdom, China and Germany are all making serious investments in AI-related research.

The private sector is not left out, as funding for healthcare-related AI research has increased over the years. Back in 2016, Frost & Sullivan estimated that the AI healthcare market would grow from $0.66 billion in 2014 to about $6.7 billion in 2021 – more than ten times what the industry recorded in 2014 – and the industry did indeed witness this level of growth. Another study, from 2019, estimates that the sector will grow from about $1.3 billion in 2018 to $13 billion in 2025, a compound annual growth rate of about 41.7 percent for the AI healthcare market. Undoubtedly, there has been a dramatic increase in investment in AI healthcare, and experts expect this growth to continue. In 2019, Rock Health, a digital health technology venture fund, disclosed that almost $2 billion was invested in AI healthcare firms. Within the first quarter of 2020, total investments in the sector had already reached $635 million. Experts believe one major factor behind this growth is the outbreak of COVID-19.

The United States still maintains the top spot on the list of countries with firms that have the highest VC funding related to the use of AI in healthcare, and you will also find the most completed AI-related healthcare research studies and trials there. However, when it comes to the fastest growth in this space, Asia – especially China – is undoubtedly the location. Top global conglomerates and tech players in Asia offer healthcare AI services designed specifically for consumers; in fact, Ping An's Good Doctor, a top online health management platform, has over 300 million users.

Artificial intelligence systems are not just learning to do the things humans do; they are gradually doing some of these things faster, more efficiently, and at a lower cost. Indeed, the potential for AI as well as robotics in healthcare is massive, and I will attempt to focus on the most promising ways AI will revolutionize the healthcare ecosystem. AI is expected to transform key aspects of healthcare, including clinical trials, pharmaceuticals, market growth, medical diagnostics, and several others. We shall be looking at four key aspects of healthcare where AI use cases are most useful.

Figure 13: AI Use cases in healthcare
AI in healthcare can significantly improve the quality of life and health outcomes for us in years to come. However, AI-based applications need to gain the trust of nurses, doctors and patients before they can make the expected impact. There is also the issue of commercial, regulatory and policy challenges that need to be resolved before we can see a massive adoption of AI in healthcare.
Just to be sure you understand what AI in healthcare means: it has to do with the use of complex algorithms designed to execute some healthcare-related tasks without human control. This class of algorithms is capable of reviewing, interpreting and also making suggestions on possible solutions to complex medical issues when doctors, scientists and researchers provide it with the necessary data. The truth is that we may not be able to explore all the different applications of AI in healthcare; experts believe that what we know presently merely scratches the surface of AI's capabilities in the field. And even though this is amazing, it is also frightening, because there could be potential risks associated with it. According to HIMSS Media research, AI and machine learning are expected to provide benefits for the treatment of several health conditions. However, the top chronic health conditions expected to enjoy the most benefits include:
Figure 14: Top chronic health conditions to benefit most from AI and machine learning
Generally, the three health conditions often believed to provide the biggest opportunities for new healthcare trends like precision medicine and pop health are cancer, diabetes and heart disease. This also explains why AI and machine learning can be particularly helpful to individuals with these conditions.
THE HEALTHCARE INDUSTRY AND AI: USE CASES
Figure 15: AI use cases in healthcare courtesy of Accenture Analysis
Please bear in mind that "value" in this case refers to the estimated potential annual benefits that each application will provide by 2026 (specifically for orthopedic surgery). As mentioned earlier, there are numerous AI use cases in the healthcare space, but I have organized them around processes typical of the sector. Although this may not be fully comprehensive, it will certainly provide insight into AI activities and use cases in healthcare. I must quickly add that things are improving fast, and some of what I mention here may have improved further by the time you read this.
1. Medical Imaging and Diagnostics
Presently, IBM's Watson for Health is proving extremely helpful, assisting healthcare firms to apply cognitive technology to access large amounts of health data and power diagnosis. Interestingly, IBM's Watson can review and store far more medical information – every symptom, case study and medical journal of treatment and response globally – and it can handle this task exponentially faster than any individual. In a bid to resolve real-world healthcare challenges, Google's DeepMind Health is partnering with researchers, patients and clinicians around the world. DeepMind Health combines machine learning and systems neuroscience to build extremely powerful algorithms into neural networks that imitate our brains.

AI and Early Detection
Did you know that some healthcare institutions are already using AI to detect certain diseases like cancer more accurately, especially in their early stages? Information from the American Cancer Society confirms that AI is being used to detect cancer in women: AI-enabled review and translation of mammograms is 30 times faster, and it provides 99 percent accuracy, further lowering the need for unnecessary biopsies. The growing range of consumer wearables is also proving helpful in healthcare. Recent trends show that data from such wearables can be combined with AI to monitor early-stage heart disease. This is an excellent service for doctors and caregivers because, with such a combination, they can monitor and detect potentially life-threatening episodes at more treatable stages, before things get worse. An excellent example of medical workers using an AI system for early diagnosis is Ezra, which applies AI to full-body MRI scans to assist clinicians in detecting cancer during its early phases. AI has also proven extremely useful in diagnosing cases of COVID-19 and identifying patients who need ventilator support; Huiying Medical, a Chinese company already using it, created an AI-enabled medical imaging solution that boasts 96 percent accuracy.

2. AI and Patient Care
Patient care is one aspect of the healthcare system that AI will transform, and there are several ways AI will play a vital role in providing care for patients.

Assisted or Automated Diagnosis and Prescription
Patients can actually self-diagnose or help doctors with diagnosis by using chatbots. For
instance, Babylon Health offers relevant health and triage information based on the symptoms a patient describes, though it clearly declares that it is not offering any form of diagnosis – a disclaimer that, among other things, reduces its legal liability. However, we are likely to see more chatbots providing diagnoses in the future, especially as their accuracy improves. AI can also help in pregnancy management by monitoring both mother and fetus, so mothers will not have to worry as much about their babies, and any health issue can be diagnosed early. AI also makes prescription auditing possible: AI audit systems can help to significantly lower the rate of prescription errors.

AI-Enabled Real-Time Prioritization and Triage
AI can carry out prescriptive analytics on available patient information to ensure accurate real-time case prioritization and triage; a minimal sketch of this idea appears at the end of this section.

3. Research & Development (Clinical Trials)
The existing process of research and development for clinical trials is undoubtedly an expensive one. Experts in the field estimate the cost of phase two clinical trials at between $7 million and $20 million, and the average cost of phase three exceeds $52 million. That is not all, as other factors further delay the release of a drug into the market; one such factor is the time required for regulatory approval. The average user of these medications may not be aware of the time and cost of research and development. The California Biomedical Research Association has revealed that it takes an average of 12 years for a drug to travel from the research lab to patients. Only five out of 5,000 drugs that begin preclinical testing ever reach the phase of human testing, and just one of those five will eventually receive approval for human use. This highlights why stakeholders in the healthcare industry are interested in what AI has to offer, especially in terms of cost savings. The pharma and biotech industries are becoming increasingly interested in refining how scientists research and develop new drugs with AI. As disclosed by Accenture, the US economy alone could save over $150 billion annually by leveraging clinical health AI applications, which also explains why over 60 percent of companies now make AI an essential aspect of their innovation strategy. By leveraging the latest advances in AI, it will be possible to streamline the drug discovery and repurposing processes, cutting down not just the cost of drugs but also the time required to develop new ones.
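As the promised sketch of the triage idea mentioned above, the snippet below ranks incoming patients by a model-predicted risk score so the highest-risk cases surface first. The features, training records and model are invented placeholders of mine; a real clinical system would need validated data, regulatory clearance and clinician oversight.

```python
# Illustrative only: ranking patients by predicted risk for triage.
# Features, data and labels are invented, not clinical guidance.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: [age, heart_rate, spo2, temp_c]
X_train = [[72, 110, 88, 39.1], [34, 80, 98, 36.8],
           [65, 95, 92, 38.2], [25, 72, 99, 36.5],
           [80, 120, 85, 39.5], [45, 85, 97, 37.0]]
y_train = [1, 0, 1, 0, 1, 0]   # 1 = deteriorated, 0 = stable

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

incoming = {
    "patient_a": [55, 90, 96, 37.2],
    "patient_b": [78, 115, 87, 39.0],
    "patient_c": [30, 75, 99, 36.6],
}

# Highest predicted risk first: the queue a triage nurse would review.
ranked = sorted(incoming.items(),
                key=lambda kv: model.predict_proba([kv[1]])[0][1],
                reverse=True)
for name, features in ranked:
    prob = model.predict_proba([features])[0][1]
    print(f"{name}: risk={prob:.2f}")
```

The key design point is that the model only orders the queue; the decision of what to do with each patient stays with the clinician.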
DIFFERENT PHASES OF AI SCALING IN HEALTHCARE
Figure 16: AI scaling in healthcare
Just as I mentioned before, we are merely scratching the surface when exploring the use cases of AI in healthcare. We are yet to gain a deeper understanding of AI's full potential in healthcare, especially when it comes to personalization. However, based on the results of surveys and interviews, there are roughly three major phases of scaling AI in healthcare, judging by existing solutions and ideas.

Phase I: Repetitive and Administrative Tasks
The first phase of solutions, which we are already seeing, is mainly focused on routine issues. These AI solutions target repetitive and mainly administrative tasks that often take up a large chunk of nurses' and doctors' time. These applications work toward optimizing healthcare operations and further increasing adoption. This stage of AI scaling includes imaging applications already in use in ophthalmology, radiology and pathology. I strongly believe we are in this first phase, as we are already witnessing such applications of AI, like some of the examples I provided earlier.

Phase II: Home-Based Care
During the second phase of AI scaling, we are likely to see more AI solutions that
will enable the transition from hospital-based care to home-based care. This includes virtual assistants, AI-powered alerting systems, and remote monitoring, even as patients become more involved in their care. We may see broader use of NLP solutions during this phase, both in home and hospital settings. Also during the second phase, there will be increased use of AI in a broader number of specialties like neurology, oncology or cardiology – areas where significant progress has already been made. Of course, this implies that AI will be increasingly embedded in clinical workflows via intensive engagement with professional providers. Using existing technologies in new contexts will also require well-designed and integrated solutions. Some of the factors that will propel the scaling up of AI deployment include capability building in organizations, the combination of technological advancements (such as connectivity, NLP, deep learning and several others) as well as cultural change.

Phase III: AI in Clinical Practice
During the third phase of scaling, we are likely to see an increase in AI solutions in clinical practice. This will be based on evidence obtained from clinical trials, with more focus placed on improved and scaled clinical decision-support (CDS) tools, in an industry that has learned several lessons from previous attempts to introduce similar tools into clinical practice and has adapted its skills, mindset and culture. During the third phase, AI will likely become an integral aspect of the entire healthcare value chain: it will play a key role in how we learn, investigate and eventually deliver care, and it will also be an integral part of how we improve the health of populations. In European healthcare, some of the crucial preconditions for AI to deliver at its best will be strong governance that continuously improves the quality of data, the integration of broader data sets across organizations, and an increased level of confidence from practitioners, patients, and organizations, not just in AI solutions themselves but also in their ability to deal with the associated risks.
Factors Needed for Introducing and Scaling AI in Healthcare
The journey so far has been encouraging, as we are seeing an increase in the use of AI in healthcare. We are gradually shifting to a world where AI can offer global, consistent and remarkable improvements in care, but the journey is quite challenging. I must quickly add here that AI is not the solution to all the challenges of healthcare systems. To ensure the successful integration of AI in healthcare, some issues need to be resolved and some things put in place. So, let's take a look at these factors.

Quality AI in Healthcare
One area all stakeholders believe is essential for enjoying the benefits of AI centers on issues such as ease of use, the robustness and completeness of the underlying data, the choice of use cases, and the quality and performance of the algorithms and AI design. Common barriers to quality and to increased adoption of AI solutions include limited iteration between healthcare and AI teams, the absence of multidisciplinary development, and the failure to involve healthcare staff at an early stage. One challenge AI solutions will need to overcome is establishing clinical evidence of effectiveness and quality. As startups focus on quickly scaling available solutions, it is crucial for healthcare practitioners to have evidence that such new ideas will not cause harm before they get anywhere near a patient. It is also crucial for practitioners to understand how an AI system works, the source of its underlying data, and any biases that could be embedded in a solution's algorithms. The key to scaling AI in healthcare is transparency and collaboration between practitioners and innovators. Designers of AI solutions must keep the end user at heart to ensure the solution fits seamlessly into decision-makers' workflows; once such solutions are in use, improvements can be made as users provide feedback. Another key driver of adoption is the ability of AI research to emphasize causal, explainable and ethical AI.

Training
The truth is that the majority of practitioners in the healthcare industry lack adequate digital skills. The adoption of AI systems in healthcare will require leaders who are knowledgeable in biomedical and data science. This also explains why schools are training students in the kind of science where biology, informatics and medicine meet, via joint degrees. Skills like AI, the fundamentals of genomics, digital literacy and machine learning need to become mainstream for every practitioner, supplemented by the development of a continuous-learning mindset and critical thinking skills. What about the existing workforce? They, too, need to be trained through ongoing learning, and practitioners need to be incentivized to keep learning.
Chapter 5: AI in Transportation
Key Takeaway
- Flying cars already exist, and their creation will change the way we commute, live and work in the coming decades, all thanks to AI.
- Google's self-driving cars have already recorded over 1,500,000 miles and have driven 300,000 miles without an accident.
- Cars with extensive self-driving capabilities still need human intervention, especially in complex situations like traffic jams.
- Autonomous vehicles are likely to lower the rate of accidents and deaths, since they are safer than human-driven vehicles.
- The adoption of self-driving cars will not be limited to personal transportation; we will eventually see flying vehicles, remotely controlled delivery vehicles and trucks.
One of the first domains where we may be asked to trust the safety and reliability of an AI system with a critical task is transportation. Recent trends and advancements in AI development are a strong indication that autonomous transportation will soon become commonplace. In fact, the first experience most of us have with physically embodied AI systems will influence our perception of artificial intelligence in tremendous ways. Experts believe that once the physical hardware becomes robust and safe enough, the introduction of AI systems in transport may come as a surprise to the public, even though the entire process will require time for developers to make the needed adjustments. Once cars become better drivers than humans, city-dwellers will begin to own fewer cars, spend their time differently and live further away from their workplaces. This will in turn create a completely new urban organization, and the expected changes will not be limited to cars.

Have you ever come across videos and images showing companies testing flying cars? Most of us watched the film Blade Runner, which takes place in an imagined Los Angeles of 2019. We saw flying cars moving along aerial highways back in 1982, and ever since the film's release, technology has advanced in ways Hollywood might never have predicted – murder drones, hashtag politics, selfie sticks, and several others. But hovercraft taxis appeared to be one far-off fantasy reserved for theme park rides and science-fiction novels. If you feel you must be dreaming when you hear that flying cars are real, think again! Flying cars already exist, and their creation will change the way we commute, live and work in the coming decades, all thanks to AI.

Several companies are now competing to develop flying motorbikes, jetpacks and personal air taxis. Aviation and auto companies, as well as venture capitalists (including the ambitious Uber Elevate from the rideshare company Uber), are all staking claims on an emerging industry that experts believe may be worth about $1.5 trillion by 2040. In fact, the German-based Volocopter has already marketed its VoloCity craft, the first commercially licensed, electrically powered air taxi. The company believes the vehicle will eventually run without a pilot, meaning it will be flown purely by an AI system. We have also seen SkyDrive, a Japanese startup, collaborate with Toyota to carry out a test flight of an all-electric air taxi, believed to be the smallest electric vehicle the world has ever seen that is capable of vertical take-off and landing; the company has succeeded in flying its SD-03 craft for several minutes around an airfield. While consumer demand is increasing, we still face the challenge of dealing with traffic. We shall discuss advances in flying cars later in this section.

But it is crucial to point out that a few technologies have enabled the increased adoption of AI in transportation. When we compare the present scale as well
as diversity of data regarding both population-level and personal transportation with what was available in 2000, the difference is outstanding. This level of data became available thanks to factors such as the adoption of smartphones, a significant reduction in costs, and remarkable improvements in the accuracy of a variety of sensors. We saw earlier the importance of big data in the development of AI; without such large amounts of data and connectivity, we might never have been able to develop applications that accurately predict traffic, provide real-time sensing, calculate routes, drive cars, or offer peer-to-peer ridesharing. To better understand the impact of AI on transportation, let's examine how its use in the field has progressed over the years.
The Era of Smarter Cars
Back in 2001, personal cars already came with GPS-based in-car navigation devices. Have you ever imagined what transportation would feel like without GPS? These devices are now a fundamental part of the existing transportation infrastructure. GPS does not only assist drivers as they navigate cities; it also provides large-scale information on transportation patterns to tech firms and cities. Another factor that further increased the amount of location data available is the widespread adoption of GPS-equipped smartphones, which also increased connectivity among individuals. Apart from GPS, newly manufactured vehicles now have a wide range of sensing capabilities: it has been estimated that the average vehicle in the United States carries seventy sensors, including moisture sensors, accelerometers, gyroscopes, and ambient light sensors. Sensors were an essential part of vehicles even before 2000, providing information about a vehicle's internal state such as its acceleration, wheel position and speed. Several functionalities are now rendered by combining data from real-time sensing with perception and decision-making; examples include airbag control, Anti-lock Braking Systems (ABS), Electronic Stability Control (ESC) and Traction Control Systems (TCS).
Figure 17: Firms with over 50 patent filings for AVs 2011-2016 – Bloomberg Business Week
The interesting thing about these functionalities is that they help drivers and, in some cases, completely take over specific activities to provide safety and comfort. Things have improved further, as new cars can now carry out certain tasks that previously only humans performed. For instance, some cars can steer themselves in stop-and-go traffic, perform adaptive cruise control on highways, park themselves, and even alert drivers to objects in their blind spots as they change lanes. All these new functionalities are made possible by AI systems. To develop pre-collision systems that let vehicles brake autonomously when a collision is detected, car manufacturers leveraged vision and radar technology; they also applied deep learning to improve automobiles' capacity to recognize sounds and detect objects in the environment.

Self-Driving Vehicles
Most of us were exposed to the idea of self-driving cars as far back as the 1930s, when science fiction writers dreamed of days when cars would drive without humans in charge. From the 1960s onward, bringing those wild dreams into reality was a great challenge for the AI community. That challenge was finally surmounted in the 2000s, when autonomous vehicles were developed for the sea and sky, and even for Mars. Still, self-driving cars at this point existed only as research prototypes in labs around the world. One reason for the delay is that driving in a city is a complex problem for automation, primarily due to factors such as heavy traffic, pedestrians, and other unexpected events outside the control of the automobile. It is interesting to note that the technological components needed to build autonomous cars were available in 2000; in fact, some autonomous car prototypes already existed back then.

Few people predicted that by 2015 we would see companies manufacturing autonomous cars, especially after research teams failed to complete a course in a limited desert setting during the Defense Advanced Research Projects Agency's first "grand challenge" on autonomous driving back in 2004. However, within eight years, things started happening at a rapid pace in both industry and academia. Among the factors responsible for the increased pace of progress in those eight years (2004-2012) were advances in sensing technology and in machine learning for perception tasks. Now we can see Google's self-driving cars as well as Tesla's semi-autonomous cars on our streets. Google's self-driving cars have already recorded over 1,500,000 miles and have driven 300,000 miles without an accident; these cars are designed to be fully autonomous, requiring no human input. Tesla, meanwhile, has released a software update that adds self-driving capability to its existing cars. Tesla's cars are semi-autonomous, since human drivers are required to remain engaged and take over driving if they identify a potential problem.

We are undoubtedly going to see cars with superhuman performance courtesy of sensing algorithms. Developers have already achieved automated perception, including vision at near human-level performance, enabling well-defined tasks like recognition and tracking. The adoption of self-driving cars will not be limited to personal transportation; we will eventually see flying vehicles, remotely controlled delivery vehicles and trucks.
THE CURRENT STATE OF THE WORLD OF AUTONOMOUS VEHICLES
Autonomous vehicles are increasingly becoming part of our roads. Waymo has successfully run trials of autonomous taxis in the United States, specifically in California, transporting more than 6,200 individuals in its first month, and it has moved thousands of people since then. In the United States, Walmart has also recorded significant progress in this space, having begun using cargo vans to deliver groceries in Arizona. The partnership between Pizza Hut and Toyota is likewise focused on creating driverless
electric vehicles that come with a mobile kitchen meant to prepare pizza as it moves toward the home of the buyer. Let’s take a closer look at some other startups in the autonomous vehicle space.
Start-ups in the Autonomous Vehicle Technology Space
Whether for public transportation, personal needs or ride-sharing, many companies are leveraging AI to create autonomous vehicles. Apart from big names such as Google and Tesla, other startups are already making significant progress in this industry.

Nutonomy
Located in Boston, Massachusetts, this firm is developing autonomous technology for cars that will operate completely driverless. The company's technology, nuCore, enables flexible, human-like car control without errors, and with its software cars can navigate even extremely complex traffic conditions. To test its cars in Boston's Seaport District, the company partnered with Lyft; they now provide rides to Lyft users and are steadily increasing their capacity to transform how people move around.

Waymo
The company's 360-degree perception technology makes it one of the top contenders in the self-driving car domain. Although Waymo started as Google's exploration of self-driving cars, it is now an independent company focused on building driverless cars that can convey passengers from one point to another safely. Presently, its vehicles have covered over eight million autonomous miles. Its perception technology can detect cyclists, pedestrians, other vehicles, road work and other obstacles up to 300 yards away. Waymo now provides test rides in the Phoenix metro area, and people can apply for such rides.

Optimus Ride
This is another player in the self-driving car space, manufacturing autonomous cars that operate in geo-fenced areas. Its smart electric cars are designed to foster more sustainable and efficient cities, freeing up parking while cutting down the number of cars on the road. The company has already obtained the green light to test its autonomous cars in towns and cities in Massachusetts. However, its cars are not meant for long distances but for moving passengers locally.
THE FUTURE OF AUTONOMOUS VEHICLES
The Society of Automotive Engineers (SAE International) developed the current standards for autonomous driving. Under the SAE standard, a vehicle's automation capability ranges from level 0, for cars with no autonomy at all, up to level 5 which, as you might have guessed, denotes full automation. Vehicles in the level one category have features such as lane assist, adaptive cruise control, and park assist.
Figure 18: SAE standard for autonomous vehicles
Experts believe that cars that fall within level three may be fully available in 2021. Although these cars have extensive self-driving capabilities, they still need human intervention, especially in complex situations like traffic jams. It is also crucial to point out that cars requiring no human intervention whatsoever have not received approval for use on regular roads, although some level 4 and 5 cars, such as Google's Waymo vehicles, are already being developed and tested. The truth is that AVs will make a significant impact on our lives, but their development requires time, and we may have to wait some years before they become widely commercially available. For instance, according to Chris Urmson, former head of Google's self-driving car unit (now Waymo) and a DARPA challenge winner, we may wait 30 to 50 years before we witness the mass adoption of autonomous vehicles. Just remember that there is a difference between the availability of a technology and its mass adoption: although a technology may be available, it may lack widespread adoption due to factors such as the price point users can pay or the regulations that control its use.
The Impact of AVs on Economy and Society
According to a book published by the Brookings Institution, the US Energy Information Administration (EIA) is of the view that such vehicles will lead to fewer emissions, since they are more fuel-efficient than cars driven by humans. Here are some of the positive impacts of AVs on the economy and society:
- They are likely to lower the rate of accidents and deaths, since they are safer than human-driven vehicles. This is due to a drastic drop in human error, which accounts for about 94 percent of road crashes. Although consumers do not yet trust driverless cars, this will eventually change with the introduction of level four and five cars.
- They may also help cut labor costs for driving-intensive industries.
- Increased levels of autonomy have the potential to curb dangerous driver behavior. In fact, experts believe the greatest benefit of self-driving vehicles may be the reduction in the devastation caused by unbelted vehicle occupants, impaired driving, speeding, drugged driving and distraction.
- With AVs, individuals with disabilities, such as the blind, gain self-sufficiency, and automated vehicles let people live the kind of life they desire. Autonomous vehicles can enhance independence for seniors, while those used in ride-sharing can lower the cost of personal transportation, reducing the overall cost of mobility.
- AVs can significantly lower the costs associated with crashes, such as vehicle repairs, medical bills and lost work time. With fewer crashes, the cost of insurance may also drop significantly.
- With AVs, we may no longer require as many traffic enforcement personnel, since self-driving cars will not violate most traffic rules.
- Autonomous vehicles will help boost productivity, since all occupants can safely engage in other activities, such as responding to emails or even watching a movie.
- In terms of environmental impact, fewer traffic jams mean reduced fuel consumption and lower greenhouse gas emissions.
I need to add that even though AVs have the potential to provide exciting benefits, they also pose some challenges. One obvious challenge is that they may displace many drivers, which means job losses. Also, if AVs are used for local delivery, such jobs involve tasks that automation cannot easily take over, such as paperwork, customer service and guarding the freight.
Chapter 6: Financial Services & Artificial Intelligence
Key Takeaway
- AI can help automate processes such as customer service and engagement, cutting costs for fintechs.
- Data from the Business Insider Intelligence report on AI in banking shows that 80 percent of banks are already aware of AI's potential benefits.
- AI-powered systems now give banks a more efficient alternative for dealing with some of the issues they face in current financial systems.
- AI can help financial institutions prevent fraud before it occurs, instead of the traditional reactive approach that only identifies fraud after it has taken place.
- The leading adopters of AI and machine learning technologies in financial services are investment banking companies.
The truth is that the financial services industry has entered the AI phase of what some experts call a digital marathon. The journey of integrating AI applications into financial services started after the internet was introduced and has so far taken financial service providers through several phases of digitalization. AI is already disrupting many industries, as we have seen in past chapters, and it is gradually breaking down the bonds that once held together the components of traditional financial establishments. In the financial sector, AI is increasingly opening doors to innovations and new operating models. As more intelligent machines that can perform tasks previously reserved for humans are introduced into the finance industry, AI applications are becoming an essential aspect of technology in the banking, financial services and insurance industry.

We are seeing a dramatic change in the way financial institutions offer new products and services courtesy of AI. Data from the Business Insider Intelligence report on AI in banking shows that 80 percent of banks are already aware of AI's potential benefits. Considering that AI applications have the potential to save banks about $447 billion in costs by 2023, more banks are now exploring new strategies for incorporating AI systems into their services.

One part of the finance industry that is increasingly leveraging AI is the banking sector, where the quality of products and services is changing courtesy of AI applications. AI systems have not just given banks better methods of handling data; they have also significantly improved customer experience and have redefined, sped up and simplified traditional banking processes, making them more efficient than before. The introduction of technologies like the internet, smartphones and AI has made data a more valuable asset, especially in financial organizations. This explains why banks and other financial institutions are increasingly conscious of the innovative and cost-efficient solutions AI provides. They are now aware that although asset size is crucial, it is no longer enough to run a successful business. The primary indicator of how successful banks, financial services and insurance companies (BFSI) will be is their ability to leverage the power of new technologies like IoT, AI and blockchain to harness their data and create personalized and innovative products and services.
WHY AI IN THE FINANCE INDUSTRY?
Part of the reason why AI applications are needed, especially in emerging markets, is that businesses and individuals are often underserved: they frequently lack traditional collateral, identification, credit history, or all three – essentials for accessing various financial services. AI can help resolve this issue by analyzing quality alternative data obtained from satellites, smartphones and other sources to ascertain the creditworthiness and identity of individuals and businesses.
Another obstacle emerging-market customers face in accessing financial services is cost – specifically, the cost of reaching and serving these customers, which is usually high relative to the volume of their financial transactions and the revenue they represent. This is where AI comes in, since it has what it takes to resolve the problem. AI can automate processes such as customer service and engagement, cutting costs for fintechs. This in turn enables a higher volume of low-value transactions, gradually transforming previously underserved individuals and businesses into potentially profitable clients. If emerging-market financial service providers (FSPs) can extend their services to underfunded businesses and underserved persons with the aid of AI systems, then AI has the potential to foster financial inclusion. However, the extent and pace of AI adoption by financial service providers, and how well the inclusion benefits are realized, depend greatly on the efforts of businesses, investors and governments to create a market and institutional framework that facilitates the sustainable and responsible integration of AI into financial services. This framework includes:
- FSPs dealing with algorithmic bias and error
- Building trust via responsible lending
- Striving for informed consent in the usage of consumer data
- Managing cyber risk
It is equally important for the relevant government authorities to create a competitive atmosphere for financial services.

AI in Finance and Its Benefits
The implementation of AI in finance provides several benefits, from delivering personalized recommendations to fraud detection and task automation. Its use cases cut across the front and middle office, and with time AI will undoubtedly transform the finance industry. Here are some striking benefits of AI in finance.
Figure 19: Benefits of AI in finance
FACTORS DRIVING AI DISRUPTION IN THE FINANCE INDUSTRY
Over the past decade, factors such as the significant drop in the cost of internet connectivity, increased computing power, and rising mobile device penetration have enabled businesses and digital consumers to generate large amounts of data via smartphones and other devices. Advances in computing power, analytic techniques and energy reliability have also made it cost-effective for businesses to exploit this real-time and alternative data. Just as in every other sector where AI is making a tremendous impact, several factors are the key drivers of AI disruption in banking.
Figure 20: Factors driving AI adoption in banking
1. The Explosion of Big Data
Just as in the healthcare industry, the explosion of the big data market has had an immense impact on banking, especially as customer expectations continue to change. Customers commonly communicate with their banks on various digital platforms. Apart from the traditional structured data most organizations collect, banks now also obtain unstructured data such as voice messages, videos, emails, images and text through channels like social media platforms and customer service. By taking advantage of big data, banks now have a 360-degree view of customers' interactions with their brand, and they leverage transaction history, social media interactions and basic personal data in their decision-making processes.

2. Regulatory Requirements
Of course, one of the hurdles banks face is intense scrutiny from regulators. To meet their regulatory obligations, banks are expected to provide accurate reports on time, and complying with these requirements involves collecting data from many different sources. AI-powered systems now give banks a more efficient alternative for dealing with some of these issues by automating the entire data collection process and drastically improving both the speed and the quality of decisions. Eventually, AI-driven solutions will enhance most banks' readiness to satisfy their regulatory obligations. Experts strongly believe that the continued development of AI will result in a radical transformation of both the front and back ends of their operations. I should add that expanding AI into different areas of operation will require banks and other financial organizations to adjust the old structures of global financial markets. This shift will create an opportunity for compliance teams to be strategic in their investment in new technologies and help financial institutions become more future-ready.

3. Improved Infrastructure
This includes hardware, cloud, faster computers, software and more. Interestingly, the recent explosion of cloud technology, along with the availability of computational resources, has increased the speed at which large volumes of data can be processed efficiently and at lower cost. This is perhaps one major reason why organizations and financial service providers are taking advantage of AI more than ever.

4. Competition
Banks now face strong competition from fintechs, which offer excellent services to their customers. Technology is a key differentiator in the financial industry, as more organizations in this space leverage cutting-edge tools to access the vast amounts of data available. The implication is that banks now use AI to optimize current service offerings and even showcase new offerings to their clients. Financial institutions are providing their customers a more personalized experience courtesy of new technologies like AI and blockchain.

These are some of the factors that have made it more commercially viable for financial service providers (FSPs) to integrate AI applications into their services. To further confirm the growing interest in AI, the World Economic Forum (WEF) and the Cambridge Centre for Alternative Finance jointly surveyed 151 organizations, including incumbent banks and financial technology firms (fintechs). The survey revealed that more firms are indeed integrating AI applications into the services they offer their clients: a total of 85 percent of respondents said they are already using some kind of AI in their services. Bear in mind that the factors we just discussed are always evolving, introducing new opportunities and values to businesses seeking to effectively leverage
the benefits that AI provides. The banking, financial services and insurance market is undoubtedly in the best position to partake in this disruption and move forward in the course of the global digital transformation.
COMMON AI APPLICATIONS IN THE FINANCE INDUSTRY
The finance industry, and especially the banking sector, is already taking advantage of this disruptive technology in different areas of banking services. Here are some excellent examples of AI use cases already at work in the sector.
Fraud Detection and Prevention
Previously, banks depended mainly on conventional rule-based Anti-Money Laundering (AML) transaction monitoring and name-screening methods. Unfortunately, this approach has always resulted in a high rate of false positives. With the troubling spike in fraud-related crimes and the dynamic nature of fraud patterns, financial institutions are now adding enhanced AI components to their existing systems. These assist in identifying transaction patterns, suspicious relationships between entities and individuals, and data anomalies that previously went undetected. The integration of enhanced AI components is definitely a more proactive strategy, as AI can help financial institutions prevent fraud before it occurs, instead of the traditional reactive approach that only identifies a fraudulent activity after it has taken place. JPMorgan Chase is at the forefront of the banks already taking advantage of AI in consumer finance. Consumer banking actually represents more than 50 percent of the bank's net income, which is the primary reason the bank uses key fraud detection applications to protect its account holders. JPMorgan has implemented a proprietary algorithm that helps detect fraud patterns: whenever a credit card transaction is processed, the transaction details are instantly sent to central computers in the bank's data centers, where the credibility of the transaction is ascertained – whether it is valid or fraudulent. The truth is that what really bolstered the bank's high scores in reliability and security (in Insider Intelligence's 2020 US Banking Digital Trust survey) is the use of AI.
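To make the idea of flagging data anomalies more concrete, here is a minimal sketch of unsupervised transaction screening. It assumes scikit-learn, and the features (amount, hour of day, merchant distance) and contamination rate are illustrative placeholders, not any bank's actual model.

```python
# A minimal sketch of unsupervised fraud screening, not any bank's actual model.
# Assumes scikit-learn; features and parameters are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transaction features: [amount, hour of day, distance from home in km]
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 5000),      # typical purchase amounts
    rng.normal(14, 4, 5000) % 24,       # daytime-centered activity
    rng.exponential(5, 5000),           # mostly local merchants
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new transactions; -1 marks an anomaly worth routing to a human analyst.
new_tx = np.array([[45.0, 13.0, 2.0],      # ordinary lunchtime purchase
                   [9800.0, 3.5, 4200.0]]) # large amount, 3:30 a.m., far away
print(model.predict(new_tx))  # e.g. [ 1 -1 ]
```

The appeal of this style of model is that it needs no labeled fraud cases: it learns what "normal" looks like and flags departures from it, which is exactly the proactive posture described above.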
Chatbots
More financial institutions like banks are now using AI-enabled chatbots incorporating Natural Language Processing (NLP) to not just engage with their customers but also interact effectively with them 24/7 and significantly improve online conversations. Apart from the normal responses that customers get to their questions, enabling them to handle issues relating to their account details, chatbots can now perform other duties. For instance, chatbots now assist customers in channeling their complaints to the right customer service unit among several units and can even assist in opening new accounts.
AI and Predictive Analytics
The emergence of AI and machine learning has indeed made accurate forecasting and prediction possible. In terms of revenue forecasting, risk monitoring, case management and stock price predictions, data analytics and AI are proving to be extremely useful. In fact, the dramatic increase in the volume of data collected has played a remarkable role in enhancing the performance of these models, leading to a gradual drop in the level of human intervention needed.
Customer Relationship Management (CRM)
An extremely crucial factor for banks and other financial institutions is the quality of their customer relationship management. This also explains why many BFSIs are offering more personalized 24/7 services to individual clients, such as voice commands and facial recognition features for logging into financial apps on their smart devices. That's not all: financial institutions are now taking advantage of AI to analyze customer behavioral patterns. This helps them automatically carry out customer segmentation, which creates the opportunity for targeted marketing as well as enhanced customer experience and interaction.
Credit Risk Management
While regulators are making efforts to improve risk management supervision, financial institutions are required to come up with solutions that are more reliable. One solution for credit risk management that is becoming increasingly popular, especially in the digital banking and Fintech market, is the use of AI. Fintechs and banks now leverage AI to ascertain the creditworthiness of a facility borrower by harnessing data to predict the possibility of a customer defaulting. This is an excellent way to improve the accuracy of credit decisions. Already, the lending market is increasingly becoming insights-driven instead of relying on judgments made by “credit experts,” in a bid to drastically increase the rejection of high-risk customers while minimizing the rejection of customers that are indeed creditworthy. This will eventually translate into a dramatic drop in the credit losses that most financial institutions experience.
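As a rough illustration of predicting the possibility of default, here is a minimal sketch of a credit-scoring model. It assumes scikit-learn, and the features (income, debt ratio, past delinquencies) and synthetic data are stand-ins for a real lender's far richer scorecard.

```python
# A minimal sketch of AI-based default prediction, assuming scikit-learn.
# The features and synthetic data are illustrative, not a real scorecard.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

income = rng.normal(50, 15, n)              # annual income, in thousands
debt_ratio = rng.uniform(0, 1, n)           # debt-to-income ratio
delinquencies = rng.poisson(0.5, n)         # past late payments

# Synthetic ground truth: default risk rises with debt and delinquencies.
logit = -3 + 3 * debt_ratio + 0.8 * delinquencies - 0.02 * income
defaulted = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([income, debt_ratio, delinquencies])
model = LogisticRegression().fit(X, defaulted)

# Probability of default for a hypothetical applicant.
applicant = [[42.0, 0.65, 1]]
print(f"P(default) = {model.predict_proba(applicant)[0, 1]:.2f}")
```

A probability output like this, rather than a yes/no judgment, is what lets lenders set approval thresholds that balance rejecting high-risk customers against keeping creditworthy ones.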
SO, WHAT IS THE STATE OF AI ADOPTION IN FINANCIAL SERVICES?
I must quickly add that things are changing fast, and some of the data provided here might already be dated by the time you read this. However, it should give you a picture of where things stand in the rate of adoption of AI in the financial services industry. Here are key facts about AI and the financial services industry:
Mordor Intelligence predicts that the global AI fintech market will hit $22.6 billion in 2025, achieving a Compound Annual Growth Rate (CAGR) of 23.37% between 2020 and 2025.
According to predictions made by IDC, global revenues for AI hardware, software and services will hit $156.5 billion in 2020, which represents an increase of 12.3% over 2019.
According to a survey by Deloitte Insights, about 70 percent of all financial services firms are already using machine learning to fine-tune credit scores, make predictions on cash flow events and even detect fraud.
The latest Economist Intelligence Unit adoption study revealed that 54% of financial services firms with over 5,000 workers have already embraced AI.
One thing that is so far proving to be the greatest challenge facing the financial services sector is the ability to strengthen customer relationships by offering innovative new services that not only protect the health of everyone involved but also save everyone valuable time. In fact, records show that 60 percent of banks have either closed or cut down the opening hours of some of their branches while fast-tracking new digital features at the same time. According to Deloitte Digital's “Digital Banking Maturity 2020 Report,” here is how banks are fast-tracking new digital features:
Figure 21: Deloitte Digital Report
Fast-tracking contactless digital support across different channels also generates large volumes of data on a daily basis. The data collected is essential for further training supervised machine learning algorithms, and the terabytes gathered also feed unsupervised machine learning algorithms as they discover previously unknown patterns in financial services data. Undoubtedly, AI is fast becoming a new engine of growth as it offers financial institutions essential insights and intelligence in uncertain times.
Areas where AI is Recording Increased Adoption in Financial Services
There is a remarkable increase in the adoption of AI and machine learning by financial services firms seeking to leverage the data obtained from new digitally propelled channels. Research from the Economist Intelligence Unit (EIU) found that about 86 percent of financial services executives already have a blueprint for increasing their AI-related investments through 2025. The title of the EIU study is “The Road Ahead: Artificial Intelligence and the Future of Financial Services.” The study analyzed the sentiments of 200 business executives and C-suite leaders at retail banks, insurance firms and investment banks in Europe, Asia-Pacific and North America. Here are some of the striking points from the survey results showing the state of AI adoption in financial services. The leading adopters of AI and machine learning technologies in financial services are investment banking companies, closely followed by retail banks. Of course, investment banking operations depend on machine learning to further improve their algorithms and prediction models, helping them quantify and cut down on risk, as I mentioned earlier. Retail firms, for their part, depend more on predictive analytics to discover new insights capable of helping with customer retention and of moving customers from brick-and-mortar to digital channels. The graph below shows where AI is most used in financial services.
Figure 22: Economist Intelligence Unit Study
Also, 37 percent of financial services organizations globally embrace AI in a bid to cut down their operational costs. This is closely followed by the use of predictive analytics to enhance decisions while scaling up the capacity of their employees to handle volume-based jobs. When it comes to using AI for enhanced personalized service and customer satisfaction, North America leads the other regions included in the study. The research team also discovered that 36 percent of those regarded as heavy adopters of AI and machine learning enjoyed one remarkable benefit – more efficient product and marketing services – a view shared by 23 percent of light adopters. North American financial services firms also led all other regions in the survey by a wide margin, with 33 percent of firms in that region predicting that AI will transform how they innovate. The survey also revealed that North American firms were most optimistic about AI's ability to help them release new products and services, at 31 percent. Financial services executives from APAC and North America also perceived the highest level of opportunity to move into new markets (30 percent and 27 percent, respectively).
This, in the opinion of the Economist research team, is a strong indication of the higher levels of economic growth in the two regions compared with other regions of the world. It is also a reflection of the level of AI investment from firms to support the growth of their business.
Figure 23: Economist Intelligence Unit Study
Presently, in financial services, the most crucial metric for estimating the success of an AI strategy is customer and stakeholder satisfaction. Most AI projects in the pilot and production phases in 2020 focused on boosting revenue potential by removing cost and time obstacles. Establishing new digital channels and delivering the right customer experience the first time have driven this increased focus on stakeholder and customer satisfaction. Based on the research results, here is a graph showing how firms measure the success of various AI applications.
Figure 24: Economist Intelligence Unit Study
One of the major factors restraining financial services firms from increasing their adoption of AI across their organizations is the high cost of the technology. In fact, cost presently hinders the increased adoption of AI more than any other factor. The second and third factors, based on the survey, are insufficient infrastructure and data quality – the two other challenges most often mentioned as keeping financial services firms from embracing AI more broadly. According to the results of the survey by the Economist research team, about 80 percent of executives in the financial services sector plan to increase their investment in AI over the next five years. APAC led with 90 percent intending to make AI-related investments, closely followed by North America with 89 percent. By investing in AI technologies, financial services firms can finally address the costly constraints that most of them have been dealing with for several decades.
WHAT THE FUTURE OF AI IN FINANCIAL SERVICES LOOKS LIKE
Amid the increasing demand for digital offerings and the threat of tech-savvy startups (Fintechs), firms in the financial services industry are fast embracing digital services, and it is predicted that by 2021, global banks' IT budgets will jump to $297 billion. In the US, for instance, banks' largest addressable consumer groups are millennials and Gen Zers. This also explains why banks and other financial institutions are increasing their AI and IT budgets to meet rising digital standards. The truth is that most of these young consumers are more comfortable using digital channels; in fact, about 78 percent of millennials say they would never visit a branch of their bank if they could help it. Interestingly, pre-pandemic, there was already a gradual migration from the traditional banking channels we have all known to online and mobile banking, driven by the growing opportunity among digitally native consumers. This transition was dramatically amplified by the outbreak of the COVID-19 pandemic, which brought stay-at-home orders to cities around the globe even as consumers explored self-service alternatives. According to estimates from Insider Intelligence, online and mobile banking adoption among US consumers will increase remarkably by 2024, hitting 72.8 percent and 58.1 percent, respectively. This implies that financial service providers that want to be successful and competitive in the evolving industry must embrace AI in their operations.
Chapter 7: AI & The Future of Security
Key Takeaway
Apart from the increasing talent gap we are already experiencing, it is also obvious that current security analysts lack sufficient time to detect new threats.
AI is capable of optimizing and monitoring several vital data center processes such as cooling filters, internal temperatures, backup power, bandwidth usage and power consumption.
The iPhone's Face ID is one excellent example of the use of AI in biometric verification.
Since AI can emulate the best qualities in humans and even omit known human shortcomings, it is capable of handling duplicative cybersecurity processes that bore human security workers.
AI is a general-purpose, dual-use technology, which means it can be a blessing or a curse for cybersecurity.
One of the fields where AI is proving extremely useful is cybersecurity. A report from Norton, a popular antivirus company, shows that the cost of recovery for the typical data breach is $3.86 million. The report also disclosed that organizations affected by a data breach require an average of 196 days to recover. This is perhaps one of the reasons why more companies should explore the use of AI in cybersecurity to help prevent financial losses and wasted time. But AI and machine learning can influence cybersecurity both positively and negatively. Stakeholders are currently debating whether AI is good or bad in terms of how it will affect our lives, even as more organizations embrace AI to meet their needs.
CURRENT CHALLENGES FACING CYBERSECURITY
There is an increase in the number of challenges facing cybersecurity as attacks keep getting more dangerous even with the advancement in cybersecurity. Some of the challenges include:
Reactive Nature of Cybersecurity
Most times, the current approach to cybersecurity is a reactive one: most companies simply try to resolve an issue after it has already taken place. The ability to predict threats long before they materialize is a big task facing security experts around the globe.
Geographically Distant IT Systems
One factor that makes manual tracking of incidents extremely difficult is geographical distance. To successfully monitor incidents in different locations, cybersecurity experts are required to deal with differences in infrastructure.
Hackers Usually Conceal and Modify their IP Addresses
Since hackers often make use of various types of programs such as Tor browsers, proxy servers, virtual private networks (VPNs) and several others, they are able to operate anonymously and undetected.
Manual Threat Hunting
The process of manual threat hunting is often time-consuming and expensive, and this leads to more unnoticed attacks.
Take a look at the global statistics to get a better picture of the state of cyberattacks:
About 64 percent of companies globally have encountered at least one form of cyberattack.
Each day, 30,000 websites are hacked around the world.
In 2020, cases of ransomware increased by 150 percent.
In March 2021 alone, there were a total of 20 million breached records.
About 94 percent of all malware is spread by email.
Each day, 300,000 new pieces of malware are created, including Trojans, adware, viruses, keyloggers and several others.
On average, about 24,000 malicious mobile apps are blocked on the internet daily.
A new attack takes place somewhere on the web every 39 seconds.
The Need for AI Security
Of course, one of the trends in technology that will witness increased influence and usage in the future is AI security. But the big question is: why does AI security matter in the first place? Let's find out why AI is crucial at this point in our lives.
The Shrinking Cyber Workforce
A shortage of human resources is one of the challenges many organizations face, especially as everything becomes digital. The talent gap in the cybersecurity industry is predicted to widen, with about 3.5 million unfilled jobs by the end of 2021. Some people believe that AI machines are capable of filling this gap that grows each day, and experts in the field share a similar view. They believe that a more scalable solution is the use of AI security tools that can augment the workflows of existing workers. This will help relieve the available resources by reducing the time required for tasks such as threat hunting and alert triage, enabling cybersecurity employees to focus more of their time on essential tasks that cannot be automated via AI.
Threat Hunting and AI Security
Apart from the increasing talent gap we are already experiencing, it is also obvious that current security analysts lack sufficient time to detect new threats. In a recent SANS Institute SOC survey, respondents disclosed that for threat hunting they had to depend on time- and resource-intensive methods, which usually leads to alert fatigue. The consequences of failing to find the time required to detect new threats are often dire:
About 73 percent of respondents to the survey revealed that a single alert investigation lasts several hours and, in some cases, days.
Also, 53 percent of them disclosed that in a bid to get to the root of an investigation, they had to use three or more data sources.
About 53 percent of them revealed that in some cases, critical alerts end up not being investigated at all, and 30 percent of their prioritized alerts are also not investigated.
Part of the reason for this is that event correlation may still be conducted manually within big data products. One of the benefits of AI security tools rests on the fact that they are capable of correlating events and even triaging them. This drastically lowers the time required for both incident response and remediation. This idea was also supported by the Capgemini Research Institute's report on cybersecurity with AI. According to the report, 64 percent of respondents disclosed that AI cuts down the costs of detecting and responding to breaches. They also added that AI lowers the time for threat detection and breach response by as much as 12 percent.
WAYS AI CAN ENHANCE CYBERSECURITY
The combination of AI, threat intelligence and machine learning can enable organizations to identify patterns in data that security systems need in order to learn from past experience. Companies can also significantly lower incident response times and stay up to date with security best practices by leveraging AI and machine learning. So, what are the various ways AI can help companies and individuals improve cybersecurity? Let's find out now.
1. AI and Threat Hunting
To identify threats, organizations use the traditional security strategy, which involves the use of indicators to discover undetected threats. Although this strategy might be okay for previously encountered threats, such techniques often fail to produce the desired result when used against threats that are yet to be identified. Presently, signature-based techniques are capable of detecting about 90 percent of threats. When companies replace them with AI security systems, the detection rate increases to about 95 percent, but with an increase in the number of false positives. Experts believe that the best option, one that helps in threat detection while cutting down on false positives, is the combination of traditional techniques and AI. This combination leads to a 100 percent detection rate and also lowers the rate of false positives. One smart way to enhance the threat hunting process with AI is for companies to integrate behavioral analysis. For example, companies can use AI systems to develop profiles of all applications on an organization's network simply by processing high volumes of endpoint data.
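As a rough illustration of combining traditional techniques with AI, here is a minimal sketch of a hybrid detector: a cheap, precise signature check runs first, and a statistical baseline catches novel behavior. The signature hashes, behavioral features and threshold are hypothetical stand-ins, not a production security product.

```python
# A minimal sketch of hybrid threat detection: known signatures catch familiar
# threats, a learned baseline flags novel ones. All values are illustrative.
import numpy as np

KNOWN_BAD_HASHES = {"e3b0c44298fc1c14", "a54d88e06612d820"}  # hypothetical signatures

rng = np.random.default_rng(2)
# Baseline process behavior: [CPU %, outbound connections/min, files touched/min]
baseline = np.column_stack([
    rng.normal(20, 5, 3000),
    rng.poisson(3, 3000),
    rng.poisson(10, 3000),
])
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

def classify(file_hash, behavior):
    """Signature match first (cheap, precise), then anomaly score (broad)."""
    if file_hash in KNOWN_BAD_HASHES:
        return "blocked: known signature"
    z = np.abs((np.asarray(behavior) - mu) / sigma)
    if z.max() > 4:  # far outside the learned baseline
        return "flagged: anomalous behavior, send to analyst"
    return "allowed"

print(classify("deadbeef00000000", [95.0, 120, 400]))  # novel but anomalous
print(classify("e3b0c44298fc1c14", [15.0, 2, 8]))      # caught by signature
```

Layering the two checks is what keeps false positives down: the anomaly model only has to explain what the signatures cannot.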
2. Data Centers
AI is capable of optimizing and monitoring several vital data center processes such as cooling filters, internal temperatures, backup power, bandwidth usage and power consumption. AI's calculative strength, as well as its ability to continuously monitor these processes, provides insights into the specific values that can enhance the security and effectiveness of infrastructure. Another benefit is that AI can help cut the cost of maintaining hardware by indicating when equipment is due for maintenance. With such alerts, you can fix your equipment long before it breaks down and gets worse. According to reports from Google, after implementing AI technology in their data centers back in 2016, they recorded a 40 percent drop in cooling costs at their facility as well as a 15 percent drop in power consumption.
3. Vulnerability Management
In 2019 alone, a total of 20,362 new vulnerabilities were reported, an increase of 17.8 percent over 2018. Presently, many companies struggle to prioritize and manage the huge number of new vulnerabilities they face each day. The traditional strategy for vulnerability management is to wait for hackers to exploit high-risk vulnerabilities before organizations neutralize them. Of course, traditional vulnerability databases are undoubtedly essential for managing and containing the vulnerabilities most organizations already know about. But organizations can protect themselves even before vulnerabilities are officially identified, reported and patched by using AI and machine learning techniques such as User and Event Behavioral Analytics (UEBA).
4. Network Security
There are basically two time-intensive aspects of traditional network security:
1. Establishing security policies. These policies help identify legitimate network connections and those that require further inspection for possibly malicious behavior. You can actually enforce a zero-trust model using these policies. However, the major challenge rests on creating and maintaining these policies given the large number of networks.
2. Knowing the network topography of a company. In most cases, companies are not aware of the precise naming conventions for applications and workloads. This implies that the security teams of such organizations will need to spend quite some time figuring out which set of workloads belongs to a particular application.
Organizations can take advantage of AI to enhance network security by learning network traffic patterns and recommending both security policies and functional groupings of workloads.
5. Securing Authentication
There is a constant increase in the number of biometric logins that create secure sessions using palm prints, retinas or fingerprint scans, either in combination with a password or alone. This is already common on many smartphones. As global organizations suffer painful security breaches involving compromised email addresses, passwords and personal information, cybersecurity experts have constantly emphasized that passwords are highly vulnerable to cyberattacks that expose personal information, social security numbers and even credit card details. This highlights the importance of biometric logins in combination with AI security. The iPhone's Face ID is one excellent example of the use of AI in biometric verification. The phone's infrared sensors and neural engines detect about 30,000 reference points on a face and create a vector model of the user's facial features. All the AI system then needs to do to confirm your identity is match your face against the stored data. Although biometric verification serves as a great alternative to passwords, it is still not completely secure; combining biometric verification with AI cybersecurity can help address the remaining challenges.
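To ground the idea of matching a face against stored data, here is a minimal sketch of verification via face-embedding similarity. The vectors are random placeholders for embeddings a real face-recognition model would produce (Face ID's actual pipeline is proprietary), and the threshold is illustrative.

```python
# A minimal sketch of biometric matching via face embeddings. The vectors are
# random stand-ins for model-derived embeddings; the threshold is illustrative.
import numpy as np

rng = np.random.default_rng(3)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = rng.normal(size=128)                    # stored template vector
same_user = enrolled + rng.normal(0, 0.1, 128)     # new scan, small variation
impostor = rng.normal(size=128)                    # unrelated face

THRESHOLD = 0.9  # tuned in practice to balance false accepts vs. rejects
for name, probe in [("same user", same_user), ("impostor", impostor)]:
    score = cosine_similarity(enrolled, probe)
    verdict = "unlock" if score > THRESHOLD else "reject"
    print(f"{name}: similarity={score:.2f} -> {verdict}")
```

The design choice worth noting is that the device stores a template vector rather than raw images, and the threshold is where the trade-off between convenience and security gets set.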
6. Handling Duplicative Processes
While cyber attackers change their strategies frequently, basic security best practices remain much the same each day. As humans, we often get bored handling repetitive tasks, or grow complacent and tired; when this happens, we may at some point miss a crucial security task and end up exposing the network. Considering that AI can emulate the best qualities in humans while omitting known human shortcomings, it is capable of handling duplicative cybersecurity processes that bore security workers. Using AI for such tasks helps prevent basic security threats on a regular basis and provides an in-depth analysis of a network to identify possible security leaks that may affect it negatively.
7. Battling Bots
A huge chunk of existing internet traffic is made up of bots, and they can be extremely harmful to businesses – from bogus account creation to account takeovers with stolen credentials and even data fraud. In fact, bots can become a serious problem if not handled properly, and it is impossible to fight automated threats with manual responses alone. By leveraging AI and machine learning, organizations can build a detailed understanding of website traffic and distinguish between bad bots, good bots (such as search engine crawlers) and humans. With AI, we can analyze a wide range of data, giving cybersecurity teams the data sets they need to fine-tune their strategy to a security landscape that is always changing. Businesses can find the right answers to certain security questions by looking at behavioral patterns – questions like: what does the journey of an average user look like? What does the journey of an average risky user look like? This helps a business's security team identify the intent of their website traffic and stay ahead of the bad guys.
8. Enhanced Endpoint Protection
The impact of COVID-19 has led to an increase in the number of people working remotely, and AI has a vital role to play in helping organizations secure the growing number of endpoints. Of course, we know that antivirus solutions as well as VPNs can help fight remote malware and ransomware attacks. But these solutions usually function based on signatures, which means that to remain protected against the latest threats, it is crucial to stay up to date with signature definitions. This often becomes a serious issue when virus definitions lag, whether through a lack of awareness on the part of the software vendor or a failure to promptly update the antivirus solution. Unfortunately, signature protection may fail to defend a network against a new form of malware attack. AI-enabled endpoint protection takes a different approach from the traditional strategy: it establishes a baseline of behavior for the endpoint through a repeated training process. An AI system will immediately flag anything that appears unusual and act on it – rolling back to a safe state after a ransomware attack, for instance, or notifying a technician that something is wrong. This is a smart way to provide proactive protection against threats instead of just waiting for signature updates. As we discussed earlier, AI can help detect several forms of fraud and various malicious activities. The truth is that traditional security systems are unable to keep up with the outrageous number of new malware samples created each month, which I disclosed earlier. This is one excellent reason why AI is needed to help deal with this growing problem. Presently, cybersecurity firms are teaching AI applications how to detect viruses and malware, using complex algorithms so that AI systems can easily carry out pattern recognition in software. AI systems can be trained to detect even the subtlest behaviors of malware or ransomware attacks before they enter the system and, once detected, isolate them from the affected system. AI systems are also capable of using predictive functions that exceed the speed of conventional techniques. If you or your organization is planning to use AI systems, then you must ensure they are implemented by a qualified cybersecurity firm that understands how they function, especially if you desire to use AI to its best capabilities.
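To make the pattern-recognition idea concrete, here is a minimal sketch of behavior-based malware classification, assuming scikit-learn. The features (file writes, registry edits, outbound traffic) and the synthetic training data are illustrative placeholders, not a real endpoint-protection product.

```python
# A minimal sketch of behavior-based malware classification, assuming
# scikit-learn. Features and synthetic data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n = 1000

# Per-process behavior: [file writes/min, registry edits/min, outbound MB/min]
benign = np.column_stack([rng.poisson(5, n), rng.poisson(1, n), rng.gamma(2, 0.5, n)])
malicious = np.column_stack([rng.poisson(200, n), rng.poisson(30, n), rng.gamma(2, 5.0, n)])

X = np.vstack([benign, malicious])
y = np.array([0] * n + [1] * n)  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new process trace; a high probability would trigger isolation.
trace = [[180, 25, 12.0]]
print("P(malicious) =", clf.predict_proba(trace)[0, 1])
```

Because the classifier scores behavior rather than file signatures, it can, in principle, flag a brand-new ransomware strain whose encryption activity looks like the malicious traces it was trained on.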
DRAWBACKS OF USING AI FOR CYBERSECURITY
The increased adoption of AI as a mainstream security tool is still held back by several limitations, and AI security has its own risks, as I mentioned before. Many organizations already adopting AI acknowledge that the most relevant cybersecurity risks are the ones generated by AI itself. You can see AI as a general-purpose, dual-use technology, which implies that it can be a blessing or a curse for cybersecurity. Why? Because AI can serve as a sword (supporting malicious attacks) and at the same time as a shield (a tool that counters cybersecurity risks). There is still another complication to the use of AI in cybersecurity: the use of AI for defensive purposes faces several constraints as governments explore ways to properly regulate high-risk applications while ensuring the responsible use of AI. On the attack side, meanwhile, the destructive use of AI is increasing and the “attack surface” is becoming denser each day. Developing applications is now an expensive venture as costs keep rising. All these factors and several others are rendering any kind of defense an uphill task. Experts in cybersecurity agree that machine learning and deep learning techniques will make it easier and faster for malicious actors to launch sophisticated cyberattacks that are more destructive and better targeted. As various organizations use AI systems to detect malware and viruses, malicious actors are also busy using AI to fine-tune their own tools and make them AI-proof. The truth is that AI's impact on cybersecurity will certainly expand the threat landscape, modify the typical characteristics of threats and even lead to the creation of new threats. Apart from introducing powerful new attack vectors, it is most likely that AI systems themselves will be highly prone to manipulation. Already, there is a steady increase in the rate of cyberattacks, and hackers are increasingly using AI. The era of the Internet of Things (IoT) will make the attack surface denser still. What this means is that if companies are to manage the range of technical challenges, resource constraints and cybersecurity risks, then they “must” leverage AI. The appropriate use of AI can significantly improve the resilience and robustness of a system as long as the right conditions are met. There is a need for collaboration between the technical community, major corporate representatives and policymakers to properly investigate, prevent and mitigate the potentially destructive uses of AI in cybersecurity. Organizations interested in using AI for cybersecurity must invest substantial funds and time in procuring essential resources such as memory, data and computing power to build and maintain AI systems.
Since the training of AI models requires learning data sets, it is crucial for security teams to acquire a wide range of data sets of malware code, anomalies and malicious code. Unfortunately, a good number of companies lack the resources and time needed to gather all the accurate data sets. Having a good understanding of these limitations and issues associated with AI and cybersecurity will help you explore the best ways to deal with the associated risks. AI may not be the only cybersecurity solution available for now. So, what is the best approach to dealing with cybersecurity issues? You should try combining AI tools with traditional techniques. Here are a few solutions to help you as you develop your cybersecurity strategy:
Engage a cybersecurity firm with professionals who possess the right skills and experience across several aspects of cybersecurity.
Block malicious links by using URL filters. This will help stop links that may carry a virus or malware.
Keep track of your outgoing traffic and restrict it by applying exit filters.
Ensure that your cybersecurity team properly tests your systems and networks to detect potential gaps and fix them as soon as possible.
To protect your systems, install firewalls as well as other malware scanners, and ensure you constantly update them to match redesigned malware.
To keep your systems healthy and functional, carry out regular audits of software and hardware.
Frequently review the latest cyber threats and security protocols. This provides information about which risks to manage first and helps you develop your security protocol accordingly.
Apart from having a prevention strategy in place, collaborate with your cybersecurity team on an excellent recovery strategy, because your organization still faces the risk of an attack.
While all stakeholders continue to explore the potential of AI to enhance the cybersecurity profile of organizations, the best approach for now is to combine AI and traditional techniques to deal with cybersecurity risks.
Future of AI and Cybersecurity
So, what does the future look like for AI in cybersecurity? First, I must emphasize that despite the drawbacks of using AI in cybersecurity, it will be a major aspect of cybersecurity in the future. In fact, security experts strongly believe that the future of cybersecurity depends mainly on AI. Based on the results of a survey carried out by Trend Micro, by the end of 2030, AI will likely replace the need for humans in much day-to-day security work. This also implies that it will play a vital role in dealing with the shortfall in cybersecurity skills.
We are going to see more investment in AI by different countries in efforts to improve cybersecurity. One trend we may witness is AI becoming a tool for developing organizations' data protection strategies. It will also play a key role in handling challenges associated with cloud technology. The era of people working from any location around the world has arrived and may well be the new norm post-pandemic. The world is also increasingly going digital, which highlights the need for online privacy for users and remote workers. AI will most likely be used in the future to identify the source of cyberattacks via NLP applications, and companies can integrate machine learning into firewalls to help detect unusual activities. In the future, AI will help improve online privacy for users, which implies that both individuals and corporate organizations will have to add AI concepts to their cybersecurity policies to enhance the online privacy of their employees working remotely. With AI, we may eventually discard the use of passwords in the corporate IT networks of businesses and other organizations altogether, as the combination of AI and biometric mechanisms will serve as a better way to log in.
Chapter 8: AI Application in Criminal Justice
Key Takeaway
Researchers have disclosed that AI systems can serve as an ideal solution to the solitary confinement crisis in the United States.
AI networks have been set up in a Chinese correctional facility to identify and track inmates 24/7 and to inform prison guards if something appears to be out of order.
New advances in AI now offer judges AI systems that can help them easily analyze a defendant's risks and support their decision-making.
It is crucial to determine whether or not AI should be held to a higher level of transparency than human decision-making.
With AI, we can also identify crimes that are already in progress and assist law enforcement in tracing those involved.
Generally, most of us, especially in modern democracies, assume that our judicial systems sentence people fairly and accurately. With AI, we can move closer to ensuring that this is the case, as judges can enhance the accuracy of their decisions using special AI tools designed to prevent human bias. We have been exploring various ways AI is solving some of the world's most pressing issues and making life better for us, from facial recognition software to self-driving cars and several others. One of the most exciting use cases of AI is in the criminal justice system. New advances in AI now offer judges systems that can help them easily analyze a defendant's risks and support their decision-making. Of course, AI in criminal justice has also attracted concerns regarding its inherent bias. Ethical and social concerns relating to consistency and transparency exist and must be dealt with for AI to effectively help judges make sentencing decisions and obtain accurate assessments of the risks and needs of criminals. The truth is that algorithms are capable of being less biased than humans when implemented correctly.
RISK/NEEDS ASSESSMENT TOOLS
A good number of sentencing decisions are likely prone to the influence of unrelated factors, owing to human differences in opinion, variability and bias, which may result in unintentionally unfair outcomes. Did you know that one study revealed that judges are much more lenient in their sentencing decisions early in the morning and shortly after lunchtime? That is not all: it also found that they were more likely to assign harsher sentences just before their breaks or at the end of the day. Unfortunately, this is just one example of how an entirely arbitrary factor (time) can influence the sentencing decisions of judges. Several other irrelevant factors can also influence judges' decisions and result in unfair outcomes. One such factor is the vastly different opinions among judges: one judge might perceive another's sentence (seen by its author as fair and impartial) to be absurd. In addition, judges have their preferred sentencing methods; for instance, some judges prefer giving criminals jail time for certain crimes while others would prefer parole. One factor responsible for such differences is personal views regarding the effectiveness of different modes of rehabilitation and punishment. The point I want you to take away is that the sentencing of criminals often varies significantly simply because of the particular judge doing the sentencing. So, what is the best solution to this kind of bias in the judicial system? The answer is simply for the judicial system to leverage AI algorithms to help judges make better sentencing decisions. The use of AI in the judicial system comes in the form of solutions known as risk/needs assessment tools – algorithms that assess a defendant's risk of recidivism by analyzing their data.
Criminals with higher risk assessments are considered more likely to commit a crime again. This set of tools has already been in use for a few decades to help lower the number of incarcerated persons who have a low risk of recidivism, and it has also helped the justice system sentence people as productively as possible. But AI was not enlisted for this purpose until 1998, and its use in risk/needs assessment marked a remarkable progression from previous tools, since many of the old tools consisted mainly of interviews and questionnaires. The old tools were obviously less reliable because the data they produced could not be analyzed as impartially and effectively as with the fourth-generation AI-powered tools.
Understanding Risk/Needs Assessment Tools
The two popular tools currently used in the justice system are COMPAS and PSA. Their functioning principles vary significantly, so let's quickly go through each.
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS)
This is believed to be the most notable risk/needs assessment tool. By analyzing an offender's data points, it can predict an offender's rate of recidivism, their risk of violent recidivism and their likelihood of failure to appear in court. The tool splits the factors into dynamic and static factors: dynamic factors include employment history, pessimism and substance abuse, while static factors include previous arrests. COMPAS analyzes the two kinds of data and produces scores for recidivism, violent recidivism and several other outcomes. COMPAS has obvious benefits over predictions made by human judges: it can predict a person's risks without being prone to the subjective factors associated with human-monitored and human-controlled risk/needs assessments. Initial risk assessment systems relied on questionnaires to predict and examine a criminal's risks and needs, but they were not very effective, since the individuals who analyzed the questionnaire data often made predictions influenced by their own biases, assigning more importance to certain data points than to others. COMPAS is not based on any subjective or opinion-based factor; instead, it is based purely on an individual's past data. It is capable of sidestepping judges' biases and helping in criminal sentencing. But it is trained mainly on previous cases that were decided by human judges, who have their own biases. So, COMPAS is believed to find patterns of bias against some groups of persons in its data, which eventually makes it biased too.
Public Safety Assessment (PSA)
This tool uses slightly different risk factors to predict rates of violent recidivism, failure to appear in court and general recidivism, and it is believed to be a less biased risk assessment tool. While making decisions, PSA does not take into account factors like socioeconomic status or self-efficacy. The predictions made by PSA are based mainly on nine risk factors:
Current violent offense
Age at current arrest
Pending charge at the time the offense was committed
Previous violent crime conviction
Previous misdemeanor conviction
Prior failure to appear at a pretrial hearing in the past two years
Previous felony conviction
A previous failure to appear at a hearing that took place over two years ago
Previous sentence to incarceration
By analyzing and weighing these factors, the tool adds up the points a person scores across their risk factors. The total forms the individual's risk score, and it helps predict the likelihood of a criminal reoffending or failing to show up for trial. One of the major differences between the two tools is that COMPAS's method for ascertaining an offender's risk is confidential while PSA's is known. The PSA algorithm is published, and this decision is quite remarkable since it enables judges who use the AI-enabled PSA to better analyze the tool's strengths and weaknesses as a pretrial risk assessment tool. The COMPAS algorithm, on the other hand, is kept confidential by Equivant, and this comes with some demerits when it is being used.
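To illustrate how a points-based tool in the spirit of PSA turns factors into a score, here is a minimal sketch. The weights and risk bands are invented for illustration; the published PSA weighting is more nuanced than this.

```python
# A minimal sketch of a points-based pretrial risk score in the spirit of PSA.
# The weights and risk bands are invented; they are not the published PSA scoring.
PSA_STYLE_WEIGHTS = {
    "current_violent_offense": 2,
    "age_under_23_at_arrest": 1,
    "pending_charge_at_offense": 1,
    "prior_violent_conviction": 2,
    "prior_misdemeanor_conviction": 1,
    "fta_past_two_years": 2,
    "prior_felony_conviction": 1,
    "fta_older_than_two_years": 1,
    "prior_incarceration": 1,
}

def risk_score(factors):
    """Sum the weights of the factors that apply, then map to a risk band."""
    score = sum(w for name, w in PSA_STYLE_WEIGHTS.items() if factors.get(name))
    band = "low" if score <= 2 else "moderate" if score <= 5 else "high"
    return score, band

defendant = {
    "pending_charge_at_offense": True,
    "prior_misdemeanor_conviction": True,
    "fta_past_two_years": True,
}
print(risk_score(defendant))  # (4, 'moderate')
```

The transparency point made above becomes tangible here: because every weight is visible, anyone can audit exactly why a defendant landed in a given band, which is precisely what a confidential algorithm prevents.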
AI in Correctional Institutions
During the post-conviction phase, new technologies are being adopted as correctional facilities use AI for the rehabilitation of inmates and the automation of security. Available data shows that a correctional facility in China where high-profile criminals are kept is putting in place an AI network to identify and track inmates 24/7 and to inform prison guards if something appears to be out of order. Elsewhere, prisons are using AI tools to determine the criminogenic needs of inmates that can potentially be changed through special treatment. This is the case at a Finnish correctional institution that now runs a training scheme for prisoners powered by AI training algorithms. In the course of the training, offenders respond to simple questions or inspect content obtained from the internet, specifically from social media. The goal of these activities is to provide the needed data for Vainu, the company in charge of the prison work program. The data will also help provide inmates with new job-related skills to enable them to reenter society at the end of their sentences. Researchers have also disclosed that AI systems can serve as an ideal solution to the solitary confinement crisis in the United States: it is possible to use smart assistants, like Amazon's Alexa, as confinement companions for inmates, which would in turn help lower the effects of solitary confinement.
Neuro-Prediction of Recidivism
Results from a study by researchers at the University of California at Berkeley and Stanford University reveal that when it comes to clarifying the complexity of the criminal justice system and producing accurate decisions, risk assessment tools do quite a bit better than humans. Humans are perfectly capable of predicting which defendants will eventually be arrested for committing another crime when they have just a few variables to work with. However, this is not the case when larger numbers of variables are involved: with more variables, AI algorithms usually exceed humans. In fact, in some tests for determining which offenders stand the chance of getting arrested again, the algorithms' scores were almost 90 percent accurate, while humans scored about 60 percent when more aspects were at play.
AI Use and Transparency
If we are to increase the use of AI in the criminal justice system, then there is a need to reconcile people's right to know how the AI and its algorithms work with a corporation's right to protect its data and material. It is also crucial to determine whether or not AI should be held to a higher level of transparency than human decision-making. Considering that the details of how the algorithms function are not revealed and it is not possible to understand the way they reason, judges who utilize AI tools lack the right information to accurately ascertain the merits and demerits of the tool. Rather, they are compelled to accept just the score the algorithms provide and make their sentencing decisions without taking into account any context from which the scores were derived. Too much reliance on algorithms without understanding the way they function may cause judges' decisions to become more biased than when they work without AI tools.
Bias Implications
The issue of bias is undoubtedly the major ethical consideration when AI is used as a risk/needs assessment tool in criminal justice, and it is possible to introduce bias to the tool in several ways. If the AI happens to be a neural network, then it has to receive training data. So, if the AI is required to differentiate between leopards and tigers, then the developers must feed it images of both animals to enable it to “know” each of them. When it comes to training neural networks that will serve as risk/needs assessment tools, they likewise must be fed data about certain criminals and whether such individuals reoffended. Now, the data an AI tool receives still comes from humans; it will reflect any bias already present in them and will, in turn, exacerbate it. This implies that humans and AI systems can both be biased, since two different persons can have two very different interpretations of the law and views on punishment for a specific crime. When the data that algorithms receive, and the way they interpret it, are biased, then the algorithms will be biased as well.
THE USE OF AI IN ANALYZING PUBLIC SAFETY VIDEO AND IMAGES
One of the areas that will benefit immensely from AI is video and image analysis. Generally, images and videos are used in the law enforcement and criminal justice sectors to get information about objects, individuals and actions that can support criminal investigations. But it is often difficult to go through all the images and videos available, as doing so requires recruiting employees who are skilled in the subject matter. Video and image analysis is also affected by human error resulting from the sheer volume of data, along with other factors such as frequent changes to operating systems and smart devices and the limited number of skilled personnel available to process such information. Traditional software algorithms that help law enforcement execute this task are limited to certain features – such as eye color, the distance between the eyes and eye shape for facial recognition – and use only demographics for pattern analysis. AI algorithms trained for such tasks can not only learn these complex tasks but are capable of developing and determining their own independent facial recognition parameters and features in pursuit of their goals, exceeding the scope of what humans could ever consider. AI systems can easily detect complex events such as crimes and accidents, whether in progress or after they have happened, and can also identify weapons and other objects.
DNA Analysis
From an evidence and scientific point of view, AI can also provide several benefits for the law enforcement sector, especially in forensic DNA testing. Over the past few decades, forensic DNA has undoubtedly benefited the criminal justice system. When criminals commit a crime, they can transfer biological materials like semen, skin cells, saliva and blood as they contact objects and people. Over the years, there has been significant improvement in DNA technology, leading to increased sensitivity of DNA analysis. Presently, forensic scientists can detect and process degraded or low-level DNA evidence that would have been regarded as unviable decades ago. Interestingly, more pieces of decades-old DNA evidence obtained from the scenes of violent crimes like homicides and sexual assaults (cold cases) are now being submitted to laboratories for analysis. Increased sensitivity has made it possible for scientists to detect smaller amounts of DNA, but unfortunately, this also makes it possible to detect DNA from more than one contributor, even at extremely low levels. Such factors, as well as others, are among the challenges crime laboratories are encountering.
Detecting DNA at a crime scene from someone who was not involved in the crime may affect the accuracy of the investigation, as investigators may find it hard to separate and identify the profile of each person, which is critical for law enforcement. Researchers are exploring the use of AI in this field, as they believe it has the potential to deal with this challenge. DNA analysis usually involves large amounts of complex data in electronic format, and some of that data may contain patterns that exceed the scope of human analysis but could turn out to be extremely useful as the sensitivity of systems increases. Presently, researchers at Syracuse University, in partnership with the Onondaga County Center for Forensic Sciences and the New York City Office of Chief Medical Examiner's Department of Forensic Biology, are exploring the viability of a novel machine-learning-based strategy for mixture deconvolution. Deconvolution is the process of reconstituting elements or eliminating an obstacle or complication. The results of the research indicate that AI technology can help in carrying out such complicated analyses.
Gunshot Detection
Another area where AI is proving useful is the discovery of pattern signatures in gunshot analysis. One project in this space, funded by the US National Institute of Justice and carried out by Cadre Research Labs, LLC, aims to analyze gunshot audio files obtained from smartphones and other smart devices, on the basis that the quality and content of such gunshot recordings are affected by the ammunition and firearm type, the recording device used and the scene geometry. The scientists are working toward developing algorithms that can detect gunshots, determine shot-to-shot timings, differentiate muzzle blasts from shock waves, assign shots to firearms, ascertain the number of firearms present and estimate probabilities of caliber and class. These are things that can significantly enhance the investigation efforts of law enforcement.
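As a toy illustration of one small piece of this pipeline, detecting impulsive events and reading off shot-to-shot timings, here is a minimal sketch using short-time energy. The signal is synthetic and the threshold is illustrative; the project described above uses far more sophisticated acoustic models.

```python
# A minimal sketch of impulsive-event detection in audio via short-time energy.
# Synthetic signal and illustrative threshold; not the project's actual method.
import numpy as np

SR = 8000                       # sample rate (Hz)
rng = np.random.default_rng(5)

# One second of background noise with two sharp impulses ("shots") injected.
audio = rng.normal(0, 0.01, SR)
for onset in (int(0.25 * SR), int(0.70 * SR)):
    decay = np.exp(-np.arange(400) / 60.0)
    audio[onset:onset + 400] += decay * rng.normal(0, 1.0, 400)

# Short-time energy over 10 ms frames; flag frames far above the median level.
frame = SR // 100
energy = np.array([np.sum(audio[i:i + frame] ** 2)
                   for i in range(0, len(audio) - frame, frame)])
threshold = 20 * np.median(energy)
events = np.flatnonzero(energy > threshold) * frame / SR

print("impulses detected near (seconds):", np.round(events, 2))
# Shot-to-shot timing follows directly from consecutive detections.
```

Real gunshot analysis must additionally separate muzzle blast from shock wave and cope with phone-microphone clipping, which is what makes the research effort nontrivial.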
THE FUTURE OF AI IN CRIMINAL JUSTICE
On a daily basis, we hear of new breakthroughs in the potential of AI systems and their application in criminal justice, which will hopefully improve public safety. New technologies like video analytics with integrated facial recognition, the ability to detect people in several locations using closed-circuit TV or even across several cameras, and the detection of objects and activity are some of the things that can help prevent crime. We can leverage AI and its several applications to prevent crime through pattern and movement analysis. We can also identify crimes that are already in progress and assist law enforcement in tracing those involved. AI has the potential to detect crime that would otherwise remain undetected, through technologies like video, cameras and social media, as these are some of the sources of the data required for analysis. This will further increase the level of public safety, as there will be a significant boost in public confidence in law enforcement as well as the criminal justice system.
Chapter 9: The Public Sector & AI
Key Takeaway
AI offers governments the chance to structure and analyze the massive amounts of data they already have and use it for social good.
We have gone past the era of saying that AI is a technology for the future; we are already in the era of AI transformation.
One of the greatest factors that has sped up the adoption of AI in the public sector is the COVID-19 crisis.
Governments should understand that AI is not exclusively designed for the private sector.
Governments should serve as role models for the ethical use of AI, even as they regulate the way private sector companies apply it.
We have already looked at how AI can benefit the transportation industry, the criminal justice system, healthcare and several other areas. AI can also make a significant impact on government processes and, if properly applied, can undoubtedly deliver excellent results for society. According to a Microsoft study, two-thirds of public sector organizations view AI as a digital priority. Unfortunately, only 4 percent of them have managed to scale AI and enjoy the improved outcomes that lead to organizational transformation. So, what can help the successful integration of AI at scale in government processes? A formalized deployment approach, senior leadership support, and an experimental mindset and culture are all required to embed artificial intelligence at scale. Microsoft commissioned the EY organization (just before the world started dealing with COVID-19) to survey public sector organizations in Western Europe on their adoption of AI applications. I will be referring to the report to enable you to understand how AI can impact government processes, how far governments have embraced it and the ways they can ensure a smooth integration of AI technology. The title of the survey is “Artificial Intelligence in the Public Sector: European Outlook for 2020 and Beyond.” It revealed that two-thirds of respondents viewed AI as a digital priority. Although a good number of local, regional and national governments acknowledged AI's potential, just 4 percent of the public organizations surveyed have succeeded in transforming their organization courtesy of AI. Meanwhile, 10 percent of respondent organizations are dealing with complex issues by leveraging AI technology, 9 percent are transforming how they work, and 12 percent have leveraged AI to create remarkable value for external stakeholders like businesses and citizens.
COVID-19 AND THE USE OF AI IN THE PUBLIC SECTOR

Perhaps the greatest factor that has sped up the adoption of AI in the public sector is the COVID-19 crisis. The pandemic compelled people, services (from the public and private sector), and processes to move online. It even compelled regional, local and national governments to go online and lead by example. Within a few months after the outbreak of the coronavirus, they:
- Learned how to handle a remote workforce.
- Digitized on a large scale (and very quickly), an achievement that would have appeared impossible a few months before the outbreak of COVID-19.
- Collaborated with the private sector in a bid to close the skills gap that existed then and also create innovative solutions.
- Leveraged AI as a strategic weapon for dealing with COVID-19, using it as a tool for tracking and tracing contacts and educating the public.
For citizens who have always desired that governments match the quality, speed and personalization they enjoy from the private sector, this gradual shift is a good one. The truth is that as we continue to adapt to this new environment, many people expect that government does not just integrate AI applications into different public sector processes, but also builds on the progress already recorded by becoming "digital first" in the various services it delivers. Beyond the success achieved so far, artificial intelligence is capable of tackling some of the most complex issues that the world is facing right now – such as the pandemic, inequality and even climate change. Undoubtedly, AI, if properly utilized, can turn out to be a significant solution as long as governments are willing to take advantage of its full potential.
Figure 25: Benefits of AI adoption in the public sector
The results of the survey also revealed four major benefits that AI can provide:
- Optimizing processes to make them more productive and efficient. The public sector can significantly enhance its workflows simply by using AI to route inquiries, cut down on the rate of errors and automate redundant tasks.
- Transforming services, increasing their quality and even developing new ones. For instance, by analyzing the information of each patient to ensure they get personalized treatment, AI can significantly improve patient outcomes.
- Giving businesses, citizens and partners a better experience through stakeholder engagement. AI is already enhancing the user experience for passengers in transportation by leveraging both historical and real-time data to predict demand and ensure the availability of services at the time they are needed.
- Helping employees achieve significantly improved results with less effort. The use of virtual assistants, for instance, can remarkably cut down on the time spent replying to basic inquiries, and decision-making can be improved courtesy of predictive analysis.

Combined, these benefits will certainly empower public sector organizations to not just optimize their processes but also provide world-class services and even deal with long-term global issues. When it comes to their approach to AI, it was discovered that public sector organizations fall into one of four groups.
Figure 26: Approach to AI adoption in the public sector
- Emergent (24 percent): Those who belong to this group acknowledge AI's importance for the future but have not started their AI journey.
- Adopters (41 percent): This group includes those who are already experimenting, some with early-stage pilot projects; however, AI has not been embedded across their organizations. At this point, AI is just improving processes, not services.
- Innovators (31 percent): Those who belong to this group are already embedding AI into their core services as well as their digital strategy. Clear guidelines and processes are in place, and innovators are gradually beginning to work across their organizations as they come up with solutions. For those in this category, AI is already making remarkable improvements to how they work as well as the services they provide.
- Transformers (4 percent): This group includes public sector organizations that are presently making use of AI to transform how they provide public services, with strong support for monitoring and continuous improvement.
Lessons from Organizations that Belong to the Group of Transformers

The organizations in the transformer group have one striking thing in common: their leaders strongly believe in the potential of artificial intelligence. There is a high level of commitment to AI by top management; in fact, this group regards AI as an outstanding strategic priority (44 percent among AI leaders vs. just 8 percent among the rest). They also possess an outstanding commitment across various levels of leadership – from executive and political to project and line functions. Another outstanding quality of transformers is a stronger leadership focus on objectives, which include enhanced experiences for employees and citizens, decision-making and optimizing resources, as well as quality and risk management. Organizations in the transformer group can create a conducive environment where the appropriate skills and structures can develop – from technology and data governance to ethics and culture. However, what truly makes the change happen is people. For government organizations to completely leverage the potential of AI, it is vital that they attract and develop the right talent and offer them the right conditions to thrive.
Figure 27: Accenture research result
Experts in the AI space believe that we have gone past the era where we just say that AI is a technology for the future. We are already in the era of AI transformation, and governments no longer need to put off the adoption of this amazing technology. The sooner government organizations adopt AI, the sooner they will increase citizen satisfaction and become more cost-efficient. In another survey of public sector leaders carried out by Accenture, it was discovered that 83 percent of senior public sector leaders are willing and able to embrace AI technologies. In Singapore, the government is already leveraging AI to answer questions from the public. In the UK, the Department for Work and Pensions has leveraged AI to help handle incoming correspondence. In the United States, the Department of Health and Human Services has executed an AI pilot to help work through thousands of public comments on regulatory proposals. The results from the Accenture research also revealed that 80 percent of public service leaders agree that implementing AI technologies will improve the job satisfaction of their existing employees, and about 75 percent of students disclosed that they would definitely be comfortable having artificial intelligence on campus. The survey also showed that most AI applications are assisting government organizations in automating mundane and repetitive jobs while giving employees time to concentrate on higher-value tasks. Consumers in six countries were part of another Accenture survey, and half of those involved said they were excited to use public services rendered by AI. This figure may well increase at a rapid rate as citizens experience the dramatic improvements AI brings and as the private sector makes AI commonplace.
THE ROLE OF HUMANS IN THE DIGITAL STATE

One of the hottest debates in the AI space has to do with the possibility that AI systems may eventually make humans obsolete. Of course, the adoption of AI technologies may affect some roles, but new roles will also emerge to support the adoption of AI. Some of these roles include automation experts, conversational specialists, and machine trainers. The research revealed that organizations in the public sector understand that for us to fully realize the potential of AI, the role of humans is essential. Organizations that are "AI mature" focus on establishing a blended workforce that permits the coexistence of technology and civil servants, where both complement each other. What this implies is that AI systems will be allowed to handle tasks that are repetitive, dull, complex or high-volume. This will in turn grant employees more time to concentrate on tasks that are more valuable, or to channel their skills and time into new roles created courtesy of AI transformation. Another crucial aspect of AI in the public sector is that instead of replacing professional judgment, it can offer insights that facilitate decision-making on very complex issues. With the increased realization of a digital state, governments are responsible for empowering their staff – including employees working remotely – with the new mindset, skills and connectivity required to thrive. This will ensure that citizens enjoy the best from digital public services as well as the entire digital economy. Governments must also continue to work with the private sector to share knowledge and innovate. AI as a force for change will remain elusive until governments around the globe get this integration process right.
Preparing Employees for an AI-Driven Future
Figure 28: Preparing employees for AI-driven processes
As I mentioned earlier, we are no longer in an era where we believe AI is the future; instead, now is the time to embrace the technology. Here are some vital steps that policy makers and leaders in the public sector can take to prepare employees for AI-driven processes.
- Ensure that senior leaders are change agents in every organization. This implies that they should be involved in the strategic prioritization of AI to reflect the mission of the organization. It also involves coming up with a straightforward strategy and implementation plan. Influential leaders who are ready to challenge current methods of working and have a strong desire to innovate should lead AI initiatives. It is also crucial to recognize and incentivize AI advocates across leadership functions and levels.
- Inculcate an AI development mindset in every organization. Encourage and incentivize workers to upskill in challenging skills such as engineering, data science, and domain expertise, and also in soft skills such as innovation, change management and collaboration. The truth is that the employees of tomorrow will require both kinds of skills to complement AI's capability. There is also a need to encourage frontline employees to adopt new technologies that transform their daily roles and to realize that technology is not going to "replace" them; instead, it is available to work for them.
- Build a formal approach for handling data and AI tools in a structured manner. It is crucial to have an excellent governance approach that provides accountability for progress, direction and oversight. Such an approach may involve developing processes, guidelines, and procedures that clearly set out why, when and how AI and other technologies should be used. To help foster trust among citizens and employees, it is crucial to ensure there are ethical frameworks for mitigating bias, protecting privacy and responding to regulatory changes.

Undoubtedly, AI can help in dealing with the biggest challenges that governments around the world are facing and can significantly improve citizens' lives. Companies around the globe are already leveraging AI to make better decisions, automate mundane tasks and dramatically improve customer experience. Governments should understand that AI is not exclusively designed for the private sector. It is also capable of transforming public sector organizations and how they run their processes. It is interesting to note that governments are gradually waking up to this new reality of an AI-driven world, even as citizens yearn for the same revolution in services that they are already enjoying in the private sector. So, governments should be prepared to have a single view of their citizens' data and to share it across different departments without compromising their citizens' privacy. This also implies that governments should leverage the available data to create new services, predict the needs of their citizens and act accordingly to prevent a crisis. There are two great opportunities that AI offers governments that do not really apply to the private sector. First, AI offers governments the chance to structure and analyze massive amounts of data they already have and use it for social good.
Figure 29: Benefits of AI in government
With this data, governments can quantify and cut down on inequalities in opportunities and outcomes. AI also offers them the chance to share the available data with third parties (as long as those parties are prepared to keep the data private) who can develop services or apps that can improve citizens' lives. The second opportunity AI offers governments is that they can determine how citizens use and benefit from such technologies. Governments should serve as role models on the ethical use of AI even as they regulate the way private sector companies apply it. They should also enlighten citizens on how to be prepared for its challenges. However, governments will still be exposed to risks similar to those private companies face, like building bias into algorithms. Considering the regulatory role of government, any form of data breach could irreparably damage trust in government. This highlights the need for governments to make use of a "trusted AI" framework – one that does not only consider the way AI-based systems function, but also recognizes and mitigates potential risks at the various stages of the solution.
Chapter 10: AI in Agriculture

Key Takeaway
- BI Intelligence Research has projected that by 2025, global spending on smart, connected agricultural technologies and systems, which includes machine learning and AI, will triple – hitting $15.3 billion.
- The combination of machine learning, IoT sensors (which make real-time data available for algorithms) and AI can dramatically improve crop yields and cut down on the cost of food production.
- In thirty years, we will have more mouths to feed with a limited amount of fertile soil.
- AI explores ways to assist farmers to significantly increase their yield without using additional resources, by analyzing operational data and pointing out areas of inefficiency in different processes.
- Advances in the field of AI and machine learning have made it possible for developers to create AI algorithms that can even monitor livestock vitals to detect any health issue.

It is estimated that the world's population will be close to 10 billion by 2050, which means there will be ever more mouths to feed. Presently, about 37 percent of the world's total land surface is utilized for crop production. Of course, we all understand the importance of agriculture in our lives – from contributing to national income to employment generation, and more. Agriculture is a major contributor to the economic prosperity of developed nations, and in developing countries it plays a crucial role in the economy too. It is undoubtedly among the oldest and most crucial professions in the world – a $5 trillion industry. As I mentioned before, the world's population is projected to hit about 10 billion by 2050, and to meet the food needs of the growing population, agricultural production will need to increase by 70 percent. But have you tried imagining an industry facing as many threats as agriculture? We often hear the popular saying that "you reap what you sow," but that is only sometimes true – "if you are lucky." Farmers hardly talk about yields when diseases affect their crops or when the weather strikes, especially with the challenges of climate change. It was even more difficult for farmers to manage different processes when the global pandemic hit, mainly because most farmers are not digital. While the world's population continues to increase, urbanization is also on the rise. We are witnessing significant changes in our consumption habits, and disposable income is rising. Farmers are presently under immense pressure to meet the increasing demand. The truth is that in thirty years, we will have more mouths to feed with a limited amount of fertile soil. What this also means is that we must go beyond traditional farming. It is time for farmers to discover ways to minimize their risks, or at least manage them. One of the most promising – indeed fertile – industries for AI and machine learning is agriculture.
WHY ARE FARMERS FINDING IT DIFFICULT TO EMBRACE AI?

Most farmers seem to believe that AI is a technology that is useful only in the digital world. Many wonder how AI can help their work, which happens on physical land. Their resistance is not because they fear the unknown or are conservative. Instead, the reason for their resistance is a lack of understanding of the various ways AI tools can make their jobs easier and increase their productivity. Generally, we find new technologies unreasonably expensive and sometimes confusing, mainly because AgriTech companies have not been able to provide a straightforward explanation of the usefulness of their solutions and how they are to be implemented. This is also the case with the use of AI in agriculture. While AI is extremely useful for the agriculture industry, technology providers have a lot of work to do to assist farmers in implementing it appropriately.
How AI can Enhance Agriculture

The combination of machine learning, IoT sensors (which make real-time data available for algorithms) and AI can dramatically improve crop yields and cut down on the cost of food production. United Nations predictions on population and hunger indicate that our population will increase by two billion by 2050 and that we will need a 60 percent increase in food productivity to provide sufficient food. Based on data from the Research Service of the US Department of Agriculture, growing, processing and distributing food is a $1.7 trillion business – and this figure is for the US alone. Interestingly, AI and machine learning are already helping to close the food gap for the predicted 2 billion additional people by 2050. Have you considered what it takes to run a minimum of 40 vital processes, excel at them, and monitor them all at the same time across hundreds of acres of farmland? Machine learning is perfectly capable of dealing with tasks that include gaining insight into how several factors – seasonal sunlight, the use of specialized fertilizers, weather, the migratory patterns of animals, insects and birds, planting and irrigation cycles, as well as the use of insecticides per crop – all affect yield. The truth is that the financial success of a crop cycle now depends more than ever on excellent data, and this explains why farmers, agricultural development firms and co-ops are now focusing more on data-centric techniques and increasing the scale and scope of how they utilize AI and machine learning to improve the quality and yield of their crops.
BI Intelligence Research has projected that by 2025, global spending on smart, connected agricultural technologies and systems, which includes machine learning and AI, will triple – hitting $15.3 billion. Markets & Markets also predicted that spending on AI solutions and technologies in agriculture alone will grow from the $1 billion recorded in 2020 to $4 billion in 2026 – a 25.5 percent compound annual growth rate (CAGR). Also, PwC revealed that the fastest-growing technology segment of smart and connected agriculture is IoT-enabled agricultural monitoring, which is projected to hit $4.5 billion by 2025. Let's examine AI's potential to transform agriculture. There are various processes and phases involved in agriculture, and a major part of these processes are manually executed. Combining these manual activities with new technologies such as AI can help handle some of the most routine and complex tasks. Farmers can acquire and process big data digitally and make decisions on the most viable course of action. The illustration below is a summary of AI's role in the agriculture information management cycle.
Figure 30: Agriculture information management cycle
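As a quick sanity check on the Markets & Markets projection above, the CAGR follows directly from the start and end figures. Here is a minimal sketch in Python, using the rounded $1 billion and $4 billion figures (the report's exact numbers presumably differ slightly):

```python
# Compound annual growth rate: (end / start) ** (1 / years) - 1.
start, end, years = 1.0, 4.0, 6  # $ billions, 2020 -> 2026
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # ~26%, consistent with the cited 25.5%
```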
ADVANTAGES OF AI IN AGRICULTURE

I need to make this point clear: this chapter is not in any way suggesting that artificial intelligence is a know-it-all technology. Rather, it offers businesses insights obtained from data that already exists, or helps to automate various farming processes that previously needed human involvement. I have categorized the benefits that AI provides for the agricultural sector into four:
- Yield Improvement: AI explores ways to assist farmers to significantly increase their yield without using additional resources, by analyzing operational data and pointing out areas of inefficiency in different processes. For instance, AI solutions can help agribusinesses eliminate human errors and streamline most of their processes by automating animal and crop farming tasks with robots and drones (for milking or sowing, say), which will in turn boost the quality of labor.
- Profit: Artificial intelligence also helps to increase profit for farmers, since it boosts yields without requiring farms to use extra resources.
- Cost Reduction: AI solutions can easily identify areas of wasteful resource consumption by using data on how different resources such as fertilizer, herbicide, water or even energy are distributed, and make suggestions on ways to optimize their usage. The preventive practice of monitoring the performance of equipment and the health of livestock can help cut costs related to equipment repairs and vet services.
- Ensuring Sustainable Farming Practices: Sustainable agriculture is focused on satisfying current textile and food needs without consuming so many resources that the next generations are left with little or nothing. Farmers can identify sustainable patterns of resource consumption by leveraging artificial intelligence. This also helps prevent land degradation and water scarcity.
AI USE CASES IN CROP FARMING

The truth is that modern-day farming is gradually getting smarter with the use of AI and machine learning. Advances in the field have made it possible for developers to create AI algorithms that can even monitor livestock vitals to detect any health issue. In fact, farmers can now classify tomatoes by their variety using machine learning. Before we examine the different ways AI can handle various farming processes, let's look at the lifecycle of crops.

The Lifecycle of Agriculture (Crop Farming)
Figure 31: Crop farming cycle
To get a clearer picture of the various ways AI can benefit the agriculture industry, it is crucial to look at the lifecycle of agriculture. Several phases are involved in the cycle, as seen in the illustration above.
1. Soil preparation: This is the initial stage of farming, during which farmers ensure that the soil is ready for sowing seeds. At this point, they need to break large soil clumps and remove rocks, roots, sticks and other debris. This is also when they add fertilizers and organic matter to create the perfect condition for the particular type of crop.
2. Sowing: This stage involves taking care of the depth at which seeds are planted and the distance between two seeds. Climate conditions like rainfall, humidity, and temperature play a key role at this stage.
3. The use of fertilizers: Maintaining soil fertility is essential for an excellent yield and enables farmers to keep growing healthy and nutritious crops.
4. Irrigation: This stage ensures that the soil remains moist and that humidity is maintained. Overwatering or underwatering may affect the growth of crops, and crops can be damaged if irrigation is not done properly.
5. Weed protection: In case you are not sure what weeds are, they are unwanted plants that usually grow close to crops or elsewhere on farms. Since weeds can lower yields, weed protection is essential for boosting yield. It is also one aspect of farming that increases production costs, lowers the quality of work and interferes with harvest.
6. Harvesting: As the name suggests, this is the phase where farmers gather ripe crops from their fields. It is a labor-intensive activity because many laborers are required to complete the job. This phase also covers post-harvest handling, which includes cleaning, sorting, packing and cooling the crops.
7. Storage: Part of the post-harvest stage of farming is the storage of crops, preserving them in a way that guarantees the availability of food outside of the harvesting season. This phase also covers the packing and transportation of the crops.
Figure 32: Three classes of AI tasks in agriculture
We can classify the different tasks that AI carries out in agriculture into data analytics (highlighting inefficiencies by analyzing operational data), workflow automation (using robots to execute tasks that only humans performed before) and personalization (increasing sales drastically by properly adapting to demand). Let's examine these processes and AI use cases further.

1. AI in Soil Preparation
Figure 33: AI & soil protection
An AI solution such as Plantix is capable of detecting nutrient deficiencies and soil defects by analyzing data obtained from different sources – from smartphone cameras to sensors strategically positioned in the soil or carried by soil analysis drones. With this information, farmers can easily determine the quantity and type of organic matter they need to add to ensure that the soil is perfectly suited for a particular crop.

2. AI and Sowing

AI can also aid farm workers in locating the places most suitable for sowing specific crops – based on the soil's chemical composition, the field's geographical characteristics or other parameters – by analyzing drone-supplied imagery. By making use of AI-enabled crop planning tools such as eAgronom, farmers can ascertain the quantity of each crop they need to sow in a greenhouse every week and even determine the perfect time for transplanting the crops. Of course, a comprehensive crop plan covers more than the sowing process, but it is interesting to know that when farmers leverage such tools, they can cut their use of herbicide and fertilizer by 25-35 percent and even boost yield by 3-4 percent. Farmers can also filter out subpar seeds and sort mixed seeds by making use of available data and AI algorithms that instruct farming robots. An excellent example of such a robot is the computer-vision seed sorter developed by the US Department of Agriculture's Agricultural Research Service.

3. AI and Fertilizing

Agriculture AI solutions can take advantage of the data obtained from soil sensors, precision agriculture software, smartphone images (like Spacenus's Agricultural Nutrient Assistant) and soil analysis drones to continuously monitor the soil's nutrient levels and, if required, cross-check them against the levels that historically produced the most bountiful yields for different crops. AI solutions can also recommend the fertilizer most suitable for the soil from what the agribusiness has in stock, simply by using sensors placed in the storage rooms, and automatically dispatch a drone such as PrecisionHawk's to spray the fertilizer on the field.

4. AI and Irrigation

Part of what ensures adherence to the concept of sustainable farming is the smart use of freshwater. AI tools such as WaPOR, Heliopas and CultYvate can help in this regard. While some of these AI tools simply monitor water usage productivity, others go a step further by automating irrigation workflows. Some farmers take things even further by analyzing historical irrigation data, comparing it with crop health and yield statistics, and deriving the most efficient consumption patterns that meet all conditions. Farms can even modify their irrigation plans with AI solutions such as Fasal to use free rain water, and AI in agriculture can ascertain the quality of the water too.

5. AI and Crop Protection

Perhaps the aspect of agribusiness with the most AI use cases is crop protection. There are several ways AI helps in crop protection:
Figure 34: AI and crop protection
Pest Attack Prediction: By examining drone or satellite imagery, observing new incoming data for possible signs of an attack, or uncovering patterns in pest activity, AI can predict pest attacks. This gives farmers the information they need to prevent attacks that could affect crop health, or to deploy pesticides to stop them. In India, Wadhwani AI has developed an early pest warning system specifically for cotton crop protection.

Optimization of Herbicide and Pesticide Consumption: The primary reason for optimizing the use of herbicides and pesticides is to keep farms sustainable and to guarantee the safety of our food. AI solutions can now identify weed and pest activity in farms and target the spraying of herbicide and pesticide at the weeds and pests themselves, instead of merely carrying out such tasks on fixed schedules.
AI solutions that can handle weed control are now developed by firms such as Blue River Technology, which builds its weed-control models on deep learning frameworks like PyTorch. Mobile apps such as Nuru, provided by the Food and Agriculture Organization of the United Nations and Pennsylvania State University, can now handle pest control tasks.
Intruder Identification: This aspect of farming is focused on ensuring the security of the field territory. With surveillance camera footage or drone imagery, AI solutions are now capable of monitoring and detecting unauthorized humans, birds, and wild animals that may damage crops. One of the leading solutions in the smart surveillance space is Twenty20 Solutions. This is one of the best ways AI solutions can help lower the rate of crime and theft on farms.

Monitoring Crop Health: This is also possible with soil and plant sensors, in addition to multispectral images from drones or satellites. The data obtained from such images enables AI solutions, or intricate unsupervised machine learning algorithms, to quickly identify and predict diseases in crops (a minimal sketch of one common vegetation index used for this follows below). This is undoubtedly an excellent way to lower the rate of crop loss while significantly increasing yield. One popular app for monitoring crop health in vineyards is VineView, which is also used for irrigation and harvesting.

Weather Prediction: Solutions in this category help farmers detect extreme weather conditions and protect crops from damage before heavy winds or rains begin. An excellent example of an AI solution that provides this service is the OneSoil app.
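To make the crop-health idea concrete, here is a minimal sketch of the Normalized Difference Vegetation Index (NDVI), one widely used index computed from multispectral imagery. The reflectance values below are made up for illustration, and commercial tools like the ones named above may well use richer, proprietary methods:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red); values near +1 suggest dense,
    healthy vegetation, while low values flag bare soil or stressed crops."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-6)  # guard against division by zero

# Hypothetical 2x2 patch from the near-infrared and red bands of an image.
nir_band = np.array([[0.60, 0.55], [0.20, 0.58]])
red_band = np.array([[0.10, 0.12], [0.18, 0.11]])
print(ndvi(nir_band, red_band))  # the low cell marks a spot worth inspecting
```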
6. AI and Harvesting

The last stage of the crop cycle we discussed earlier is harvesting. AI solutions for harvesting compare current footage of a farm with how the crops looked at the same point in the previous season's growing cycle, to accurately predict the right time to harvest a crop. Once harvesting time approaches, robots can commence the removal of crops from the field. In key farming regions of the United States such as Arizona and California, farmers have lost millions of dollars due to the unavailability of laborers. Harvest CROO Robotics has disclosed that its robot can harvest eight acres a day, replacing 30 human laborers. By using a robotic harvesting solution like the one provided by Harvest CROO, farmers can now pick strawberries in a way that lowers waste, improves food safety and cuts CO2 emissions by as much as 96 percent compared to traditional manual harvesting procedures.

AI and Animal Farming

The second aspect of farming is animal farming, a process that runs from the birth or purchase of an animal through to harvesting. AI can also play a vital role in increasing the efficiency of the processes involved in this type of farming. The process differs significantly depending on the end product – eggs, meat or milk. The illustration below is a simplified animal farming process. Please note that some steps may take place in a different order, while in some cases certain stages can be skipped completely.
Figure 35: Animal farming cycle
1. AI and Animal Birth/Purchase

AI farming solutions can analyze data from past reproduction cycles and predict the number of animals likely to be born. They can also estimate the cost of caring for those animals, to guide reproduction plans where required. If a company is buying animals instead, AI can calculate the number it needs. The focus of such calculations is on getting the highest yield while maintaining a balance with the number of animals a given farm can sustainably care for under the stipulated farming conditions.

2. AI and Animal Housing

AI can assist farmers in putting in place the perfect conditions for rearing animals, and even lower animal treatment costs, by making use of historical data on livestock disease correlated with different housing conditions such as animal density, humidity, temperature and several others. An excellent example of a firm that already takes advantage of AI in delivering solutions for smart cow housing is Lely.

3. AI for Animal Grooming and Cleaning

AI can also select and schedule grooming and cleaning procedures, using correlation capabilities and livestock sensors, to ensure that the farm neither engages in excessive maintenance (which wastes resources) nor harms the health of livestock through insufficient cleaning and grooming. One of the key factors that ensure livestock health is barn hygiene, which explains why companies such as GEA and Lely develop barn cleaning robots.

4. AI Solutions for Animal Feeding and Grazing

Finding the best balance between feeding and grazing is often a difficult task, but AI solutions can also help farmers in this area. With AI, farmers will know:
- The quantity of salt cows require to stay healthy, based on age, breed, etc.
- The kinds of food that work best for various types of livestock.
- How to manage pastures to ensure better grazing quality, whether through soil health monitoring or paddock rotation.
- How to schedule grazing around weather predictions to enhance the health of livestock.
One AI solution that helps with smart feed management, improves animal health and boosts milk production is Cainthus's ALUS.

5. AI and Health Monitoring

One of the major costs animal farmers bear is disease management, and this is where AI offers the most value. In fact, AI solutions for health monitoring enable animals to recover two to four times faster, using fewer antibiotics in the process. Apart from lowering the cost of disease management, AI can also significantly improve the safety of the food we consume. AI solutions can continuously and efficiently monitor things like metabolism, body weight and livestock vitals, as well as other parameters, and they can notify a vet if an animal's health gets worse (a toy sketch of this kind of alerting follows at the end of this list). To help prevent diseases, new AI solutions continuously search for new correlations between care conditions and livestock health. Examples of AI solutions already used for monitoring cow health are Ida, ALUS, and Rex, while Piguard monitors possible changes in pigs' health and wellbeing. Bees are not left out, as ApisProtect helps keep an eye on their wellbeing too.

6. AI and Breeding

Fertile-Eyes, a tool developed by Verility, is one of the top AI solutions for smart breeding. The tool not only identifies livestock's reproductive quality but can also assist with fertility management. Farmers can get excellent breeding advice from AI solutions like Ida, which provides other services in addition to animal reproduction. Breeding focused on improving product quality is another AI use case here.

7. AI and Harvesting

Of course, a good number of harvesting procedures are now automated by machines, including egg-picking robots, meat finishing, and milking. Apart from enabling and facilitating harvesting, AI can also provide ways to maximize the quality of output. In fact, by using 3D imaging, AI can determine the live weight of pigs, chickens and beef cattle to predict the ideal harvesting points and eliminate over-finishing costs. AI can also maximize milk outputs by estimating live weight, lameness in dairy cows, milking traits and other factors.
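Here is the toy sketch promised above – a minimal, illustrative take on vitals-based alerting. The temperature readings and the z-score threshold are hypothetical; commercial systems like the ones named above fuse many sensors and use far richer models:

```python
import statistics

def vitals_alert(history: list[float], latest: float, z_limit: float = 2.5) -> bool:
    """Flag the latest reading if it drifts too far from recent history."""
    mean = statistics.mean(history)
    spread = max(statistics.stdev(history), 0.05)  # floor to avoid over-sensitivity
    return abs(latest - mean) > z_limit * spread

# A week of body-temperature readings for one cow, in degrees Celsius.
recent_temps = [38.6, 38.5, 38.7, 38.6, 38.8, 38.5, 38.6]
if vitals_alert(recent_temps, latest=39.9):
    print("Alert: temperature spike - notify the vet")
```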
Challenges Facing AI in Agriculture
Figure 36: Challenges of AI in agriculture
Despite the amazing use cases already in existence, you might conclude that AI's popularity in agriculture has already peaked. In reality, the use of AI in farming is growing, but it is not there yet. AI's reputation as a risky technology also appears to affect its level of adoption in agriculture and has limited its growth. The major factors behind the slow adoption of AI in this sector include:

Cost: The cost of AI solutions is the first major challenge facing increased adoption. AI is undoubtedly high-end technology that needs time and skilled workers. That said, if you are in agribusiness and worried about the cost of AI solutions, you may be surprised to learn that it is not really as high as most people believe.

The Prolonged AI Adoption Process: It is crucial for farmers to realize that AI solutions are just an advanced version of what were once simpler methods of gathering, processing and monitoring field data. Therefore, AI needs the appropriate infrastructure to function optimally. This is the main reason why some farms that have established only simpler technology find it hard to take the next step in adopting new technologies. In fact, software companies find this issue a serious challenge. The solution is for software firms to approach farmers gradually, first providing them with simpler technology; once farmers are used to a less complicated solution, stepping up to solutions with AI features becomes reasonable.

No Experience with Emerging and New Technologies: First, I need to point out that the agriculture sectors of developed countries like the US and those in Western Europe differ significantly from those in developing countries. While it would be easy to sell emerging technologies in certain regions, selling them in locations where agricultural technology is not common may be difficult. The best approach for tech companies interested in doing business in emerging agricultural economies is a proactive one. Apart from making their products available, they also need to help with staff training and provide continuous support for agribusiness owners who are prepared to embrace their innovative solutions.

Farmers are free to modify many details of the way they manage their agribusiness by leveraging the AI capabilities in agriculture that are already available. They can now automate workflows that were once time-consuming and uncover opportunities for optimizing most of their farming activities. Perhaps the most crucial task AI performs in farming is predicting yield and optimizing the agricultural conditions that can increase it further.
Chapter 11: Impact of AI in Marketing & Advertising
Key Takeaway
- Some AI solutions can also partially or fully create ads for you in line with what is best for your business goals.
- AI marketing involves the use of AI solutions to make automated decisions based on data acquisition, data analysis and deeper observations of the audience or of economic trends that may influence marketing efforts.
- AI systems are smart and can analyze data at scale, predicting the meaning of that data after learning from "training data."
- One of the major use cases for AI in advertising is performance optimization.
- In the early stages of your AI program, ensure that your AI platform does not and will not at any point violate the conditions of acceptable data use for the sake of personalization.
The field of marketing is next in line as we explore its various AI use cases. Organizations and their marketing teams are rapidly embracing AI solutions to improve customer experience while encouraging operational efficiency. Marketers are gaining an increasingly refined and comprehensive understanding of their target audiences. With the insights obtained from this process, they are better equipped to drive conversations and significantly lower their marketing team's workload. Although there are several commercial platforms that leverage AI to create ads with no human interference, AI is playing a wider role than just ads. It is transforming the advertising world at several levels – the creation of ads, audience targeting, ad-buying, and more. With dozens of use cases for AI in this space, more companies are now leveraging the technology for these and similar tasks.
In terms of digital advertising, this will have a significant impact on a brand's competitive advantage. It will also have profound implications for the careers of marketers in charge of planning and running ad campaigns. AI systems are smart and can analyze data at scale, predicting the meaning of that data after learning from "training data." An AI-enabled solution can recommend the leads with the best chances of closing and those you should talk to next. It can also recommend leads based on your site behavior and even predict how to score leads. A system that makes use of AI is not necessarily accurate, however. What makes an AI solution accurate is the quality of the available data – whether it makes use of the right data and was developed to utilize that data in a useful manner at scale. So, we can conclude that the output of an AI system is only as good as its input. However, some AI systems are capable of improving their accuracy over time, whether through human training or, as I mentioned before, by training themselves.
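To make the lead-scoring idea concrete, here is a minimal sketch of training a predictive scorer on historical outcomes. The features and the toy training data are hypothetical; a real CRM would draw on far richer activity logs:

```python
from sklearn.linear_model import LogisticRegression

# Each row: [pages viewed, ebooks downloaded, requested a demo (0/1)].
# These features and values are made up purely for illustration.
X_train = [
    [12, 1, 1],
    [3, 0, 0],
    [8, 2, 1],
    [1, 0, 0],
    [15, 3, 1],
    [2, 1, 0],
]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = the lead eventually closed

model = LogisticRegression().fit(X_train, y_train)

# Score a new lead: the predicted probability of closing becomes its score.
new_lead = [[9, 1, 1]]
score = model.predict_proba(new_lead)[0][1]
print(f"Lead score: {score:.0%}")  # e.g. route the hottest leads to sales first
```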
The reason the performance of some AI solutions improves over time is the quality of the data they analyze: the more data they are exposed to, the more information an AI solution has to make predictions. Some traditional CRM systems are designed to flag leads that take high-priority actions on a website, such as requesting a consultation or downloading an eBook, and then assign a lead score to the contact based on those actions. AI-powered CRM systems work differently. They can analyze how well the lead scoring rules have performed over time by comparing them against historical data, and they do this without your intervention. The AI solution can also adjust scores and establish new ones based on what it identifies as working. This is one of the reasons for the increased adoption of AI in marketing and advertising. Another factor that has played a key role in the increased use of AI advertising solutions is the data obtained from ad platforms, marketing automation, CRM systems and several others. So, what exactly does AI marketing mean? It is the use of AI solutions to make automated decisions based on data acquisition, data analysis and deeper observations of the audience or of economic trends that may influence marketing efforts. AI is commonly adopted in marketing efforts where speed is essential. To identify the best ways to communicate with customers, AI tools learn from data and customer profiles so they can serve customers with properly timed and tailored messages without the intervention of marketing teams, which leads to increased efficiency. Presently, many marketers use AI to augment marketing teams or to execute jobs that are more tactical and do not require much human nuance. Check out the major AI marketing use cases in the diagram below.
BENEFITS OF AI IN ADVERTISING

By providing enhanced user experiences and drastically reducing the rate of human error, AI provides a real competitive advantage to advertising campaigns. The reason AI in advertising is getting the attention of so many organizations lately is its numerous potential benefits. Available data from an IDC spending guide shows that global spending on AI is projected to grow from $50.1 billion in 2020 to $110 billion in 2024. So, why are many organizations suddenly increasing their investment in AI advertising solutions? It mainly has to do with the amazing benefits AI has to offer the sector. Here are some of the key benefits of AI in this industry.

Provides Better Personalized Experiences

Every customer desires experiences that are tailored to them; in fact, Forrester revealed that in the United States, 80 percent of customers are prepared to trade some of their personal information as long as they enjoy an improved and more personalized approach from retailers. What is the purpose of personalization? It simply enables businesses to build relationships with their customers or clients across touchpoints. This could be achieved via conversational marketing, or by optimizing creative messages that properly connect with audiences. As a business, when you personalize experiences, you will see several benefits, especially in terms of increased brand loyalty and more relevant advertisements.

Choosing the Perfect Influencers

One of the most effective ways to increase brand loyalty and boost sales is influencer marketing. If your brand desires more personalized connections, then influencer marketing is one of the options available, and AI can help identify the influencers whose audiences best match your brand.

Faster Decision Making

Marketers can easily and quickly make better decisions when they have the right insights, especially in a marketplace that is always changing. It is not enough to come up with creative ads; it is equally crucial to ensure that your ads remain relevant to your target audience. With AI, you can make quicker decisions in tailoring your messaging and modify your campaign focus where required.

Enhance ROI

Among the greatest challenges advertisers and marketers face is estimating how effective their campaigns were and what they can do to improve their results. Companies can clearly determine what is working for them and what is not by leveraging analytics. Your marketing team can even cut down on waste and enhance marketing ROI by delivering the right messaging to your target audiences.

Target the Right Audiences

It has always been a big challenge for marketers to ensure that their ads are targeted at the right individuals. AI solutions help make campaigns more effective and actionable by analyzing different datasets to ascertain the likelihood of a potential customer taking a particular action. AI solutions can also develop look-alike audiences based on past campaigns, helping to reach new contacts and create an excellent sales funnel.
It is also possible to take advantage of location data and AI to target specific people close to your business, helping to boost foot traffic.

Budget and Target Optimization

One of the major use cases for AI in advertising is performance optimization. Commercially available solutions use machine learning algorithms to analyze how ads perform across certain platforms and go a step further to provide recommendations on ways to improve performance. Some of these AI marketing platforms leverage AI to intelligently automate actions you believe you should take in line with best practices, which saves time. Some AI platforms can even reveal performance issues you never knew you had. In the most advanced cases, AI can automatically manage ad performance as well as spend optimization. This implies that such AI platforms can make decisions independently regarding the best ways to attain your advertising KPIs and recommend a fully optimized budget (a small sketch of the underlying idea appears at the end of this section). Although the ad copy and creative are important, your ad targeting is equally crucial, if not more so. Businesses now have a wide range of consumer data available thanks to platforms like Facebook, Google, Amazon, and LinkedIn, and you can now target audiences through mobile and desktop advertising automatically. Of course, you would agree with me that getting these things done manually is no longer an efficient option. We now have AI systems that examine a business's past audiences and ad performance, weigh them against its KPIs and incoming real-time performance data, and discover new audiences that are likely to purchase from the business. One excellent example of such a tool in the AI-enabled ad space is Albert.

Creating and Managing Ads

Some AI solutions can also partially or fully create ads for you in line with what is best for your goals. Some of the available social media ad platforms already have functionality that leverages intelligent automation to suggest ads you can run in line with the links you are promoting. Third-party tools can also write ad copy using smart algorithms. Such systems leverage two AI-enabled technologies – natural language generation (NLG) and natural language processing (NLP) – to write ad copy that is as good as what humans write, or in some cases better. What is even more exciting is that they can write the ad copy at scale and in a fraction of the time. A good example of such a tool is Phrasee, which can automatically write better email subject lines than humans. This same AI-enabled functionality has also been adapted to write Facebook ads and push notifications automatically. Pattern89 is another excellent tool, capable of predicting winning Instagram and Facebook ad creative before users even launch the ad campaign. By using this type of tool, you can determine what works for your campaign before you start spending funds on running the ads.
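As promised above, here is a minimal sketch of the kind of logic behind automated budget optimization: a Thompson-sampling bandit that shifts impressions toward the ad that appears to perform best. The ad names and click-through rates are made up, and real platforms use far more sophisticated models:

```python
import random

true_ctr = {"ad_A": 0.021, "ad_B": 0.034, "ad_C": 0.015}  # unknown in real life
clicks = {ad: 0 for ad in true_ctr}
impressions = {ad: 0 for ad in true_ctr}

for _ in range(10_000):  # each iteration allocates one impression
    # Sample a plausible CTR for each ad from its Beta posterior.
    sampled = {
        ad: random.betavariate(clicks[ad] + 1, impressions[ad] - clicks[ad] + 1)
        for ad in true_ctr
    }
    chosen = max(sampled, key=sampled.get)  # show the ad that currently looks best
    impressions[chosen] += 1
    if random.random() < true_ctr[chosen]:  # simulate whether the user clicks
        clicks[chosen] += 1

print(impressions)  # the budget drifts toward ad_B as evidence accumulates
```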
AI MARKETING AND ITS CHALLENGES

Of course, everything in existence has a downside, and this is also the case for AI marketing. Modern marketing depends to a great extent on an in-depth knowledge of the preferences and needs of customers, and on being able to act on that knowledge quickly and effectively. Its ability to make data-driven decisions in real time is one of the factors that has brought AI to the forefront for marketing stakeholders. But it is equally important for marketers to know the best way to integrate AI into their operations and campaigns. Like its other use cases, AI marketing is still in its early stages, which means it faces several challenges. If you are thinking of integrating AI into your campaigns, then you need to be conscious of the challenges you may encounter along the way. Here are a few such challenges and possible ways to deal with them.

1. Training Time and Quality of Data

Presently, AI tools lack the ability to determine automatically which actions will best meet marketing goals. To learn organizational goals, understand overall context, historical trends and customer preferences, and finally establish expertise, AI tools need time and quality training. This demands not just a lot of time but also data quality assurances: the quality of the data used in AI training determines the quality of the output. Failure to train AI tools with high-quality data that is representative, accurate and timely will cause the tools to make low-quality decisions that do not reflect the true desires of consumers, reducing the value of such tools. Invest in the time and data quality of AI training to ensure quality output.

2. Privacy

There has been an increased level of awareness of privacy issues, with the likes of Facebook and Google facing several lawsuits regarding the violation of people's privacy. Regulators and consumers are increasingly cracking down on the ways firms utilize their data. It is, therefore, crucial for marketing teams to ensure the ethical use of consumer data and comply with relevant standards like GDPR. If you fail to do this, you risk reputational damage and heavy penalties. When it comes to AI marketing, privacy is a major challenge: if AI tools are not programmed to adhere to stipulated legal guidelines, they may eventually exceed what is regarded as acceptable use of consumer data in the name of personalization.

3. Getting Buy-In

Marketing teams may find it difficult to demonstrate the value of AI investments to business stakeholders. It is often straightforward to quantify KPIs like efficiency and ROI, but demonstrating how AI improved brand reputation or customer experience is not always obvious. Bearing this in mind, marketing teams must ensure they possess the measurement capabilities to attribute these qualitative gains to AI investments.

4. The Ever-Changing Marketing Landscape

The increasing popularity of AI in marketing, as in other industries, also brings significant disruption to daily marketing operations. Marketers need to determine which tasks are likely to be replaced by AI tools and which new ones will be created. In fact, one study found that marketing technology could replace about six out of every ten current marketing analyst and specialist jobs. The smartest way to handle such displacements is to evaluate which roles are likely to be displaced and make the necessary adjustments.
How can you Use AI in Marketing?

When deciding to leverage AI in your marketing campaigns and operations, the starting point is a thorough plan. The plan will ensure that your marketing teams lower the possibility of making costly mistakes and get the maximum value from the AI investment in the least time. You need to consider several key factors before you implement an AI tool for your marketing campaigns.
Set Goals Similar to other marketing programs, start by establishing clear goals and marketing analytics from the beginning. You and your team should identify areas within operations
or campaigns (such as segmentation) that AI could significantly improve. Once you have identified such areas, establish straightforward KPIs that will help reveal the level of success of the AI-supported campaign. This is even more crucial for qualitative goals like “improving customer experience.” Privacy Standards In the early stages of your AI program, ensure that your AI platform does not and will not at any point violate the conditions of acceptable data use all because of personalization. It is extremely crucial to establish privacy standards and program them into your platforms as required. This will enable you to comply with the necessary regulations while building consumer trust. Quantity and Sources of Data Part of the things that marketers require to get started with AI marketing is the availability of a vast amount of data. The data is required to train the AI tools in external trends, consumer preferences as well as other factors that may have a huge impact on the level of success of your AI-powered campaigns. You can obtain the data from marketing campaigns, website data and also from your CRM. Marketers can also include third-party data to supplement the one they have and this includes weather data, location data as well as other external factors that may be considered while making a purchasing decision. Hire a Data Science Talent The need for data scientists has witnessed a gradual increase over the years even as data has proven to be a major factor in business growth. Unfortunately, a good number of marketing teams do not have employees that possess the right AI and data science skills. This is also the reason why they find it hard to work with vast amounts of data that will provide insights. If organizations must start their AI marketing on the right track, then there may be a need to collaborate with third-party firms that can help in data collection and analysis. This will be useful in training AI programs and also enhance ongoing maintenance. Maintain The Quality of Data While making use of more data, machine learning programs will also learn how to make effective and accurate decisions. But if the available data is not free from errors and also not standardized, then the insights will be useful and AI programs may eventually make decisions that will negatively affect marketing programs. Before the implementation of AI marketing, it is crucial for marketing teams to coordinate with data management teams in addition to other business segments to establish processes for both data cleansing and maintenance. There are factors to bear in mind when doing this and they include: Completeness Accuracy
Timeliness
Transparency
Representativeness
Relevance
Choosing the Right AI Platform
Another essential step in the process of kick-starting an AI marketing program is choosing the right platform or platforms. Marketers must be discerning enough to recognize the gaps a given program is attempting to fill, and choose solutions strictly on the basis of their capabilities. The solutions you choose need to align with the goals your marketing campaigns are trying to achieve. If you pursue speed and productivity goals, then you will need tools with entirely different functionalities than the ones used for improving customer satisfaction. Depending on the algorithm a marketing team is using, they may get a lucid report on why a particular decision was made and the specific data that influenced it; however, algorithms functioning on a more advanced level may fail to provide definitive reasoning. There are several use cases of AI in marketing campaigns in different industries – entertainment, retail, financial services, healthcare, government, and several others. Every single use case delivers different outcomes, which could be enhanced customer experience, improved campaign performance or greater efficiency in marketing operations.
ECONOMIC IMPLICATIONS OF AI
A look at recent advances in AI from an economic point of view shows that it can either help organizations to lower their cost of “prediction” or significantly enhance the quality of available predictions at the same cost. Of course, various aspects of decision making differ from prediction; nevertheless, AI's widely accessible, inexpensive, and improved prediction has the potential to transform businesses and industries. This is simply because prediction is an input into most of our activities. More opportunities for making use of predictions have already emerged even as we continue to witness a drastic drop in the cost of AI predictions. Have you noticed that prediction has always been a part of human decision-making? Consider a doctor who, in the course of medical diagnosis, uses the data provided about a patient's symptoms to fill in the missing information regarding the possible causes of those symptoms. Prediction actually involves the process of using available data to provide the missing information. So, when light from a newly illuminated room reaches your eyes, your brain instantly fills in the missing information of a label in the process of object classification. Since AI offers us prediction at a lower cost, we will certainly have a good number of applications, since prediction remains a primary input in our daily decision-making, which takes place all the time and virtually in all locations – at home, in the office, while commuting, etc. While hiring, human resource managers make crucial decisions that can play a role in the level of success of the business. Also, managers of businesses make decisions regarding their investments, strategies and other less crucial ones such as the particular meetings they will attend and the things they should
say during the meeting. What about judges? Of course, they make extremely important decisions regarding the innocence or guilt of people, decisions on procedures and sentencing, in addition to smaller decisions regarding a certain motion or paragraph. People make decisions about whether to marry and whom to marry, whether to eat and what to eat, the type of song to play at any given time, etc. But there is one major challenge that every one of us must deal with while making all these decisions – uncertainty. One thing that helps to reduce the level of uncertainty is prediction, and this explains why, as an input into all our decisions, prediction can usher in new and exciting opportunities.
AN IDEAL SUBSTITUTE FOR HUMAN PREDICTION
One of the economic concepts that is often considered when discussing the economic impact of AI is that of substitution. Machine prediction has over the years proven to be an ideal substitute for human prediction. Think of how substitution works with ordinary goods: a significant drop in the price of an item such as coffee will not only motivate people to purchase more of the commodity, they will also purchase fewer substitute products like tea. So, as machine prediction becomes more accessible and less expensive, it will substitute for the humans who were previously involved in prediction tasks. As you would have guessed, this will lead to a dramatic drop in the demand for human labor in prediction-related work. This is undoubtedly one of the major impacts of AI on our workforce. Several years ago, the creation of computers meant that fewer people would be employed to perform arithmetic as part of their tasks. This also applies to AI, as fewer individuals will be engaged in prediction tasks. One of the tasks that is already being taken over by AI is transcription – the conversion of spoken words into text. AI prediction is already filling in missing information more accurately and faster than humans involved in transcription.
Investments in AI Start-ups
The startup investing space is currently expensive and crowded, as venture capitalists try to preempt one another with the goal of channeling their funds to innovative, hot companies before their competitors do. Based on recent figures, the AI startup market seems to be hotter than most technology niches out there. Experts believe that the Microsoft–Nuance deal is just a sign of a more competitive and active market for startups in the AI space. Investors competing with one another to fund AI-powered startups are not merely betting on a future that may never arrive. They are convinced that the future is already here, and there is data available to confirm it. In fact, records from a Signal AI survey which included 1,000 C-level executives reveal that about 92 percent of these executives feel that organizations should depend more on AI to enhance their decision-making process. Interestingly, 79 percent of those
surveyed disclosed that organizations are already leveraging AI to improve their decision-making process. The data from the survey implies that space still exists for more companies to learn how to take advantage of AI-enabled software solutions. The share of executives who feel companies need to leverage AI to boost their decision-making processes points to a huge total addressable market for startups building software on the foundation of AI.
SECTION III: ARTIFICIAL INTELLIGENCE & NEW TECHNOLOGIES
Chapter 12: Artificial Intelligence & Big Data
Key Takeaway
If you must take advantage of machine learning and its capabilities, then you also require the right raw materials – big data.
The value of big data lies not really in the volume of data you may have, but precisely in what you do with the data.
Big data has become very valuable and is seen as capital.
You can definitely accomplish several business-related tasks by leveraging the synergy between AI and big data.
AI's ability to identify data trends is only beneficial as long as it is capable of adapting to the fluctuations that exist in those trends.
I believe that the name “Big Data” already gives you an idea of what it means. It has to do with data that is so fast, large or complex that processing it with traditional methods would be extremely hard or impossible. For several decades, businesses, organizations and government institutions have stored vast amounts of information for various purposes, especially for analysis. However, in the early 2000s, the concept of big data received more attention when analyst Doug Laney defined the concept, and his definition is now widely accepted. He defined it based on three Vs: volume, velocity and variety.
Velocity: The rapid growth of the Internet of Things implies that businesses will gain access to data at an outstanding pace, and they need to handle it properly and in a timely manner. The need to deal with
such an influx of data promptly is driven by sensors, RFID tags, and smart meters.
Volume: There are multiple sources of data for organizations, including industrial equipment, social media, videos, business transactions as well as IoT devices. Storing all this data would not have been an easy task in the past, but it has been simplified by the availability of cheaper storage platforms.
Variety: Organizations also collect data in all kinds of formats; it could be structured, numeric data commonly stored in traditional databases, or unstructured emails, audio, financial transactions, videos and stock ticker data.
Over the years, two more Vs have been added to the first three: Value and Veracity. When it comes to value, the intrinsic value of data is almost useless until it is discovered. It is equally crucial to consider the truthfulness of your data and the extent to which you can depend on it. The truth is that big data has become very valuable and is seen as capital. In fact, a significant portion of the value that the biggest tech firms around the globe offer is obtained from the data they have. They are constantly analyzing the available data to create innovative products that are more efficient. The cost of data storage has dropped dramatically over the years, mainly because of recent technological advancements that have lowered the cost of both storage and computing. So, it is quite cheap to store more data now, and as big data becomes more accessible and cheaper, it is also having a remarkable impact on decision making, as you can now make more accurate business decisions. Although analyzing big data is a benefit in itself, there are other ways to find value in it. Discovering the value in big data is a process that needs business users, executives and insightful analysts who can ask the appropriate questions, identify patterns, make assumptions based on the information available and predict behavior.
BRIEF HISTORY OF BIG DATA
While the term “big data” is relatively new, we can trace its origins to the 1960s and 70s, when the first data centers started emerging. Around 2005, individuals, governments and businesses started to realize that online platforms like YouTube and Facebook were generating vast volumes of data. Consequently, Hadoop, an open-source framework developed to help store and analyze big data sets, emerged in 2005. Hadoop and other open-source frameworks played a vital role in the early growth of big data, since they ensured that people could easily work with data and store it at a cheaper rate. But things have since changed, because the volume of big data has jumped over the years. Now, IoT is also a major source of big data, and this will continue to increase as more objects get connected to the internet.
Why Big Data?
Big data is extremely important; in fact, it is indispensable in the development of new technologies. Its value lies not really in the volume of data you may have, but precisely in what you do with the data. You can analyze data from any source to get the right answers that will enable you to enjoy various benefits:
You can definitely accomplish several business-related tasks by leveraging the synergy between AI and big data. Some of the tasks you can accomplish include:
Recalculating risk portfolios within a short time.
Ascertaining the major causes of failures, challenges and defects almost in real-time.
Being able to identify fraudulent behavior even before your organization is affected.
Taking customers' buying habits into consideration when generating coupons at the point of sale.
THE RELATIONSHIP BETWEEN BIG DATA AND AI
Individuals, businesses, enterprises and government organizations all have access to information regarding likes, dislikes, personal preferences, consumer habits and activities. Several decades ago, this was completely impossible, but it is now a reality thanks to advancements in AI and other emerging technologies. Various sources of potentially insightful data, such as customer relationship management (CRM) systems, product reviews, shared content, social activity, loyalty/rewards programs, tagged interests, online profiles and several others, also contribute to the big data pool. So, what is the relationship between AI and big data?
Consumer Information
Perhaps AI's most significant feature is being able to learn very fast. AI's ability to identify data trends is only beneficial as long as it is capable of adapting to the fluctuations that exist in those trends. By recognizing anomalies in the data, AI can predict which aspects of customer feedback are meaningful and make the necessary adjustments. The primary reason why big data and AI appear to be inseparable rests on artificial intelligence's ability to work with data analytics. Machine learning, as well as deep learning, extracts information from all data inputs and uses that information to create new rules for future business analytics. It is also crucial to bear in mind that poor-quality big data is a big issue: if the data going in is not good, the results coming out will not be good either.
Business Analytics
Available research suggests that combining AI and big data can automate 80 percent of all physical tasks, 70 percent of data processing tasks and 64 percent of data collection work. What this means is that the combination of both concepts has what it takes to positively influence the workplace while making remarkable contributions to business and marketing activities too. Supply chain operations depend on data, which is why they are particularly interested in AI developments that deliver real-time insights on client feedback. Businesses and corporate organizations can function based on the flow of new and useful information. Before you can run data through a deep learning or machine learning algorithm, you need to agree on a methodology for data mining and structure. To do this, you will require the services of professionals in business data analytics. Organizations that are serious about acquiring sufficient insights from data analytics usually value business data professionals.
BENEFITS OF THE SYNERGY BETWEEN BIG DATA ANALYTICS AND AI ALGORITHMS
How exactly are businesses benefiting from the synergy between the two technologies? There are several ways businesses can benefit from combining AI and big data, but I will focus on the areas that are presently most common in business and society.
Banking and Securities
The banking sector leverages the synergy between the two technologies to help monitor financial market activities. One of the strategies the Securities and Exchange Commission has adopted to help prevent illegal trading activities in financial markets is to leverage network analytics as well as natural language processing. They obtain trading data for predictive analysis, risk analysis and high-frequency trading. They also use the combination of AI and big data for card fraud detection, customer data transformation, early fraud warning, archival and analysis of audit trails, enterprise credit reporting and several others.
Agriculture
Big corporations and agricultural organizations also increase their monitoring capability courtesy of this synergy. Earlier, in chapter 10, we saw that farmers use AI for counting and monitoring their produce at every point of growth until maturity. This enables them to recognize defects or weak points before they increase and spread to other areas, resulting in huge losses. Data obtained from drones or satellite systems is analyzed by AI solutions to enable farmers to make informed decisions.
Communication, Media and Entertainment
A lot of data is obtained from various social media websites, and businesses leverage AI to help analyze such information and customer behavioral data to enable them to create customer profiles. These profiles serve as a tool for measuring content performance, recommending content and creating content that is suitable for a diverse target audience.
Education
AI also synchronizes with big data analytics to achieve a wide range of goals. For instance, schools use both technologies to track and analyze the online activities of a student in the school system. They can analyze how long the student spent on each page in the system and each student's overall progress over time. The educational sector also syncs AI with big data to measure how effective teachers are by analyzing their performance based on the number of students, student demographics, student aspirations, different courses, behavioral patterns, etc. For more on how different sectors benefit from AI and big data, check out section II.
Leveraging the AI and Big Data Synergy
Considering the benefits of using AI and big data, one factor that will play a great role in taking advantage of these benefits is finding the right tech partner. This is truly critical for organizations that intend to upgrade their data analytics capabilities. If you must take advantage of machine learning and its capabilities, then you also require the right raw materials – big data. You must collect, integrate and organize the available data. Here are some suggestions to help you utilize this amazing technology.
Broaden your Horizons
You just have to keep an open mind and try not to limit what you expect from the available data. Did you know that experts in this field usually get more insights out of the data than the average customer could ever think of? This is because they do not make assumptions; instead, they keep an open mind. It is quite essential to be clear on where and how to unearth value amid the complexity of the available data.
Patience is Crucial
A prerequisite for the effective integration of new data analytics tools is to analyze the nature of internal systems and data. This usually involves a combination of both structured and unstructured data. You need to be intimately familiar with the nature of your internal systems.
Begin Small and Build
Whether the initial analytics task is visualization, augmentation or automation, the best thing to do is to begin with a small goal and gradually build from there. Scoring a quick win will help to showcase value and allow you to establish a foundation of trust and technical capacity, with the possibility of exciting new achievements.
Keep it Simple
Sometimes, we attempt to make use of trendy, elaborate solutions (that often require more computation power) in a bid to resolve most of our data analytics challenges. However, we end up discovering that we could have achieved the same results with simpler algorithms. So, you need to apply the “start small” principle to technology too. Avoid complicated, expensive and computation-hungry solutions.
Chapter 13: The Artificial Intelligence of Things (AIoT)
Key Takeaway
The entire AI and autonomous vehicles system also involves the integration of AI and IoT devices.
While the Internet of Things assists us in re-imagining daily life, AI is the technology that functions as the driving force behind IoT achieving its full potential.
IoT has to do with the billions of physical smart devices and objects located in different parts of the world that are all connected via the internet and that obtain and share data among themselves.
IoT has progressed from just small objects to a bigger scale, as we now have emerging smart cities in different regions with sensors that enable us to better understand our environment and also control it.
The incorporation of AI into IoT applications will undoubtedly lead to increased operational efficiency, because machine learning can easily process data obtained from IoT devices and make predictions in ways that humans are not capable of doing.
If you have been following the development of new technologies over the past decade, you would agree with me that Artificial Intelligence and the Internet of Things are two separate and powerful technologies that have witnessed increased popularity and adoption. But the combination of both technologies creates the Artificial Intelligence of Things (AIoT). I will not assume that you already know what IoT means.
SO, WHAT EXACTLY IS IOT?
It has to do with the billions of objects or physical smart devices located in different parts of the world that are all connected via the internet and that obtain and share data among themselves. The creation of super-cheap computer chips, in combination with the ubiquity of wireless networks, has made it possible to integrate almost anything, from an airplane to a small pill, into the IoT. Connecting all these devices and attaching sensors to them adds a level of digital intelligence to all of them and also empowers them to communicate in real-time without human interference. Without these sensors, such devices would simply be regarded as dumb hardware. The truth is that IoT is transforming our world into a more responsive and smarter one – IoT is simply melding the physical and digital universes.
Interestingly, we can now transform any physical object into an IoT device as long as we can connect it to the internet and communicate with or control it. So, an excellent example of an IoT device is a smart thermostat in your office, a motion sensor or even a lightbulb that you can switch on with an app on your smartphone. In fact, something as fluffy as your child's toy can be an IoT device, as can a driverless car. Larger objects like a jet engine can also be filled with many smaller IoT components – potentially thousands of sensors, all obtaining and transmitting data to ensure the object functions efficiently. IoT has even progressed from single objects to a bigger scale, as we now have emerging smart cities in different regions with sensors that enable us to better understand our environment and also control it. I need to point out also that the
term IoT is primarily used for devices that we would not ordinarily expect to possess an internet connection but that are capable of communicating with the network without human control. This implies that a PC is not really regarded as an IoT device, and the same applies to a smartphone, despite the fact that these kinds of devices are filled with sensors. However, we can still regard things like a fitness band, a smartwatch or any wearable device as IoT devices. When AI is integrated to create AIoT, here is what it looks like.
Why IoT and AI?
Well, you would agree with me that our world is fast changing, especially as more people, countries and organizations adopt IoT. With the help of IoT, we can now capture vast amounts of data from different connected devices. But you would also agree that effectively collecting, processing and analyzing the volume of data we get from billions of IoT devices is quite a complex task. If we must experience the full potential of IoT devices, then we need to integrate new technologies. Undoubtedly, our industries, economies and businesses will be transformed by the integration of IoT and AI. While the Internet of Things assists us in re-imagining daily life, AI is the technology that functions as the driving force behind IoT achieving its full potential. Starting from one of the most basic applications of merely keeping track of our fitness level to more intense use cases across industries, businesses and cities, the increasing partnership between IoT and AI implies that we may witness a smarter future sooner than we can ever imagine. IoT and AI are the two superpowers of innovation. Interestingly, as IoT devices interact and communicate daily, they collect and exchange information regarding all their activities online, and this serves as a great source of data. Records show that, on a daily basis, IoT devices are capable of generating 1 billion GB of data. Also, available data from Statista shows that we will experience a dramatic jump in the number of IoT-connected devices globally, from an expected 13.8 billion in 2021 to about 30.9 billion by 2025, as seen in the graph below.
Figure 37: IoT & non-IoT active device connections globally 2010-2025, by Statista (in billions)
AI is now playing an increasing role in IoT applications and deployments, and in the past two years, there has been a remarkable jump in the number of investments in and acquisitions of startups that merge IoT and AI. In fact, major vendors of IoT software now provide integrated AI capabilities such as machine learning-based analytics. In this context, AI's value lies mainly in its ability to extract insights from data. Machine learning helps to automatically recognize patterns and identify anomalies in the data generated by smart sensors and devices – information such as sound, air quality, pressure, vibration, temperature and humidity. Compared to traditional business intelligence tools, which monitor for numeric thresholds to be crossed, this approach can make operational predictions up to 20 times earlier and with greater accuracy. The truth is that the data collected from the network of IoT devices would have limited value without AI-powered analytics. On the other hand, we may never realize the true potential of AI systems, especially in business settings, without the influx of IoT-generated data. I guess you are now getting the true picture of why AI is required to enjoy the full potential benefits of IoT. The powerful combination of both technologies will certainly transform industries and enable businesses to make intelligent, smart decisions courtesy of the vast amounts of data available every day. You can see AI as the brain that makes the smart decisions controlling the entire system, while IoT is the body or the digital nervous system; the combination of the two can create new business models, new revenue streams and services as well as new value propositions. It is a combination that offers us intelligent and connected systems that can self-correct and self-heal.
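To make this concrete, here is a minimal sketch of machine learning-based anomaly detection on IoT telemetry, written in Python with scikit-learn's IsolationForest. The sensor values, units and the 1 percent contamination setting are illustrative assumptions, not figures from the text:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated "healthy" telemetry: 1,000 readings of
# [temperature in C, vibration in mm/s, humidity in %].
normal = rng.normal(loc=[70.0, 2.0, 45.0], scale=[2.0, 0.3, 5.0], size=(1000, 3))

# Learn the normal pattern, then flag readings that deviate from it.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

new_readings = np.array([
    [71.2, 2.1, 44.0],  # typical reading
    [92.5, 7.8, 44.0],  # overheating plus abnormal vibration
])
flags = model.predict(new_readings)  # +1 = normal, -1 = anomaly
for reading, flag in zip(new_readings, flags):
    status = "ANOMALY" if flag == -1 else "ok"
    print(f"temp={reading[0]:.1f}C vib={reading[1]:.1f}mm/s -> {status}")

A model like this, trained on a machine's normal operating signature, is what allows an AIoT system to raise an alert long before a fixed numeric threshold would be crossed.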
Well, the only natural thing is that as the number of these devices increases, the more data we will have too, and that is precisely where AI comes in. AI can bring its learning capabilities to the connectivity of the Internet of Things. Three emerging technologies that will empower IoT are:
1. Artificial Intelligence: Programmable systems and functions that allow devices to learn, reason and even process information the way humans do.
2. 5G Networks: Fifth-generation mobile networks with near-zero lag for high-speed, real-time data processing.
3. Big Data: Vast amounts of data that are now being processed from countless internet-enabled devices.
Figure 38: Emerging Techs Powering AI
The truth is that interconnected devices are changing the way humans communicate with their devices in virtually every location – at home, in the office, on the road, etc. – and this process leads to the creation of what is now regarded as the Artificial Intelligence of Things (AIoT). The combination of AI and IoT results in AIoT – a smart and connected network of devices that communicate seamlessly with each other courtesy of powerful 5G networks. Indeed, this will be a quicker and better way to unleash the power of data.
THE ENTIRE WORKFLOW OF AIOT The first generation of cloud-based IoT delivered five major capabilities that have so far proven to be extremely useful. These capabilities include:
Figure 39: Internet of Things
However, the melding of AI and IoT adds a new and crucial ability to these connected devices – the ability to take action without human intervention. AI acts on the patterns and correlations obtained from telemetry data, and by taking the right actions based on that data, it has filled a critical gap that once existed. It simply transforms into a kind of brain for connected devices. This is indeed amazing.
Figure 40: Artificial Internet of Things
Here is how the combination of the two powerful technologies works:
Data Collection: This is the first step in the AIoT workflow, and it involves data creation and collection by connected IoT devices. The data is obtained with the aid of sensors that are attached to different devices to collect multiple datasets.
Data Storage: As soon as these sensors collect sufficient data, it is stored in the cloud. Installing hardware for storing vast amounts of data is not as easy, cheap or efficient as storing the data in the cloud.
Data Processing: Once the cloud servers store the data, the next stage is the processing of the data. This process is divided into stages based on the extraction and cleaning of data.
Data Projection: The processed data is then communicated via various networks, aggregated and analyzed into actionable information.
Action Phase: This is the point where businesses and organizations put the actionable information into practical use.
Control: Based on the recommendations provided by big data systems, device operators as well as field engineers can now control IoT devices.
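As a rough illustration of these six stages, here is a minimal sketch in Python with each stage as a small function. The device name, the payload fields and the in-memory list standing in for cloud storage are simplifying assumptions; a real deployment would sit on an actual IoT platform:

from statistics import mean

cloud_storage = []  # stands in for a cloud data store

def collect(sensor_readings):
    # Data collection: sensors on a connected device produce raw records.
    return [{"device": "thermostat-01", "temp_c": r} for r in sensor_readings]

def store(records):
    # Data storage: push the raw records to the cloud.
    cloud_storage.extend(records)

def process():
    # Data processing: extract values and clean out bad readings.
    return [r["temp_c"] for r in cloud_storage if r["temp_c"] is not None]

def project(values):
    # Data projection: aggregate into actionable information.
    return {"avg_temp": round(mean(values), 1), "max_temp": max(values)}

def act(insight):
    # Action phase: the system acts without human intervention;
    # the same insight is surfaced to operators for the control stage.
    if insight["max_temp"] > 30.0:
        print("Action: switching cooling on.")
    print("Insight for operators:", insight)

store(collect([21.5, 22.0, 31.2, None, 24.8]))
act(project(process()))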
Segments Where AIoT is Making a Tremendous Impact
AIoT is actually making a tremendous impact in five major segments:
Smart Homes: Homes that respond to most of our requests are no longer restricted to science fiction. Presently, smart homes have the capability to take advantage of lighting, appliances, electronic devices and several others, and to learn the habits of the homeowner to help create automated “support.” Interestingly, such seamless access opens the door to the extra benefit of improved energy efficiency, especially as our world works toward dealing with climate change. Consequently, between 2020 and 2025, we are likely to witness a compound annual growth rate (CAGR) of 25 percent, with the market reaching $246 billion.
Wearables: Most available wearable devices like smartwatches now consistently monitor and keep track of user habits and preferences. This has resulted in outstanding applications in sectors such as health tech, sports and fitness. The top research firm Gartner estimates that the global wearable device market is likely to generate over $87 billion in revenue by 2023.
Smart Cities: With the influx of people from rural to urban areas, the need for safer and more convenient places to live will also increase. Smart cities are increasingly keeping up with this influx, with funds channeled toward improving energy efficiency, transport and public safety. Already, the real-life use of AI in things like traffic control is becoming increasingly popular. For instance, some of the world's most traffic-congested roads in places like New Delhi are leveraging AI to make real-time dynamic decisions regarding traffic flows with their Intelligent Transport Management System (ITMS).
Smart Industry: The fourth segment is the increasing number of industries that now depend on digital transformation to increase their level of efficiency and lower the rate of human error in manufacturing, mining and several other sectors. From supply chain sensors to real-time data, smart devices are now helping to prevent costly errors in industry. Estimates from Gartner indicate that more than 80 percent of enterprise IoT projects will integrate AI into their systems by 2022.
Retail Analytics: Another sector that has benefited immensely from AIoT is retail. Things such as the management of staff, inventory and the general operations of retail outlets have been made simpler courtesy of the combination of IoT sensors and AI. For instance, IoT devices such as cameras function as sensors that monitor the movements of staff and customers in a retail store. This is one way to acquire data regarding the peak hours of potential customers, enabling employees to prepare properly for them. The data collected helps not only to manage and create effective strategies for retail outlets but also to predict the movement of a customer until they make a purchase. Even when a customer fails to make a purchase at the end of the day, it provides insight into the specific point where the customer quit, and ways to control or improve on it.
In chapter five, we talked about AI and autonomous vehicles, but did you know that the entire system also involves the integration of AI and IoT devices? Of course, AIoT plays
a significant role in monitoring a fleet's vehicles, tracking vehicle maintenance, identifying unsafe driver behavior, reducing fuel costs, and several other tasks. Autonomous vehicles like Teslas, with their Autopilot systems, gather data regarding driving conditions from radars, GPS, sonar and cameras, and then make decisions based on the data obtained from these IoT devices to ensure smooth operation.
STRIKING BENEFITS OF IOT AND AI INTEGRATION The integration of AI and IoT will offer businesses, industries and organizations immense benefits and here are the major ones.
Figure 41: AI Benefits
1. Improved Customer Relationships
The advantages of AIoT are not just limited to employees; when properly implemented, it can enhance customers' experience. AIoT helps prevent businesses from playing a guessing game while attempting to understand precisely what customers want. An increasing number of enterprises now analyze the massive volumes of data from IoT devices with AI to help understand what customers truly need in real-time. With the availability of new innovative technologies and big data, businesses are well equipped to develop new products and services that will perfectly suit customer needs. Throughout this process, AI and IoT play a remarkable role, because many companies are now automating the whole process of organizing the available data to ensure that their response to customer needs is relevant and quick. With devices that understand user preferences and can make adjustments accordingly, customer experience will improve dramatically.
2. Enhanced Operational Efficiency
The incorporation of AI into IoT applications will undoubtedly lead to increased operational efficiency, because machine learning can easily process data obtained from IoT devices and make predictions in ways that humans are not capable of doing. AI can now process large sets of data within a short time and make recommendations in line with those calculations to ensure that the workplace is more efficient. This explains why more organizations are embracing these emerging technologies to boost their productivity. With AIoT, companies can now identify inefficiencies and know the areas where improvements are needed.
3. Cost-Effective
As the world continues to battle different waves of the COVID-19 pandemic, more organizations are under serious pressure to maintain their level of productivity without increasing their costs. One of the best solutions for organizations is AIoT, because it can help analyze data quickly and determine the areas of operation that are very expensive to maintain. Businesses can save costs when they have good access to quality data, and not necessarily at the expense of productivity. When they discover cost drivers, enterprises can easily implement changes aimed at cutting down their expenses. Leaders are now empowered to eliminate unnecessary spending and optimize business processes by leveraging emerging technologies.
4. Safe and Secure
AIoT also offers an additional layer of security, which helps to lower workplace accidents. By pairing machine learning with machine-to-machine interaction, businesses can now predict possible security risks and provide an automated response. Also, they can take advantage of connected sensors to ascertain potential environmental safety hazards that employees do not know of. Applications that pair AI and IoT can assist businesses to predict and properly deal with several risks and threats, such as cyber threats, worker safety issues and financial losses.
5. New Products and Services
AIoT also has the potential to pave the way for developing new and powerful products and services. Obtaining and analyzing vast amounts of data helps businesses make better decisions.
6. Prevents Unplanned Downtime
Equipment breakdown in certain sectors, such as industrial manufacturing or offshore oil and gas, often leads to costly unplanned downtime. Businesses can now predict equipment failure in advance by leveraging the predictive maintenance capabilities of AI-enabled IoT. This enables them to initiate orderly maintenance procedures and eliminate the costly implications of downtime. According to Deloitte, AI-enabled IoT leads to:
A 20-50 percent drop in the time spent on maintenance planning.
A 10-20 percent increase in equipment availability and uptime.
A 5-10 percent drop in the cost of maintenance.
THE FUTURE OF AIOT
Figure 42: The future of AI
AIoT will test how much data any given device can process, and it will shape future advancements. We are certainly going to witness a more connected future courtesy of AIoT innovation. The future of AIoT can be classified into three major areas:
Edge Computing: Courtesy of edge computing, data processing can be carried out on the device itself, so there is no need to send data to remote data centers. While the current technology is largely limited to smart thermostats and appliances, we are likely to see more advanced gadgets in the future, including fully autonomous vehicles (as discussed in chapter five) and home robots.
Voice AI: Part of the significant developments we are going to witness courtesy of AIoT is an improvement of voice AI in devices such as mobile phones, speakers, and several others. Currently, what we have are 1D smart speakers that can obey the voice command of a person speaking. But in the future, we are likely to see speakers with natural language processing that understand the user well. Already, we have 2D voice-activated LCDs that display information, but the successful implementation of AIoT may eventually usher us into an era where ePayment voice authentication is possible.
AIoT and Vision AI: Most people are now familiar with vision AI – AI devices that help detect objects in 4K resolution. The combination of IoT and vision AI will enable this technology to analyze video on the edge. Experts predict that the quality of the display may increase from 4K to 8K.
The convergence of AI and IoT will undoubtedly be the future of industrial automation, and every industry, including finance, manufacturing, aviation, automotive, supply chain and healthcare, will experience the impact of the Artificial Intelligence of Things.
Chapter 14: Artificial Intelligence & Robotics
Key Takeaway
In 2020, the value of the global robotics market was about $27.73 billion, and it is estimated that by 2026 the market will hit $74.1 billion, which translates to a CAGR of 17.45 percent – Mordor Intelligence.
An AI robot is like a bridge between AI and robotics – a robot controlled by AI programs.
Robots are programmable machines that are capable of executing several actions at the same time and semi-automatically.
The outbreak of COVID-19 has dramatically enhanced the market for professional service robots.
With the emergence of Industry 4.0, we will witness the linking of real-life factories with virtual reality, which experts believe will play a remarkable role in global manufacturing.
Did you know that by 2050, the global population of humans older than 65 years will have increased by 181 percent? This implies that by 2050, this group will comprise 16 percent of the global population. This aging population is one of the primary factors that has significantly driven the growth in deploying robots in assistance, healthcare and domestic applications. Asia is the region that has recorded the strongest growth in robotics and, as you would have already guessed, China is the top marketplace in the world, closely followed by South Korea.
ARE AI AND ROBOTICS THE SAME?
Is AI part of robotics, or is robotics part of AI? If the two are different, then what exactly is the difference between them? To the average person out there, artificial intelligence and robotics may appear to be the same thing, so this section will answer some of the questions most people have regarding AI and robotics. People usually get AI and robotics mixed up, especially with the increased use of “robot” to mean virtually all kinds of automation, but they serve two different purposes, as you will find out in this chapter. So, to answer the first question: AI and robotics are two different technologies – they are completely separate. The diagram below is a simple illustration of how the two are related.
The two fields overlap in one small area, which has now resulted in artificially intelligent robots. This area where the two overlap is also responsible for the confusion most people have regarding the two concepts. To get a clearer picture of what both concepts mean, we need to understand what robotics means. We have already defined
AI, but you can refer to chapter one to take a second look at its definition.
Robotics
It is actually a field of technology that is focused on the development of physical robots. This definition is straightforward, but what are robots? They are programmable machines that are capable of executing several actions at the same time and semi-automatically. The major attributes of a robot include:
Robots are programmable.
They communicate with the physical world through actuators and sensors.
Robots are often autonomous or semi-autonomous.
The reason I said robots are “often” autonomous is that some are not – an excellent example of robots that are controlled by humans are telerobots. A collaborative robot (cobot) is a type of non-intelligent robot. If you program it to simply pick up something from the floor and put it down elsewhere, it will keep repeating the same action until you turn it off. You can see this as an autonomous action, since the robot functions without human interference after its programming. This kind of task does not require any human intelligence, since the robot will only do exactly the same thing repeatedly, as the sketch below illustrates. You can see most industrial robots as non-intelligent, as they function in just the same way.
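Here is the sketch: a minimal, non-intelligent pick-and-place loop in Python. The Arm class and its motions are illustrative assumptions rather than a real robot API; the point is that the behavior is fixed by the program and repeats identically until the robot is switched off:

import time

class Arm:
    def move_to(self, x, y, z):
        print(f"moving to ({x}, {y}, {z})")
    def grip(self):
        print("gripper closed")
    def release(self):
        print("gripper open")

def pick_and_place(arm, pick_pos, place_pos):
    # One cycle of the programmed action; the robot never deviates from it.
    arm.move_to(*pick_pos)
    arm.grip()
    arm.move_to(*place_pos)
    arm.release()

arm = Arm()
for _ in range(3):  # in practice this loop runs until the robot is turned off
    pick_and_place(arm, pick_pos=(0.2, 0.0, 0.05), place_pos=(0.6, 0.3, 0.05))
    time.sleep(0.1)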
Now, artificial intelligence, as we have discussed before, enables us to develop computer programs that can complete tasks that would otherwise have needed human intelligence. Some AI algorithms deal with logical reasoning, language understanding, perception, learning and problem-solving. When AI is used to help control robots, the AI algorithms are simply one aspect of the entire robotic system, which can comprise non-AI programming, sensors, actuators and other hardware. In case you are wondering what differentiates AI programs from non-AI programs, the answer is straightforward: non-AI programs are designed to execute a defined sequence of instructions, while AI programs imitate a certain level of human intelligence.
Artificial Intelligence in Robotics (AI Robots)
You can see AI robots as a kind of bridge between AI and robotics – robots
controlled by AI programs. Remember, based on the previous description of robots, they are simply not intelligent. Before the recent advancements in AI, most industrial robots were just programmed to execute repetitive movements, which do not require artificial intelligence. However, you would also agree with me that the functionalities of non-intelligent robots are limited. The truth is that AI algorithms are essential if we must have robots that can handle more complex tasks. Apart from industrial robots, there is also a different type of robot that may confuse people, and that is software robots. This class of robot consists of computer programs that operate independently to execute a virtual task. A chatbot is an excellent example of a software robot. Also, search engine bots belong to this class.
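To show how simple a software robot can be, here is a minimal sketch of a rule-based chatbot in Python. The keyword table and replies are illustrative assumptions; production chatbots typically add natural language processing on top of this idea:

RULES = {
    "hours": "We are open 9:00-17:00, Monday to Friday.",
    "price": "Our basic plan starts at $10 per month.",
    "human": "Connecting you to a human agent...",
}

def reply(message: str) -> str:
    # Match keywords against the rule table; fall back to a default answer.
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I did not understand. Try asking about hours or price."

print(reply("What are your opening hours?"))  # matches the "hours" rule
print(reply("How much is the price?"))        # matches the "price" rule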
ROBOTICS MARKET OVERVIEW
Figure 43: Sales value of service robots for domestic use, worldwide, 2018-2020 (in USD billion)
This market overview of robotics will give you a better understanding of the level of growth the sector is witnessing and is expected to witness in the future. You will also come to understand some of the factors behind the increased interest of different industries in robotics. According to Mordor Intelligence, in 2020, the value of the global robotics market was about $27.73 billion, and it is estimated that by 2026 the market will hit $74.1 billion, which translates to a CAGR of 17.45 percent. Several factors have been identified as responsible for triggering the increasing demand for robots. One such factor is the workforce shortage caused by lockdowns during the global COVID-19 pandemic. Also, the more traditional industries are now upgrading their systems. Records from the
National Bureau of Statistics show that China's industrial robot production in June 2020 jumped by 29.2 percent year-on-year, and within the first half of 2020, the country had already produced 20,761 units. Increased demand has also been identified as one of the reasons why more investments have been recorded in the robotics space. The World Robotics report provided by the International Federation of Robotics (IFR) reveals that demand for robots has been propelled by investment in new car production capacities as well as the modernization of industrial spaces. Across the major economies, the need to develop energy-efficient drive systems, as well as an increased level of competition, especially in the car manufacturing sector, has played a significant role in the increased interest in robotics. Perhaps the industry that many believe to be among the most critical applications of industrial robots is the automotive industry. It has largely contributed to a rise in investments in industrial robots in different locations around the world. In fact, in 2020, BMW AG entered into an agreement with KUKA for the supply of about 5,000 robots. These robots were to be deployed in new factories and production lines in different locations around the world. KUKA also disclosed that it would be deploying the robots to various BMW Group production sites, where they will aid in the manufacturing of existing and future vehicle models. It is interesting to note that many companies are now embracing robotics automation in their warehouses in a bid to cut down on the money spent on labor. This is why the operational stock of industrial robots is projected to rise from 2,400 in 2018 to 3,800 units by 2021. One of Alibaba's warehouses has been upgraded to robotic labor, and you would be surprised to know that this helped to cut down the labor workforce by 70 percent while at the same time opening up new opportunities for highly-skilled employees. You will learn more about how AI will impact the labor force and our world in chapter 16. What about service robots? Well, they are not left out either, as the outbreak of COVID-19 has dramatically enhanced the market for professional service robots. There has been a steady increase in the demand for robotic logistics solutions in warehouses and factories, robotic disinfection solutions and robots designed for home delivery. Records from the IFR reveal that in 2020, an estimated 12,000 service robots were in use in the medical field globally. Available data from NIOSH shows that in the United States alone, health workers have the most hazardous industrial jobs, with the highest rate of nonfatal occupational illnesses and injuries. This is not very different from other locations around the globe, especially as we have witnessed the havoc caused by the pandemic. Current estimates show that 6,000 surgical robots have carried out about one million operations in different health institutions around the world. Top organizations such as the University of Michigan and MIT are now exploring ways to create small and compact robots that can be utilized in the medical sector.
Although medical robot system-assisted surgeries have been on the increase, the growing number of such robots has resulted in an increase in the rate of product innovation in the market. Companies like Robot Aps have created and launched what they call a rehabilitation robot that can assist health professionals in managing bedridden patients. This alone will help lower the dependence on nurses for several tasks like heavy lifting and at the same time reduce the rate of injuries associated with such tasks. Apart from robots for the manufacturing and healthcare industries, more robots are now being produced for personal and domestic purposes. They are actually being mass-produced and belong to the class of household robots. Examples of such robots include lawn-mowing robots, entertainment robots and floor-cleaning robots. In 2019, the total number of service robots designed for domestic and personal use increased to over 23.2 million. The top players in the robotics field include:
Kuka AG
ABB Ltd
Fanuc Corporation
Yaskawa Electric Corporation
Denso Corporation
It is expected that the integration of AI with industrial robots will grow at a fast pace between 2018 and 2026, and this growth will be largely driven by increasing demand from manufacturing companies. During TechCrunch's Robotics and AI Sessions event in April 2019, Fanuc, a top player in the industrial robotics industry, revealed a new AI-based tool. Their focus is on making it easier to train robots, which will further ensure that a wider range of industries can access automation. The function of Fanuc's new AI tool is to teach robots how to accurately pick the right objects from a bin using sensor technology and simple annotations. This new AI tool will cut hours off the training process. In the AI and robotics space, the top players include:
Veo Robotics, Inc.
Vicarious AI
NVIDIA Corporation
Neurala, Inc.
IBM Corporation
RECENT ADVANCEMENTS IN ROBOTICS
Robots have undoubtedly become an essential part of our daily lives – whether you are aware of it or not, they have at some point helped you out with certain tasks. Here are some of the things they can do now.
Customer Service
In Poland, ING's robotics team came out with a service known as SAIO. It is an AI-
enabled solution that allows small and medium-sized businesses to robotize their tasks. Businesses can use SAIO to mechanize financial processes and also use it in other aspects of their business where administration is required – HR, logistics, etc.
Reading Robots
You would agree with me that this is a bit different. We have long used software bots that can automate algorithm-based, repetitive computer tasks just by imitating the way we work with various applications. But scientists have taken things a step further by creating smarter robots that are capable of reading. One of the smart robots that ING has developed recently is an intelligent content service. Developers have trained this class of robot to recognize information in documents such as payslips, invoices, etc. through AI algorithms. This service makes things easy both for service providers and customers. In fact, the robot can not only read documents but can also process chats, photos and other content automatically.
Cleaning Robots
Most homes and businesses have become accustomed to frequently cleaning their space – one of the lessons that COVID-19 has taught us. For now, we do the cleaning ourselves and observe social distancing. But things are fast changing, as robots can now clean and even disinfect hospitals and homes. Companies like Xenex, Puro Lighting, Surfacide, UVD Robots and Tru-D are now making use of ultraviolet-C light to kill different types of bacteria and viruses. Subway cars and buses in New York are now being cleaned with Puro's UV lamps, while Milagrow in India has come out with three new robots that can clean homes at the press of a button: the Milagrow Seagull, Milagrow iMap 10.0 and Milagrow iMap Max. In fact, all three robots clean both the floor and themselves. We also have robots that can open drawers, open and close doors, move objects and pick up items without human interference or assistance. Others can sanitize a room with chemical hydrogen peroxide spray and UV light.
Robot Cooks
Among the most surprising new technologies are robots that can prepare a meal for you. This type of robot can make a cup of coffee for you and even flip a burger. For instance, in India, Rebel Foods prepares food by leveraging the fusion of automation and software robotics. It makes use of robotics-led smart fryers that are capable of identifying the precise shape of food and then automatically regulating the oil temperature based on that shape – all without assistance from humans. The firm also makes use of a SWAT machine, a visual AI QC machine whose name stands for size, weight, appearance and temperature. The machine scans every food item that is placed on it and either accepts or rejects it. Some robots can also prepare burgers, giving other employees more time to either clean their restaurants or take orders online. Presently, developers leverage machine learning to improve the functionalities of robots using huge volumes of data. The performance of robots has continued to
improve, especially with more precise machine learning processes. Among the functions now being integrated into robots are motion control, computer vision, the grasping of objects, the ability to recognize physical and logical data patterns and take action accordingly, and several others. Sensors attached to robots enable them to sense their environment, much like the major senses that humans possess. In fact, robots are empowered with a combination of different sensing technologies, which enables them to function in uncontrolled and ever-changing environments. Examples of some sensors used in robotics include:
Ultrasonic sensors
Millimeter-wave sensors
Time-of-flight (ToF) optical sensors
Vibration sensors
Temperature and humidity sensors
Self-Aware Robots
There are several reasons why robots have not been able to mimic humans, and one major barrier is that they lack “proprioception.” The term has to do with having a sense of awareness of one's muscles and body parts. You can see it as the “sixth sense” which enables humans to coordinate their movements. Over the years, robots have been empowered with most of the major human senses – a sense of sight via cameras, hearing via microphones, and a sense of smell and taste via chemical sensors. Unfortunately, they are yet to acquire the ability to perceive their own body. However, roboticists are working toward this goal by making use of machine learning algorithms and sensory materials.
THE FUTURE OF ROBOTICS AND AI
What should we expect in the future? Well, with the emergence of Industry 4.0, we will witness the linking of real-life factories with virtual reality, which experts believe will play a remarkable role in global manufacturing. As we overcome various challenges along the way, such as data incompatibility and system complexities, more manufacturers will increasingly make use of robots in their factory operations. In anticipation of this increased demand, robot manufacturers are also coming out with new commercial service models based on real-time data obtained from sensors embedded in robots. Predictions from analysts suggest that there will be rapid growth in the market for cloud robotics – data from one robot will be compared to data from other robots, whether in different or the same locations. These connected robots can execute the same tasks courtesy of the cloud network, which will also be useful for optimizing parameters of the robots' movements, such as force, angle and speed. Undoubtedly, the emergence of big data in manufacturing has the potential to redefine the industry
boundaries that currently exist between manufacturers and equipment makers. Another future trend in the robotics space has to do with the efforts made by robot manufacturers to introduce leasing models, especially to further increase adoption by small and medium-scale manufacturers. One other major theme for this market is “simplification.” Already, the development of smarter solutions has been driven by a dramatic increase in the demand for robots that organizations can easily program and use. Such robots are especially useful for industries that currently do not have specialized production engineers. This means that companies that create robots need to build easy-to-use machines that organizations can seamlessly integrate into their existing infrastructure and operate in standard production processes. The kind of robots that can most easily facilitate the deployment of industrial robots in different industries are the uncomplicated types. They will help sustain flexible manufacturing and efficiency in operations.
Chapter 15: The Disruptive Nature of AI & Blockchain Integration
KEY TAKEAWAY
The combination of AI and blockchain technology can help improve how we work with data. Machine learning can help enhance the deployment of blockchain apps and help predict possible future system breaches.
Blockchain technology is revolutionizing the financial industry by eliminating the need for intermediaries in cross-border transfers and providing fast and convenient fund transfer platforms.
Blockchain is proving to be useful in several industries, but the financial industry actually accounts for over 60 percent of blockchain's global market value.
Blockchain has the capability to store all the decisions that AI systems make, data point by data point, and can also ensure that they are readily available for analysis.
Among the main drivers of innovation in society today are artificial intelligence and blockchain. We are witnessing radical shifts in virtually all areas of our lives courtesy of the two technologies, and based on available predictions, both of them are expected to contribute trillions of dollars to the global economy. In line with what we have discussed so far – charming assistants that are capable of making appointments on behalf of people via natural conversations, self-driving vehicles, enhanced industrial robots, etc. – you would agree with me that this is indeed the future we talked about a few decades ago. The emergence of new content and economy-sharing platforms also implies that users would no longer need to depend on "unreliable intermediaries" like Equifax, Yahoo, and Facebook. But have you ever imagined how things will turn out when these two disruptive technologies are combined? Well, this is what we shall be examining in this section. As we have always done, we need to first understand the two technologies before looking at the various ways we can benefit from their integration. I believe you already understand what AI means, so I will go straight to blockchain technology.
UNDERSTANDING BLOCKCHAIN
Did you know that corporations will spend about $20 billion annually by the end of 2024 on blockchain technical services? In fact, it is estimated that as early as 2018, 90 percent of US and European banks were already exploring the potential of blockchain
technology. While blockchain is proving to be useful in several industries, the financial sector actually accounts for over 60 percent of blockchain's global market value. The largest use case of this new technology is in cross-border payments and settlements. A blockchain is a public ledger that is shared and agreed upon by every user on the distributed network. Just like the way we write down records of transactions on different pages of a book, records are stored in blocks along with hash values and timestamps. Each of these blocks is connected to the previous one, and this eventually creates a chain – a blockchain. Blockchain offers numerous benefits courtesy of its features, and one of its major features is immutability. This implies that without network consensus, it is nearly impossible to modify information stored in the blockchain. There are several consensus protocols, and the two most common ones are Proof-of-Work (PoW) and Proof-of-Stake (PoS). The Bitcoin and Ethereum blockchains still make use of Proof-of-Work consensus; however, Ethereum is working toward using Proof-of-Stake consensus to help deal with its scalability issues. As I mentioned before, a blockchain is simply a digital record of transactions that is not only duplicated but also distributed across hundreds or thousands of computer systems (also known as nodes) on the blockchain network. Blockchain technology is also a type of distributed ledger technology (DLT) that has several exciting features that make it extremely useful for different purposes. The illustration below shows the different properties of blockchain technology.
Figure 44: Blockchain properties (Euromoney Learning 2020)
The moment the details of a block are altered, it will be instantly obvious that someone or a group of persons has tampered with the records. For hackers to corrupt a blockchain network such as Bitcoin's, they must change all the records in all the blocks in the chain, copies of which are held by different nodes on the network. This means that the hackers must be prepared to change the records of thousands of users on the network before they can successfully corrupt it – this explains why it is nearly impossible to corrupt a blockchain network. Apart from the existing hype surrounding cryptocurrencies, which has been a source of distraction from the true potential of blockchain technology, it is already transforming different sectors. In fact, the technology has multiple applications in virtually all industries. Blockchain is now giving users control of their data with a new serverless internet and decentralized web. Blockchain technology is revolutionizing the financial industry by eliminating the need for intermediaries in cross-border transfers and providing faster and more convenient fund transfer platforms. With the technology, we can now easily track fraud in finance. It is also transforming the healthcare sector by providing solutions for data storage and how patients' data is used. In fact, blockchain technology is providing better alternatives to most traditional platforms that we were accustomed to.
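To make the hash-linking idea concrete, here is a minimal sketch in Python of a toy blockchain. It is illustrative only – real blockchains add networking, consensus and Merkle trees – but it shows why editing one record breaks every later link in the chain.

import hashlib
import json
import time

def block_hash(block):
    # Hash every field except the hash itself, in a stable order.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

# Build a tiny three-block chain.
chain = [make_block("genesis", "0" * 64)]
for record in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    chain.append(make_block(record, chain[-1]["hash"]))

def chain_is_valid(chain):
    # Any edit to a block changes its hash and breaks every later link.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != prev["hash"] or curr["hash"] != block_hash(curr):
            return False
    return True

print(chain_is_valid(chain))             # True
chain[1]["data"] = "Alice pays Bob 500"  # tamper with one record
print(chain_is_valid(chain))             # False – the tampering is instantly visible

On a real network, thousands of nodes each hold a copy of the chain, which is what makes quietly rewriting history practically impossible.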
Why the Blockchain Hype?
Well, previous efforts to create digital money did not work out because of one major issue – trust. How do we trust someone who created a new currency not to allocate millions of the currency to themselves or even steal some of it for themselves? There was also the issue of double-spending – a situation where someone spends the same money two or three times. When Satoshi Nakamoto came out with the Bitcoin blockchain, he designed it to solve not just the problem of double-spending but also that of trust. Someone is always in charge of the traditional databases most of us know, like SQL databases, and it is possible for that person to change the entries and steal funds. But blockchain's decentralized structure ensures that no single person is in charge; instead, all users on the network are in charge.
Blockchain Use Cases
Cryptocurrencies have taken the financial industry by surprise and have been blockchain's most popular use case (also the most controversial). They are digital forms of money that anyone can use just like fiat currency. So, you can buy things online and even offline with cryptocurrencies like Litecoin, Bitcoin, Ethereum and several others. You can even pay for lunch, buy things from grocery stores, purchase tickets and more. Unlike fiat currency, there are no physical cryptocurrencies; instead, they exist only in digital form. Presently, we have over 6,700 cryptocurrencies, while Bitcoin maintains the majority of the market value of all digital currencies.
Smart Contracts
The emergence of digital currencies has impacted our financial world significantly, but another aspect of blockchain that has proven to be extremely useful is smart contracts. A smart contract is just like the regular contracts we all know, except that the rules are not paper-based. Instead, they are enforced on a blockchain in real-time, which means we no longer require the services of middlemen, since smart contracts provide extra levels of accountability for all stakeholders in a manner that is not even possible for traditional agreements. Check out other striking blockchain use cases in the diagram below.
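Before moving on, here is a toy sketch in Python of the escrow-style logic a smart contract encodes. It is purely illustrative: real smart contracts are written in on-chain languages such as Solidity and enforced by network consensus, not by a single program.

# Toy, in-memory imitation of smart-contract escrow logic.
class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.delivered = False

    def deposit(self, who, amount):
        # Funds are locked by the contract's rules, not by a middleman.
        if who == self.buyer and amount == self.amount:
            self.funded = True

    def confirm_delivery(self, who):
        if who == self.buyer:
            self.delivered = True

    def release(self):
        # Payment moves automatically once the agreed conditions are met.
        if self.funded and self.delivered:
            return f"{self.amount} transferred to {self.seller}"
        return "conditions not met; funds stay locked"

deal = EscrowContract("alice", "bob", 100)
deal.deposit("alice", 100)
print(deal.release())           # conditions not met; funds stay locked
deal.confirm_delivery("alice")
print(deal.release())           # 100 transferred to bob

The point of the sketch is the last two lines: no intermediary decides when to pay out; the pre-agreed conditions do.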
HOW CAN THE INTEGRATION OF ARTIFICIAL INTELLIGENCE AND BLOCKCHAIN INFLUENCE BUSINESSES?
One thing that blockchain technology and artificial intelligence have in common is the level of hype surrounding them. Well, amid the hype lies the awesome potential to revolutionize the world. Let's go through some ways blockchain and AI integration can
be beneficial to businesses.
Improved Security
Of course, the encryption of the information stored in a blockchain ensures that it is safe. So, the ideal platform for storing extremely sensitive personal and company data is a blockchain. On the other hand, what AI needs to function optimally is vast amounts of data. This also explains why experts are trying to develop algorithms that can work with encrypted information without revealing its content. Several aspects of data processing often involve exposing unencrypted data, which is a security risk, but with this new solution, things will be much safer. At the base level, blockchain is undoubtedly secure, but its additional layers, as well as applications, are not as secure as the base level, and this explains why there are several cases of breaches. This is where machine learning comes in, as it can help enhance the deployment of blockchain apps and predict possible future system breaches.
Untangling AI's Decision Making
No matter how useful or helpful AI is, people will not fully embrace it if they do not trust it. In fact, the inability to fully explain decisions made by the computer is a major reason why the broader adoption of AI seems to be slow. The decisions that AIs make are often complex for humans to grasp, mainly because these AI systems can assess a large number of variables separately in order to learn which ones are crucial to the task at hand. More organizations are increasingly depending on AI algorithms to make decisions regarding the authenticity of financial transactions – which ones should be investigated or blocked. The truth is that such decisions made by AI need to be audited by humans to ensure their accuracy. Well, this task can be an extremely complex one considering the huge amount of data that needs to be examined. Keeping records of the decisions made by AI algorithms on a blockchain will make them more straightforward to audit, especially with the increased confidence in the quality of the record, since blockchain data cannot be easily tampered with. Regardless of the value AI is offering us in terms of its advantages, if the public fails to trust it, then there will be a limit to its usefulness. The use of blockchain with AI will ensure that decisions made by computers are more transparent. Blockchain has the capability to store all the decisions that AI systems make, data point by data point, and can also ensure that they are readily available for analysis.
The Data Market
This point shares much in common with enhanced security. Experts believe that new use cases will emerge considering the fact that distributed ledger technologies like blockchain can store large amounts of encrypted data which AI algorithms can effectively manage. It would be possible for you to store your personal data in the blockchain and later decide to monetize it. While big players in the data space such as
Amazon, Facebook, and Google have access to vast amounts of data which is extremely useful for AI processes, the majority of us, including businesses, do not have access to all that information. But blockchain can open up the door for smaller firms and startups to challenge these big corporations because they can now access the same information as well as the same AI algorithms. Another significant benefit of the combination of AI and blockchain technology is that it can help improve how we work with data. Since AI requires less time to process data, it can cut down on the time required to verify transactions.
Efficient Management of Blockchains
Over the years, the development of new computers has continued to improve as they are increasingly getting faster; however, computers are stupid. A computer cannot get a task completed without receiving explicit instructions regarding how to perform that task. What this means is that due to the encrypted nature of blockchain data, we require vast amounts of computer processing power to operate on it. In blockchain, mining is required to verify transactions and also create new coins. But the process makes use of a "brute force" strategy, which involves trying one candidate value after another until one that fits is found, before a transaction can be verified. The use of AI in such instances will be a drastic departure from the brute force approach, as AI can manage such tasks more thoughtfully and intelligently. Have you wondered why hackers continue to improve their skills despite the level of security available? Well, what happens is that experts who are good at cracking codes will continue to improve their hacking skills as they crack more codes in the course of their careers. This also applies to a machine learning-powered mining solution – it will approach the task by getting smarter with each successful code that is cracked, but it will not need several decades to become an expert. Instead, machine learning algorithms can sharpen their skills almost instantaneously as long as they work with the right training data. Google has confirmed that machine learning can effectively resolve the issue of high energy requirements for data-mining processes. It cut down the energy consumption needed for cooling its data centers by 40 percent. To achieve this goal, Google trained the DeepMind AI on vast amounts of historical data obtained from sensors located within a data center. This same approach will not only work for data centers but will also work for mining, and this will help cut down on the cost of mining hardware.
Enhancing Smart Contracts
Considering the number of hacks that we have witnessed in the blockchain and cryptocurrency world, it is obvious that smart contracts are not really smart enough. Hackers are increasingly exploiting several technical flaws in the blockchain. The programming of smart contracts enables them to automatically release and transfer funds the moment specific conditions are satisfied. This is only possible when network consensus is reached, but smart contract code is
public. So, anyone (including those with malicious intentions) can take all the time in the world to go through all the lines of code in a bid to identify possible flaws. But how can AI deal with this growing issue, especially as the value of top digital currencies increases? Well, AI can actually help to verify smart contracts and even take it a step further by predicting vulnerabilities that hackers could possibly exploit.
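As mentioned above, mining searches for a fitting value by brute force. Here is a minimal proof-of-work sketch in Python that shows what that guessing looks like; the difficulty of four leading zeros is an arbitrary toy setting, far below what real networks use.

import hashlib

def mine(block_data, difficulty=4):
    # Brute-force search: try nonce after nonce until the block's
    # hash starts with `difficulty` zeros. This guessing is what
    # consumes so much computing power in proof-of-work mining.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice pays Bob 5")
print(nonce, digest)  # typically tens of thousands of attempts for 4 zeros

Each extra required zero multiplies the expected number of attempts by 16, which is why energy use grows so quickly with network difficulty.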
AI and Digital Currencies
In traditional financial markets, investors already leverage machine learning and AI to analyze the markets – examples include Oplum, Aidyia, Estimize, DataTrading, and EmmaAI. Some of these firms were forced to close for various reasons. However, similar tools are now being applied in crypto trading. One of the challenges of the cryptocurrency market is its high volatility, but this issue also offers investors an opportunity to make more profits. Since prices are prone to fluctuate all through the day, traders can actually earn a stable income by making accurate calculations. However, it is essential to process large volumes of information for traders to calculate the patterns of the cryptocurrency market, and this is what AI can do. The key benefits of using AI in crypto trading include:
High work speed
Increased accuracy
Ability to analyze large volumes of data
Learning ability
Presently, AI is being used to manage billions of dollars worth of stocks, bonds and other assets. Using AI and mathematical expectations in the cryptosphere is not yet popular, but it is increasingly being embraced by investors for three major purposes. First, investors are now using AI for forecasting the crypto market, and a good example is the NeuroBot platform. It is a platform that makes predictions regarding the crypto market based on neural networks and not on user experience. It observes exchange fluctuations, makes comparisons and then predicts the changes that may occur the following day. The developers of the platform claim it has an accuracy of 90 percent, and they hope to incorporate technical and fundamental analysis. Also, investors can now perform market sentiment analysis with AI and machine learning, because this process requires vast amounts of different datasets – from forums, articles and blogs.
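For a sense of the kind of pattern calculation such tools automate, here is a deliberately simple illustration in Python: a moving-average crossover signal on made-up prices. Platforms like NeuroBot use far richer models (neural networks over live exchange data); this sketch only shows the basic idea of turning price history into a trading signal.

# Toy trading signal: compare a short and a long moving average.
# Prices are invented; real systems ingest live exchange feeds.
prices = [100, 102, 101, 105, 110, 108, 112, 115, 111, 118]

def moving_average(series, window):
    return sum(series[-window:]) / window

short = moving_average(prices, 3)  # reacts quickly to recent moves
long = moving_average(prices, 7)   # captures the broader trend

# A short-term average above the long-term one suggests upward momentum.
print("buy" if short > long else "sell")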
SECTION IV: THE FUTURE OF AI & HOW TO GET INVOLVED
Chapter 16: AI Challenges & Ethical Concerns
Key Takeaway
Some experts in the tech world are of the view that AI could eventually turn out to be the worst event that our civilization has ever witnessed.
We are still in the early stages of this awesome technology, but the truth is that unease abounds on several fronts.
The question is no longer whether AI will replace certain jobs presently being performed by humans; instead, the focus is now on "to what degree" AI will replace these jobs.
Criminals may train these machines to hack businesses and governments.
Once a major military power proceeds to develop AI weapons, then we will undoubtedly witness a global arms race.
The most commonly accepted strategy for mitigating the risks associated with AI is to have some kind of regulation in place.
Back in March 2018, Elon Musk, the famous founder of SpaceX and Tesla, gave a friendly warning: "Mark my words, AI is far more dangerous than nukes." Although he issued this warning during the South by Southwest (SXSW) conference in Austin, Texas, it is just one of many such warnings from Musk regarding AI. Consider what he told his SXSW audience: "I am really quite close... to the cutting edge in AI and it scares the hell out of me. It's capable of vastly more than almost anyone knows and the rate of improvement is exponential." Many experts in the field are skeptical of Musk's statements, but he is not really alone, as others share his opinion. The late Stephen Hawking informed his audience in Portugal that the impact of AI could actually be cataclysmic unless measures are taken to strictly and ethically control its rapid development. Unless we make efforts and learn how to prepare for and prevent the potential risks associated with AI, it could eventually turn out to be the worst event that our civilization has ever witnessed. One more individual in the tech space who shares a similar view with Musk is research fellow Stuart Armstrong. Armstrong referred to AI as an "extinction risk" if it goes rogue. He added that nuclear war is on an entirely different, lesser level of destruction, since it would only destroy a relatively small portion of our planet, and the same applies to pandemics even at their most virulent. So, how exactly would AI come to such a state? In a 2013 New Yorker essay, author and cognitive scientist Gary Marcus gave insights into what may happen. He observed that the moment computers are able to effectively program themselves and further improve themselves, this could result in the technological singularity (more on this later), or what is regarded as an intelligence explosion – the risk of machines outwitting humans in the fight for resources, and at that point, we cannot easily dismiss machine self-preservation. So, here is the big question, which I know you might have heard or asked before: is AI a threat?
IS AI A THREAT?
As AI continues to get more sophisticated and grow in popularity, so too do the voices warning against its current and future risks. Concerns range from the increasing automation of certain tasks, to racial and gender bias arising from the nature of our existing data (some of which is already outdated), to autonomous weapons that are capable of functioning without human interference. While we are still in the early stages of this awesome technology, the truth is that unease abounds on several fronts. For several years, the tech community has been engaged in a hot debate regarding the potential threats posed by AI. One such category of risk is destructive superintelligence, commonly regarded as artificial
general intelligence which we earlier discussed in chapter one. There are concerns that when humans create this class of machines, they will escape our control and cause disaster.
Job Automation
This is perhaps the most immediate concern, as many stakeholders agree. Presently, the question is no longer whether AI will replace certain jobs currently being performed by humans; instead, the focus is now on "to what degree" AI will replace these jobs. For instance, in industries where employees execute repetitive and predictable tasks, disruption is already taking place, especially with the impact of COVID-19. A Brookings Institution study revealed that 36 million individuals actually perform tasks that have "high exposure" to automation. What this implies is that within a short time, about 70 percent of these jobs – from market analysis to retail sales, warehouse labor and hospitality – will be executed using AI. In fact, a more recent report from Brookings concluded that even white-collar jobs are not entirely safe; they may even be more at risk. Based on a McKinsey & Company report, those who will be affected the most in the United States are African American workers. Experts agree that AI will create new jobs, but the number and nature of these jobs remain undefined. Many of the new jobs that will be created will not even be accessible to members of the displaced workforce who are not "educationally advanced." So, ask yourself this question right now: will the new jobs created by AI be a good match for you if all you can do now is flip burgers at McDonald's? I do not think so! The kinds of jobs that will be created are jobs that demand lots of training and education, and may even demand a high level of skill or creativity that most displaced employees may not have. These are some of the skills that computers are unable to replicate for now. I have devoted the whole of chapter 18 to sharing the different jobs that are at risk as well as the ones that AI systems cannot perform in the near future. Jobs that need graduate degrees, as well as extra post-college training, are not safe from AI displacement either. Some experts strongly believe that such professions will eventually be decimated. Already, AI is having a remarkable impact on medicine, and the healthcare industry is in for a massive shakeup. Other sectors like accounting and law will soon witness its impact. It is always a big challenge for humans to go through thousands of pages of information. It is easy to omit things, but this is not the case with AI. It can scan through thousands or millions of documents and comprehensively deliver the best contract for the outcome that someone is attempting to achieve – this will eventually replace many corporate attorneys. With the combination of AI and blockchain technology (check out chapter 15 for more on this), our data will be properly and safely stored digitally while AI goes through it within a short time to make automatic decisions based on its interpretations. This also means that accountants may no longer be
needed at some point.
Security, Privacy and Deepfakes
We just discussed the most pressing issue that may arise due to AI disruption – job loss. But job loss is just one of several potential risks. In 2018, a group of 26 researchers from 14 institutions (industry, civil society and academia) listed several dangers that could negatively affect humans or at least lead to chaos within the next five years. The paper, titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," talked about various ways AI could affect us. In the paper, they stated that AI has the potential to threaten digital security in different ways:
Criminals may train these machines to hack businesses and governments.
Malicious actors may socially engineer victims at both human and superhuman levels of performance.
Malicious use of AI could also threaten physical security in a number of ways.
It could be used for privacy-eliminating surveillance, profiling and repression.
It could also be used as an automated tool for targeted disinformation campaigns.
Have you heard of China's "Orwellian" use of facial recognition technology in schools, offices and other locations? Well, that is just one country, and there are hundreds of firms that specialize in this kind of technology and are already selling it in different cities around the globe. We can only guess whether this tech will eventually become normalized. Just like the internet (where most users surrender their digital data just to enjoy a little convenience), will the emergence of 24/7 AI-analyzed monitoring turn out to be a fair trade-off for increased and enhanced security and safety, regardless of the fact that bad actors will exploit it? Of course, authoritarian regimes will certainly make use of this tech. But the big question is: in what ways will it invade democracies, and what kind of constraints can we place on this technology? This is one question that policymakers around the world need to answer, and they need to do so as quickly as possible. It is possible for AI to also create hyper-real-looking social media "personalities" that we may find extremely hard to differentiate from real ones. Such personalities may eventually influence elections when deployed cheaply and at scale on various social media platforms such as Instagram, Facebook and Twitter. What about audio and video "deepfakes?" By manipulating likenesses and voices, people can now create deepfakes. While deepfakes of people's videos are already getting common, voice deepfakes are not yet widespread, and when they eventually become common, they will be immensely troublesome. What this means is that it would be possible to manipulate an audio clip of any politician to make it appear as if the individual made sexist or racist remarks even when they
never uttered anything of that nature. Malicious actors can achieve this by using machine learning. Such clips can end up fooling people and may not be easily detected – totally derailing a political campaign. All that is required for this to happen is just one successful attempt. Once this is achieved, we may never again be able to ascertain what is real and what is fake. We may get to the point where we can never believe what we see or hear – historically, the two sources of evidence that we have always depended on. This is indeed going to be a major challenge when it eventually happens. Lawmakers and policymakers around the world are becoming aware of such trends and are making efforts to proffer solutions, but they need to act fast, before such trends become a reality.
AI Bias and Increased Socioeconomic Inequality
Another major concern that people have about AI has to do with the increasing socioeconomic inequality caused by AI-driven job loss. Like education, work has been one of the drivers of social mobility, which implies that as people lose one job, they usually end up getting another one. But this may not be the case for all kinds of work. For instance, research results suggest that when individuals who are engaged in repetitive and predictable kinds of jobs that are susceptible to AI takeover are left without a job, they are not always eager to get retrained. Interestingly, this is not the case for individuals in higher positions, since they have the financial resources needed to get retrained. Also, other kinds of AI bias have detrimental effects that go beyond race and gender. Apart from data and algorithmic bias, we all know that humans are responsible for developing AI, and as you know, humans are inherently biased. The majority of AI researchers are male and come from specific racial demographics. These individuals were mostly raised in high socioeconomic locations and do not have any disabilities. What this implies is that it is quite challenging for them to have a broader view of world issues. According to Timnit Gebru, a Google researcher, the root of bias is not technological but social, and scientists like herself are among the most dangerous individuals in existence, since they generally have an illusion of objectivity. She added that since the majority of the radical changes we have witnessed before happen at the social level, the scientific field must endeavor to understand the world's social dynamics. Apart from journalists, political figures and technologists, another individual who has voiced concern regarding AI's potential socioeconomic challenges is Pope Francis. He warned that AI is capable of circulating tendentious opinions as well as false data that may end up poisoning public debates and possibly manipulate people's opinions to the extent of harming the very institutions that ensure peaceful
civil coexistence. Perhaps the greatest challenge stakeholders agree on is the nature of the private sector's pursuit, which is focused on profit above all else. Of course, that is precisely what companies are supposed to do.
Autonomous Weapons and a Possible AI Arms Race
Despite the fact that most people do not share Musk's view about how dangerous AI is, one question that comes to mind is: what would happen if AI decided to launch biological weapons or nukes without human control? What happens when an enemy manipulates data and successfully returns AI-guided missiles to where they were originally launched from? Both scenarios are entirely possible and, of course, would result in disastrous outcomes. Those who think so include over 30,000 AI/robotics researchers and other individuals who signed an open letter on the subject back in 2015. In the view of these individuals, the main question facing us presently is whether we should start a global AI arms race or prevent it from starting in the first place. The truth is that once a major military power proceeds to develop AI weapons, we will undoubtedly witness a global arms race. I believe you already know what the outcome would look like – autonomous weapons that become the Kalashnikovs of the future. One of the striking things about autonomous weapons is that they do not need raw materials that are costly and difficult to acquire, as nuclear weapons do. This means that they will become very cheap and military powers can mass-produce them. Within a short time, they will also find their way onto the black market and, finally, terrorists as well as dictators will lay their hands on such weapons. Autonomous weapons are ideal for destabilizing nations, assassinations, ethnic cleansing, and subduing populations. One thing experts generally agree on is that an AI arms race will not in any way benefit humanity. But I must also point out that AI can make battlefields much safer for civilians without necessarily creating dangerous tools for killing human beings.
Algorithmic High-Frequency Trading and Stock Market Instability
Can algorithms bring down an entire financial system? It is indeed possible for algorithms to bring down Wall Street, and many believe that the next major financial crisis in the markets may be caused by algorithmic trading. If you have not heard of "algorithmic trading" before, I will explain it now. It is a form of trading in which a computer that is not in any way affected by emotions or instincts (which are likely to obscure a person's judgment) trades strictly based on pre-programmed instructions. Computers with this capability can execute extremely high-frequency,
high-volume and high-value trades, which may result in extreme volatility in the markets and big losses. This type of trading is increasingly proving to be a significant risk factor in global markets. When a computer executes thousands of trades at an extremely fast pace, with the intention of selling within a few seconds just to make small profits, that is high-frequency trading. One major challenge with this type of trading is that it fails to consider the fact that markets are all connected and that human logic and emotions will always play a remarkable role in them. Have you ever thought about how the sell-off of millions of shares in a particular industry, like the airline industry, would affect the entire market? People will panic and begin to dispose of their shares in related industries, such as the hotel industry. This will in turn affect other related sectors like travel companies, then logistics, then food supply chain companies, and the panic selling will continue until we experience a crash in the stock market.
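To show what "pre-programmed instructions" means in practice, here is a bare-bones sketch in Python of a toy stop-loss rule. All numbers are hypothetical, and real high-frequency systems operate in microseconds across thousands of instruments, but the key property is visible: the sale fires automatically, with no human judgment in the loop.

# Toy pre-programmed trading rule on an invented price feed.
position = 1000     # shares held
stop_loss = 95.0    # sell everything below this price

ticks = [101.2, 100.8, 99.5, 97.3, 94.9, 96.1]  # made-up prices

for price in ticks:
    if position > 0 and price < stop_loss:
        # Fires instantly and unconditionally – no human judgment.
        print(f"SELL {position} shares at {price}")
        position = 0

Now imagine thousands of such rules across many firms triggering each other's thresholds: that cascade is exactly the instability described above.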
HOW TO MITIGATE THE RISKS OF ARTIFICIAL INTELLIGENCE
Generally, the most commonly accepted strategy for mitigating the risks associated with AI is to have some kind of regulation in place. This is also in line with the views of major players in the field like Elon Musk. He believes that we need not just regulation, but regulation that clearly understands the dangers posed by the wrong use of AI. Musk believes that the regulation should include a public body that has the insight and oversight to confirm that all stakeholders are developing AI safely. When we consider the numerous benefits of AI, you would agree with me that what we need is regulation of AI implementation, but this should not in any way hold back research in the field. According to the popular futurist Martin Ford: "You regulate the way AI is used… but you don't hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous." The reason is obvious: any country that fails to continue its AI research and development will lag behind others economically, militarily and socially. Therefore, one sensible solution proffered by Ford is to engage in the selective application of AI. We need to decide the areas where we want AI research and development as well as the areas where we do not want it. We should be clear on where it is acceptable and where it is not. An international conversation is extremely crucial when it comes to AI regulation. It will create the right forum where all stakeholders can discuss ways to best control AI, and this could take the form of a treaty that completely outlaws AI weapons or allows them only for specific applications. Historically, technology has helped us with complex tasks, but as humans, we have never had to consider the possibility that someday machines might turn out to be smarter than us or that they might become conscious. At this point, we need to honor humanity and define how AI can enhance our existence rather than threaten it.
Chapter 17: AI & The Future: The Impact of AI on our World “[AI] is going to change the world more than anything in the history of mankind. More than electricity.” — AI oracle and venture capitalist Dr. Kai-Fu Lee, 2018
Key Takeaway
Learning new skills will ensure that you do not get dropped by the wayside as things increasingly change.
Those who will be most seriously affected by the impact of AI on jobs are the bottom 50 percent of the world in terms of education or income.
One of the likely breakthroughs we are going to see is the ability to clearly grasp the content of language, enabling machines to translate between languages.
Drug discovery is now streamlined and faster; virtual nursing assistants can now monitor patients; and a more personalized patient experience is possible courtesy of big data analysis.
AI, and especially "Narrow AI," has affected virtually all the major industries that carry out objective functions with trained models.
One of the questions that most people have regarding AI is how it will affect our world in the future – for better or worse. This chapter will examine several ways our future will be shaped by AI systems. Presently, there are many innovators in the artificial intelligence space, which is obviously getting hotter with each passing day. Did you know that out of the 9,100 patents that IBM inventors received in 2018, 18 percent of them (1,600) were AI-related? The AI field is indeed getting more attention, even as Elon Musk donated $10 million to finance current research taking place at OpenAI, a nonprofit research firm. Interestingly, while addressing school children, Russian president Vladimir Putin told them that "Whoever becomes the leader in this sphere (AI) will become the ruler of the world." It is clear that AI has eventually taken center stage after a multi-wave evolution spanning over seven decades that started with "knowledge engineering" and moved on to model- and algorithm-based machine learning. The truth is that AI will not be leaving center stage soon, as we are about to enter an era where AI will transform our lives tremendously. Did you know that modern AI, and especially "Narrow AI," has affected virtually all the major industries that carry out objective functions with trained models? This is very correct, especially as data collection and analysis witnessed a boost in the past few years due to factors such as the proliferation of connected devices, robust IoT connectivity and ever-increasing computing speed. Of course, the adoption of AI by different industries is not uniform, as some are ahead while others are just beginning their AI journey. Regardless of which side you fall on, one thing is sure: AI is having a remarkable impact on our lives and we can no longer ignore it. Someday, autonomous cars will take us from one location to another, even though this will happen in about a decade or more. In manufacturing, AI-powered robots now work with humans and execute a limited range of jobs such as stacking and assembly. The use of predictive-analysis sensors also ensures that equipment functions hitch-free. Healthcare is also experiencing the impact of AI, because diseases are now more accurately and quickly diagnosed courtesy of AI. Drug discovery is now streamlined and faster; virtual nursing assistants can now monitor patients; and a more personalized patient experience is possible courtesy of big data analysis. Education is feeling the impact of AI as well, as textbooks are getting digitized with the help of AI. Also, human instructors are now assisted by early-stage virtual tutors. Students' emotions are now being assessed by facial analysis to help determine who is bored or struggling and to better design the experience to suit their specific needs. What about media? Of course, journalism is taking advantage of the benefits AI has to offer and will continue to benefit from it. For instance, Bloomberg now makes quick sense of complex financial reports by using Cyborg technology. The Associated Press is also harnessing AI, as it uses
Automated Insights' natural language abilities to produce 3,700 earnings report stories annually – about four times more than it achieved previously. In customer service, Google is working toward creating an AI assistant that is capable of making human-like calls. The AI assistant can make appointments, and the system can grasp context and nuance. Interestingly, all these advances, several others we have already talked about, and many more that we are going to see in the future are just the beginning – we have more coming our way. We are eventually going to witness innovations that most people cannot even fathom. Those who feel that the capabilities of AI software will soon hit their peak are making a big mistake, because greater things are going to happen considering the level of investment being made in AI research. Consider the following:
Companies are collectively investing about $20 billion in various AI products and services each year.
Universities are ensuring that AI becomes a more prominent aspect of their curricula.
Microsoft, Amazon, Google and Apple are all pumping billions into creating AI-powered products and services.
MIT is releasing $1 billion to set up a new college devoted primarily to computing, with a strong focus on AI.
The US Department of Defense is not left out, as it is already upping its AI game too.
The truth is that big things are undoubtedly bound to happen in AI. A good number of these developments are in the course of materializing fully while others are still theoretical and may remain so, but they are all disruptive. The AI field witnessed several winter seasons before, and this is similar to what other industries go through. According to Baidu's chief scientist and former Google Brain leader, Andrew Ng: "Lots of industries go through this pattern of winter, winter, and then an eternal spring... We may be in the eternal spring of AI." So, what are the possible ways AI will impact our world in the future? Well, before I get into the real answer, I must add here that the purpose of this section is to guide you as you strategically position yourself and benefit from the impact of AI on various aspects of our lives – business, office, manufacturing, healthcare, etc.
THE FUTURE IMPACT OF "NARROW AI" ON THE WORKFORCE
The first question you should ask yourself right now is: "how routine is your task in your workplace?" This was the question that Kai-Fu Lee, an AI guru, asked in the course of a lecture at Northwestern University, where he focused on AI technology and its future
impact as well as its limitations and side effects. When it comes to the side effects of AI, Lee warned that those who will be most seriously affected by the impact of AI on jobs are the bottom 50 percent of the world in terms of education or income. This leads to the simple question: "how routine is your job?" Well, the truth is that the more routine a job is, the more likely it is to be replaced by AI, because within a routine task, AI can actually learn to optimize itself. The more quantitative and objective a job is, the more scripted it tends to be – jobs like picking fruits, separating things into bins, answering customer service calls and washing dishes are routine and repetitive. Within the next five, 10, or even 15 years, AI will most likely take over such jobs and displace employees. Presently, humans still perform picking and packing functions in the warehouses of AI powerhouse Amazon despite its hundreds of thousands of robots, but that will certainly change. Other experts in the field have supported Lee's opinion, and one of them is Infosys president Mohit Joshi. During the Davos gathering in 2019, he told the New York Times: "People are looking to achieve very big numbers. Earlier they had incremental, 5 to 10 percent goals in reducing their workforce. Now they're saying, 'Why can't we do it with 1 percent of the people we have?'" I hope you understand what it means for companies to reduce their workforce and handle their work with just 1 percent of their employees. If you happen to fall within this category, then there is an excellent solution for you – RETRAIN.
Why Retrain?
Well, experts in the field have pointed out that there are two things AI cannot do: it lacks the capacity for compassion or love, and it lacks creativity. AI is just a tool that can significantly amplify human creativity. So, if your job falls within the repetitive or routine jobs I mentioned earlier, then you must learn NEW SKILLS. By learning new skills, you will not be dropped by the wayside as things increasingly change. In fact, Amazon is already offering its employees funds to get trained for tasks they can do in other organizations. Retraining people for new jobs is believed to be one of the major prerequisites for AI to be successful in several ways, and this requires that we invest significantly in education. Unfortunately, this is not happening often or widely enough. Some experts have added that we need to start learning programming as if we were learning a new language, and we have to do so as soon as possible because it is indeed the future. Things will only get more difficult for those who do not know coding and programming. Many employees who will be displaced in the future by technology will eventually find new jobs, but it will take some time. Something similar happened during the industrial revolution, when America moved from an agricultural to an industrial economy, a shift that eventually played a major role in the Great Depression. Those who were affected later got back on their feet, but the whole event had a massive short-term impact on families.
Punishment and Reward
Some stakeholders believe that there will be two main strands of AI research and experiments in the future. The first is known as reinforcement learning, which focuses on rewards and punishments instead of labeled data (a minimal sketch of this idea appears at the end of this section). The other is generative adversarial networks (GANs), which allow computer algorithms to create rather than merely assess. AI is actually in a position to significantly help in dealing with issues related to sustainability, the environment and even climate change, partly through the use of sophisticated sensors which will end up making cities less polluted, less congested and more livable. Progress in this direction is already being made, because the moment a prediction is made, rules and policies can be set accordingly. We may well see a future where cars have sensors that provide information on traffic conditions, forecast possible problems and enhance the flow of vehicles on the roads. This is not yet a reality – it is still in its infancy – but such developments will play a crucial role in our lives in the future.
AI and the Future of Human Rights and Privacy
We have already discovered that AI's dependence on big data is having a major impact on privacy. From Amazon's Alexa eavesdropping to Facebook's privacy controversy, there are many cases of tech gone wild. This trend may well get worse if the appropriate regulations and limitations are not established. Back in 2015, Apple CEO Tim Cook scoffed at competitors Facebook and Google for what he called "greed-driven" data mining. In his words: "They're gobbling up everything they can learn about you and trying to monetize it. We think that's wrong." In another forum, Cook described the act of collecting huge personal profiles as simply laziness and not efficiency. He believes that AI must respect human values (which include privacy) before it can truly be smart. Failure to get this aspect right will result in terrible outcomes. If AI is implemented responsibly, it has the potential to make society better. But just like other emerging technologies, there is also the risk of AI having a detrimental impact on human rights as a result of its commercial and state use. The application of these technologies often depends on the generation, collection, processing and sharing of vast amounts of data regarding collective and individual behavior. Individuals can actually be profiled with this data to predict their future behavior. Of course, some of the use cases are extremely important; however, others may pose unprecedented threats to people's privacy as well as to the right to freedom of information and freedom of expression. Other rights that may be affected by the use of AI include the right to freedom from discrimination, the right to a fair trial, and the right to an effective remedy.
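As promised above, here is a minimal sketch of reinforcement learning in Python: a two-armed bandit in which an agent learns which of two actions pays off purely from reward signals, with no labeled examples. All the probabilities are made up for illustration.

import random

# Two actions with unknown payout probabilities (the "environment").
true_payout = {"A": 0.3, "B": 0.7}
value = {"A": 0.0, "B": 0.0}   # the agent's running estimate per action
counts = {"A": 0, "B": 0}

for step in range(1000):
    # Mostly exploit the best-looking action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(value, key=value.get)
    # Reward (1) or "punishment" (0) comes from the environment.
    reward = 1 if random.random() < true_payout[action] else 0
    counts[action] += 1
    # Update the estimate from the reward alone – no labeled data.
    value[action] += (reward - value[action]) / counts[action]

print(value)  # the estimate for "B" should settle near 0.7

Reward-driven updates like this one are the core of how systems such as game-playing agents improve without ever being shown the "right answer" in advance.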
AI: Helpful or Homicidal?
Hollywood's representation of possible future AI innovations has undoubtedly drawn the attention of many, especially as it often appears apocalyptic and sometimes scary. One aspect of AI that has long been fodder for fantasy is artificial general intelligence, or what Stuart Russell called "human-level AI." However, I must also point out that the possibility of it materializing in our lifetime is quite slim. In Russell's opinion (which many AI experts agree with), the machines will most certainly not rise in the lifetime of anyone reading this book. But this does not change the fact that we will begin to witness landmark breakthroughs even before attaining anything close to human-level AI. One of the likely breakthroughs is the ability to clearly grasp the content of language, enabling machines to translate between languages. When humans translate, they grasp the content and then express it in the other language; presently, machines lack what it takes to truly understand the content of language. Once we get to the point where systems can read and understand language clearly, we will have systems that are capable of reading and understanding everything the entire human race has ever written. As you know, no individual, living or dead, has ever achieved this, and it is humanly impossible. But with that capability, it would be possible to query all human knowledge in existence, and the AI system would be able to synthesize, integrate and provide answers to questions that no individual has ever answered before. No human has been able to read all that has ever been written and therefore cannot answer all questions. But AI systems that reach this point could connect the dots between things that have been separate throughout history. Another reason why AGI remains hypothetical is that it is extremely hard to emulate the human brain. Researchers in this field have carried out studies in a bid to build what is known as a cognitive architecture, which they believe is innate to an intelligent system. They have discovered that our brain is more than a homogeneous set of neurons. It comprises different components, and some of these components are linked with knowledge regarding how humans carry out tasks – procedural memory. We also have knowledge based on personal facts or previous experiences, known as episodic memory. Humans also have semantic memory, or knowledge based on general facts. As researchers carry out experiments based on what is already known about our brain, they come to the conclusion that although they have made progress so far, it is still a slow and difficult process. So, this leads to a question which many have asked before.
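Before turning to that question, here is a toy illustration of the memory distinction just described, sketched as a plain data structure. It is a drastic simplification of what cognitive-architecture research actually builds, and every name in it is invented for the example.

from dataclasses import dataclass, field

# A toy agent memory mirroring the three memory types described above.
@dataclass
class AgentMemory:
    procedural: dict = field(default_factory=dict)  # how to do things
    episodic: list = field(default_factory=list)    # personal experiences
    semantic: dict = field(default_factory=dict)    # general facts

m = AgentMemory()
m.procedural["make_tea"] = ["boil water", "add tea bag", "steep", "pour"]
m.episodic.append("2021-05-03: burned my hand on the kettle")
m.semantic["boiling_point_c"] = 100

# The skill, the remembered episode and the general fact are stored –
# and retrieved – in different ways, just as they appear to be in humans.
print(m.procedural["make_tea"][0], "|", m.semantic["boiling_point_c"])

The hard part, of course, is not storing these kinds of knowledge but getting them to interact the way they do in a human mind.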
Is Humanity Threatened by AGI?
Although we have already discussed the challenges and concerns of AI, this is one question that has been asked repeatedly by many stakeholders in this space. So, I will attempt to provide an answer in a balanced way. The truth is that some leading experts in the AI space really do subscribe to the view that AGI may pose an
existential threat to humanity. They subscribe to a possible nightmare scenario that has to do with what is called the "singularity." It is a situation where super-intelligent machines take charge of affairs here on earth and permanently modify our existence via eradication or enslavement. As postulated by the late physicist Stephen Hawking, if AI itself starts to create or design AI that is much better than what human programmers can ever achieve, this could lead to the creation of machines whose intelligence exceeds ours by more than ours exceeds that of snails. Another popular individual in the tech space who shares a similar view is Elon Musk. He has for several years warned that the greatest existential threat to humanity is AGI. In his opinion, attempting to make AGI a reality is similar to "summoning the demon." Musk has further revealed his concern that Larry Page, Google co-founder and Alphabet CEO, could mistakenly shepherd something "evil" into existence even though he has the best intentions. It could be a fleet of AI-enhanced robots with what it takes to wipe mankind off this planet. Well, if you know Musk, then you would agree with me that he is known for his dramatic statements. Others agree that at some point, AI systems will no longer need humans to train them; instead, they will gradually learn and evolve by themselves. But they also believe that, based on the methods being used, this would not lead to the creation of machines that decide to kill humans. However, many still agree that they will have to reevaluate their opinion of the AI space in the next five to ten years, since new and improved methods will be available by then. Oxford University's Future of Humanity Institute has published the outcome of an AI survey. The survey, which contains estimates from 352 machine learning researchers regarding the evolution of AI in the years ahead, is titled "When Will AI Exceed Human Performance?" It is obvious that many optimists were among the respondents. Here is a breakdown of the results:
So, at this point, what are humans expected to do? Well, humans will undoubtedly be resting under umbrellas, sipping drinks served by droids. The skeptics, for their part, point out that computers can currently handle slightly over 10,000 words, which involves just a few million neurons. The human brain, on the other hand, has billions of neurons, all connected in an extremely complex and intriguing way, whereas existing technology involves straightforward connections that adhere to simple patterns. It is most unlikely that humans will develop AI systems that scale from a few million neurons to billions using current software and hardware technologies.
What About Killer Cyborgs?
Some people also fear AI systems turning into killer cyborgs, just as we have seen in sci-fi movies. Well, the issue is not really machines transforming into war robots; instead, the real threat has to do with competence – an AI capably achieving objectives that are not in perfect alignment with ours. It is most unlikely that AI systems will suddenly wake up one day and embark on a mission to rid this planet of humans. That is merely science fiction and not how things are likely to play out. It is not evil AI that most of us should be worried about, but the use of AI by malicious actors for criminal activities like credit card fraud, bank robbery, assassination, etc. Indeed, the measured pace at which AI development is going may turn out to be a blessing, as it will give us sufficient time to understand how to incorporate it into society. One
more point to add here is that technological breakthroughs are often difficult to predict. Whatever the time and age in which these conceptual breakthroughs take place, one thing is sure – we will get there someday. Our focus should not just be on whether we will ever get there or when we are going to experience such a breakthrough; instead, it should be on our level of preparation. This is the perfect time to start discussing issues related to the ethical use of AGI and whether it should be regulated. This is the time to put in place measures that will help eliminate the data bias that may corrupt algorithms and lead to the disastrous outcomes we discussed earlier. It also implies that this is the time for all stakeholders to come up with security measures that can control the activities of AI systems. We must also be humble enough to admit that just because we can do something does not necessarily mean we should. Although the situation with technology right now seems complicated, the big picture is straightforward. Most researchers believe that AGI will emerge within several decades, but if we somehow bump into it sooner, unprepared, then it may turn out to be the greatest mistake humans have ever made. Such an unprepared breakthrough in AGI development could result in a brutal global dictatorship, unprecedented suffering, inequality and surveillance, and might even lead to the extinction of human beings. On the other hand, if we carefully steer the development of AI systems, it will certainly result in a fantastic future where the rich get richer, the poor get richer, and we become free to live out the lives we have always desired.
Chapter 18: Jobs that AI can Perform better than Humans & Vice Versa
Key Takeaway
Humans are better when it comes to dealing with ambiguity and gray areas, because AI systems lack knowledge of context or nuance, and this makes them poor at making judgment calls.
With advanced data analysis, AI systems can highlight other products that customers may show interest in later.
There is a 98 percent chance that the transportation industry will become fully automated, because self-driving cars are already being manufactured.
Currently, robots are increasingly being used in military operations for different tasks like intelligence and surveillance.
Our creativity as humans is limitless, and this is one reason why AI cannot take over some jobs.
AI systems may not be able to understand how the human mind functions; they are incapable of expressing feelings or being compassionate.
When it comes to solving problems that involve vast amounts of organized data and repetitive tasks, algorithms will undoubtedly do better than humans. We are easily bored and often distracted when engaging in a single task repeatedly, but AI systems do not care how many times they have to handle a task. When handling tasks that involve processing and evaluating patterns in big data, we are slow and prone to error, and AI systems have long done better than humans in this respect. This also explains why Deep Blue was able to defeat Garry Kasparov in chess in 1997. On the other hand, humans are better at dealing with ambiguity and gray areas. This is because AI systems lack knowledge of context and nuance, which makes them poor at making judgment calls. Interestingly, most tech giants working on AI solutions recruit humans to help them with tasks such as sorting, organizing, cleansing and preparing the data that algorithms will analyze – simply because we are better at these tasks. We are eventually going to see more opportunities for individuals with critical thinking skills as the global economy continues to get digitized and automated. Of course, AI and robots may not be able to build skyscrapers or fix plumbing issues for now, but they will assist the workers in these fields with the right data required to execute their jobs more safely, efficiently and quickly.
JOBS THAT WILL BE TAKEN OVER BY AI AND ROBOTS IN THE FUTURE
This section provides helpful information for employees who want to navigate future labor markets. We have already examined how businesses can take advantage of AI and machine learning. Considering all that we have discussed so far, these are some of the tasks that AI machines are likely to take over from human beings in the future. As an employee, if your job is among the ones listed below, then the smartest thing to do is to go through the next section on jobs humans will do better than AI. Find out which of those skills you can learn and upgrade to ensure that you will not be affected in the future.
1. Customer Service Executives
Executing this kind of job does not require a high level of emotional and social intelligence. Already, more organizations are depending on AI to provide answers to customer support questions as well as FAQs. Also, chatbots are increasingly playing a remarkable role in customer interaction, in addition to providing support for internal queries.
2. Data Entry/Bookkeeping
You may not even recall the last time you heard of bookkeeping as a profession. AI and machine learning are great at performing bookkeeping tasks since they are repetitive and involve lots of data.
3. Receptionists
Most hotels – both large and small – now have auto check-ins, and this reduces the need for receptionists. Presently, you can already place orders via tablets or communication screens. With the advancement of AI and machine learning, we may eventually see algorithms taking orders and performing related duties.
4. Proofreading
In terms of comprehension, tonality and several other factors, editing is usually more complex, but proofreading is simpler. We already have applications that can detect sentence construction problems, grammatical mistakes and other errors – an excellent example is Grammarly. So, proofreaders may also lose their jobs in the future as these applications continue to improve the quality of their service.
5. Manufacturing and Pharmaceuticals
Perhaps the two areas where people are most afraid that AI will take over tasks are the manufacturing and pharmaceutical sectors. Already, we are witnessing the mechanization of the production process for a good number of products, and AI is already handling the operational aspects of manufacturing. Also, robots can work alongside scientists in pharmaceutical labs to provide an environment that is much safer. This also implies that the era of scientists putting their lives at risk will soon be over.
6. Retail Services
In retail services, automated systems are taking over the sales function as merchants leverage AI to help with payment options and self-ordering. In many shopping conglomerates, robots are increasingly replacing retail jobs as they try to understand customers' patterns. With advanced data analysis, AI systems can highlight other products that customers may show interest in later.
7. Doctors
As I mentioned before, robots now assist doctors in performing critical operations. We will soon get to the point where robot surgeons completely replace humans. It is only a matter of time before robotic doctors begin to render more effective and accurate treatments than their human counterparts. In fact, the chances of infection will be lower for two reasons: there will be no room for human error, and robot surgeons will adopt more sterile measures.
8. Courier Services
AI has introduced several social and economic changes to the delivery industry. The technology has greatly streamlined several logistics and supply chain functions. We already have robots and drones handling courier services. Apart from the manufacturing industry, which will immensely benefit from AI, another sector that will
enjoy tremendous benefits when robotic automation booms in the future is the transport sector.
9. Taxi and Bus Drivers
Interestingly, there is a 98 percent chance that this industry will become fully automated because self-driving cars are already being manufactured, as we discussed earlier in chapter five. The takeover by autonomous vehicles will not take long, as many companies are currently test-running and fine-tuning autonomous vehicle services. The Los Angeles Times revealed that within the next ten years, self-driving trucks are likely to replace 1.7 million American truckers.
10. Security Guards
Over the years, AI has made remarkable advancements in physical security. One such AI-powered solution is Yelp's security robot, which is capable of inspecting a building using its high-definition camera. The robot can also easily detect any suspicious activity with its infrared sensor and directional mic. There is also an 84 percent chance that AI will fully automate this space in the future.
11. Soldiers
Future battlefields will comprise mainly robots that adhere strictly to orders without frequent human intervention or supervision. Currently, robots are increasingly being used in military operations for tasks like intelligence and surveillance. The head of the UK military disclosed that by 2030, autonomous robots may comprise a quarter of the British army. Well, let's hope that this does not mark the beginning of an AI arms race.
AI CANNOT REPLACE THESE JOBS SOON
Our creativity as humans is limitless and this is one reason why AI cannot take over some jobs. From thought leadership to empathy, emotional intelligence to strategic thinking and negotiation – these are some of the qualities of jobs that AI cannot replace. So, what are the professions that will remain on the exclusive list of jobs that humans can handle?
1. The Writing Profession
Writers often need to ideate before they can create original and well-written content. It is currently impossible for machines to personalize content the way humans do because of the uniqueness of every piece of writing. Machines are also incapable of relating to other humans as humans do. AI may not be able to replace the empathy and creativity of writers soon. Currently, it may be possible for AI to do some writing with preprogramming, but when it comes to the creative side, robots will not be taking over anytime soon.
2. Chief Executives
Part of the job of chief executives is to examine a broad spectrum of operations. To ensure that they achieve the goals of the organization, they need to motivate and even inspire the employees working for them. Leaders are endowed with a variety of leadership skills, and there is really no single formula for being the best leader. In fact, no fundamental algorithm is capable of teaching machines to perform this task. Presently, robots lack the ability to understand people's feelings or their state of mind, and this also implies that they cannot replace this particular profession anytime soon.
3. Human Resources Managers
If organizations must effectively manage interpersonal conflict, they require the human resources department. As humans, we have non-cognitive as well as reasoning skills. Robots are unable to understand human emotions for now, which implies that this particular profession is not going to be replaced by robots soon. In fact, this sector is expected to grow by as much as 9 percent by 2024, and the probability that this profession will be taken over by robots is just 0.55 percent.
4. Lawyers
In terms of processing evidence and searching through vast databases to inform decision making, AI solutions can handle the job. But AI systems may not be capable of identifying the exact point at which to strike in the heat of an argument – at least in the next few decades. Lawyers also need to bend the rules to favor their clients, and robots do not have the emotional intelligence to persuade people, now or anytime soon. Presently and within the next few decades, they still cannot reason the way humans do.
5. Psychiatrists
Despite their experience and education, the most skilled doctors and scientists have not been able to completely understand the way our brain is wired. Connecting with another person requires a great deal of compassion and empathy, and this is something that robots are unable to do. AI systems may not be able to understand how the human mind functions; they are incapable of expressing feelings or being compassionate. So, in the foreseeable future, an AI-enabled psychiatrist is still not possible.
6. Clergyman
One of the duties a member of the clergy must perform is guiding and preaching to their audience. Clergy are expected to have faith that can inspire others, as well as empathy and emotion. Presently and in the foreseeable future, humans may not be able to teach AI systems to instill confidence in humans. Well, having an AI-powered priest would be something extremely amusing if it eventually happens in the future.
7. Public Relations Manager
In order to create buzz for their clients, public relations managers depend greatly on a network of relationships and contacts. PR managers need to create awareness regarding any given issue and also raise the needed funds. To accomplish this goal, they are expected to connect with others to get them involved in a campaign. This specific requirement is what makes this profession a safe one for humans in the future.
8. Graphics Designers
Graphic design has both a technical and an artistic aspect, and clearly grasping what clients truly want requires an immense sense of understanding. Remember, the requirements of each customer are unique, and this is what machines may not be able to achieve anytime soon. Similar to creative writing, graphic designers need to provide services that are original and created strictly to the requirements of their clients. Unless AI systems attain self-awareness, they may never possess the level of creativity required to operate in this profession.
9. Project Managers
Part of the job of project managers is to supervise a project from beginning to end. To achieve this, they must maintain constant communication with all stakeholders and will need to make calls most of the time. Of course, AI can assist project managers in several ways now and in the foreseeable future. In fact, AI support will boost the success rate of projects. However, we may not be seeing AI solutions that can replace project managers soon. AI systems may not have the needed resources to master tasks that involve changing schedules that lack any specific pattern.
NEW JOBS TO EXPECT IN THE FUTURE, COURTESY OF AI
While many are bothered about the number of jobs that AI may take over from humans, a good number of people are not worried by the possibility of a radical change in the labor market. As I mentioned before, we already witnessed a similar change when mechanization replaced manual laborers on farms around the world. Interestingly, things turned out perfectly fine. Well, it turns out that many jobs people do today are just too repetitive and boring. Over the years, creative destruction has always been part of our lives as humans, and reinvention has been with us too. While robots take some jobs away, they will also create entirely new ones – the types that may not be as boring as the ones they are taking away. So, what exactly will the new kinds of gigs that are likely to emerge within the next ten years or more look like? Here are some exciting ones that will eventually emerge, according to Cognizant, an IT services company.
Data Detective Jobs
We may eventually see ads for creative and skilled persons who can assist in
investigating mysteries in a company's big data. Those who are hired may need to interpret what a company's data means as well as the secrets it contains. Individuals with basic data skills and law enforcement experience may be in a good position to fill this kind of position. It could also be a fit for a curious, data-literate graduate in search of an entry-level job.
AI Business Development Manager
One of the things that AI may not be able to do any time soon is sell itself. The process of selling AI (either AI that is packaged into a business service or AI in its "raw" form) will be executed by human beings for now. So, expect an increase in the number of ads seeking persons who can sell AI solutions.
Master of Edge Computing
Most organizations will eventually require the services of someone who can help them keep up with the ever-increasing speed of computing. So, multinational corporations may post ads for a person who can completely reimagine their data infrastructure – dropping the data centers and embracing edge computing.
Walker/Talker
Available research data indicates that there might be new gigs for individuals who are less tech-obsessed – those who can walk, talk and empathize. While the number of jobs that AI and automation handle continues to increase, we should expect people to live much longer. Consequently, the unemployed and the underemployed will increasingly need to search for entirely new jobs, while the elderly will seek companionship. A job ad we may see in the next ten years or more may be from an organization that connects seniors with conversational companions who will be around them at home. Also, there may be a need for companions for those who need someone to talk and walk with.
Fitness Assistants
Many persons will still be dealing with weight challenges, but this time they will have someone to provide support. We may require the services of fitness commitment counselors who can assist in analyzing the data that we track with wearable tech.
AI-Assisted Healthcare Worker
You can see this as an updated version of nursing – the AI version of nursing. While such a person may possess the same health care skills as every nurse, this kind of nurse will also need to be tech-savvy. This will ensure that they can deliver health care services remotely, leveraging in-home
testing equipment and telemedicine tools. It would be possible for nurses to diagnose and even treat more health issues with the help of AI, while doctors provide support for these technicians and deal with trickier cases.
Smart City Analyst
With more smart cities being set up around the world, we may eventually require cyber city analysts who will ensure that the technologies behind smart cities are in good shape. They will also ensure that there is a steady movement of healthy data around these smart cities – citizen data, asset data and biodata. They will ensure that all technical and transmission equipment performs properly without any form of compromise.
Human-Machine Manager
We will eventually get to the era where humans and machines work together. This is where a man-machine team manager comes in. Their duty is to coordinate the activities of this new kind of workforce. Organizations may search for persons who have the skills to combine the benefits provided by AI and robots – speed, computation, accuracy and endurance – with human strengths such as judgment, versatility, cognition, empathy and creativity, in an environment where all work toward the actualization of organizational goals. Perhaps one of the major tasks of such employees would be to create an interaction system that enables humans and robots to mutually communicate their goals, intentions and capabilities, and to come up with a task planning schedule for this collaboration.
Airways/Highway Controller
Within a few years, many cities will be in dire need of highway controllers, especially with the increase in the number of flying cars, drones and autonomous cars. The role of such workers will be to help regulate airspace and roads in different cities. They will also monitor, regulate and even manipulate road and air space. Part of the job may include monitoring and programming the automated AI platforms established for space management of this new class of cars, drones, jetpacks, etc. According to Cognizant, this role will certainly come with a good dose of stress.
Back when computers were first created, it was an exciting experience for most people, and computers assisted business processes. But a good number of people were scared that they would eventually result in acute job losses. Interestingly, some decades after their creation, computers are now a part of our lives and, guess what, new and interesting categories of jobs were created. But with these new kinds of jobs came the need for reskilling and adapting to an entirely new way of performing work. Many believe AI will bring the same kind of revolution in the tech space. This will also require a similar transformation of the future workforce. We will witness a
drastic change in the way of doing things, and there will also be new sources of employment. Within the next few decades, AI solutions will be able to perform most well-structured, repetitive and mechanical activities, but machines may not be able to handle critical thinking. Human resources will always be needed, and workers must periodically upgrade their skills to remain relevant.
Chapter 19: Questions to Ask Before Implementing AI in Your Business
Key Takeaway
You need a solid framework of key performance indicators (KPIs) and objectives, as well as a smart data strategy, to ensure that your AI solution is implemented in the most valuable way.
You need to make AI solutions an integral part of your organization's core business, and the management team needs a change of mentality.
It is not enough to simply implement AI solutions in your company. The starting point is to first determine whether your business requires the use of AI systems.
When examining what AI can do for your business, examine it not from a technological point of view, but based on its business capabilities. Understand the type of tasks each technology performs, along with their strengths and limitations.
If properly and thoughtfully utilized in the right context, AI and machine learning can provide significant benefits to companies and serve as a competitive advantage. Presently, digital transformation and its advances have put most companies under immense pressure caused by the fear of lagging behind. This has also increased the willingness of many organizations to implement new technologies. Yet even when AI has been adopted, most companies still experience fundamental barriers. In fact, the number of companies that have the basic components enabling AI to provide tremendous value is small. It is not enough to simply implement AI solutions in your company. I strongly believe that the starting point is to first determine whether your business requires the use of AI systems. The best way to start your AI journey is to have an excellent understanding of where the AI opportunities are and to create defined strategies for acquiring the data AI needs. So, before you begin, go ahead and ask yourself these questions:
1. What are the problems you intend to resolve with AI?
The first thing you need to do is define the problem. What precisely is the issue your business is trying to solve? Can a machine learning model deal with it? Are AI systems known to handle such kinds of problems? Your goal is to identify the tasks that are human-capital intensive or inefficient, then ascertain how AI and machine learning solutions can help resolve them.
2. How do you intend to turn AI into an opportunity?
In simple terms, what is your business's plan to deal with the identified problems and implement the solution? This is the time to reformulate the problem definition by looking at ways to implement the solution profitably, without loss of value in the course of the transformation process.
3. What kind of solution does your business need – temporary or permanent?
You need to make AI solutions an integral part of your organization's core business, and the management team needs a change of mentality. You should decide whether to get a standardized solution, a customized solution or a temporary service – this depends on whether your business needs an AI solution for its daily processes or for a particular action.
4. Can you meet the AI model's data needs?
Based on what we have been discussing so far, you will agree with me that the quality and quantity of data your business has determines the quality of the AI model. Making use of an AI system means training it on accurate, tangible data that can feed the AI solution. Having quality historical data is essential to enable the AI system to work independently. So, the big questions are: does my business have sufficient data? How reliable are the data sources that the AI solution will use? Is there a robust data architecture in place?
You need a solid framework of key performance indicators (KPIs) and objectives, as well as a smart data strategy, to ensure the solution is implemented in the most valuable way.
5. Does the available data exist in digital form?
Did you store the data in physical files or in digital systems? You need to organize, digitize, centralize and integrate the data held in your various digital tools (such as ERPs, SCADAs, CRMs and several others) or in Excel spreadsheets, CSV files, etc. to enable you to manage the data accurately. If you fail to do this, then using the available data for AI solutions may turn out to be a difficult task.
6. What are the available resources for AI implementation?
You must be realistic when determining the availability of resources – both human and financial – that can handle the digital transformation. Do you have the right talent to deploy AI? If not, where do you get the expert talent? Do you have a budget for getting an AI system? If you are to successfully and seamlessly implement an AI solution in your company's internal systems, you must have in place a technical team that not only knows the company but can also work closely with the data scientists or developers. You need qualified teams that are capable of integrating the models you intend to implement into your company's systems. Factors that will determine the success and accuracy of the AI model include the available equipment and budget, as well as the time for developing the AI solution. These are also factors that will determine whether you should go for an on-demand service or acquire a model that your technical team can implement.
7. What happens if the AI solution fails?
Although AI solutions function through extremely sophisticated statistical correlations and algorithms, a margin of error usually exists. Do you intend to implement AI in a process with high variability and a low accuracy rate, or the direct opposite? What are the risks involved, and what level of financial loss will you suffer if the AI solution fails to work out? You should be able to ascertain how accurate the models are, based on the available data and systems, to estimate whether it is sufficient for you to proceed.
8. How do you intend to integrate AI solutions with the overall strategy of your business?
Do you have a blueprint for how to integrate AI with the people and processes in your company? What are the possible points where you feel AI may clash with existing processes? You are not going to implement AI as a stand-alone technology; instead, it should be an integrated solution that synergizes with every aspect of your business. This will help maximize your company's productivity and results. So, ask yourself: will the AI solution
function with other departments and teams, and can you detect possible issues that may arise?
9. Will my employees be affected? If yes, in what ways will the change affect them?
How will the adoption of AI affect the tasks currently being performed by your existing workforce? Will it lead to the loss of jobs? Of course, in such cases, employees may be very skeptical of change. So, it is crucial that you seek ethical solutions to avoid making employees lose their motivation and sense of value. To make the transition a smooth one, you need to create programs that will train your employees.
10. What do you expect in return after applying the AI solution?
One of the reasons why you want to implement an AI solution is to increase your productivity, reduce costs and boost profits. But how long will it take your business to recover the funds spent on the AI solution? Once the AI solution has been implemented, will it reduce your company's costs? If yes, to what extent? Since the implementation of AI and machine learning solutions involves cost, it is an important investment. This is why you must carry out a realistic estimation to evaluate the parameters of the expected return (see the toy estimate after this list). Establish possible KPIs while executing this plan to enable you to measure the return and calculate the level of value the AI solution is adding to your company.
Undoubtedly, the implementation of AI can open doors to amazing possibilities for your business, but it must be deployed properly. It will be a worthless project and you will find no reasonable return on investment if it is implemented as an experiment and without a proper plan of action.
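To illustrate the kind of estimation question 10 calls for, here is a toy payback-period calculation in Python. Every figure in it is a made-up assumption for illustration, not a benchmark for your business.

# Toy payback-period estimate for an AI project; all figures are hypothetical.
implementation_cost = 250_000.0   # one-off build and integration cost
annual_savings = 90_000.0         # projected yearly cost reduction
annual_running_cost = 20_000.0    # hosting, monitoring, maintenance

net_annual_benefit = annual_savings - annual_running_cost
payback_years = implementation_cost / net_annual_benefit
print(f"Estimated payback period: {payback_years:.1f} years")  # about 3.6 years

Even a rough calculation like this forces you to name your assumptions, which is exactly what the KPIs mentioned above should track once the solution is live.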
FRAMEWORK FOR INTEGRATING AI TECHNOLOGIES
When examining what AI can do for your business, you have to examine it not from a technological point of view, but based on its business capabilities. Although we talked earlier about the different types of AI, there are three key business needs that AI can support – gaining insight via data analysis, automating business processes and engaging with employees and customers.
Figure 45: Business needs that AI Supports
Process Automation
Based on the results of a Harvard survey of 152 projects, the most common type of process automation was the automation of digital and physical tasks – mostly back-office financial and administrative tasks. These activities involved the use of robotic process automation (RPA) technologies. These robots function like humans and perform tasks such as:
Transferring data from email and call center systems into systems of record.
Replacing lost ATM or credit cards and updating records across multiple systems.
Reading legal and contractual files to extract provisions, courtesy of natural language processing.
Robotic automation projects may well result in job losses as the technology improves. As a rule of thumb, if you can outsource a task, you can probably automate it.
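To ground this in something concrete, here is a minimal, hypothetical sketch of an RPA-style extraction step – pulling labeled fields out of semi-structured email text so they can be written into a system of record. The field names and patterns are invented for illustration; real RPA platforms wrap this kind of logic in visual tooling.

# A minimal sketch of an RPA-style extraction task: pull structured fields
# out of semi-structured email text. Field names and patterns are invented.
import re

email_body = """
Customer ID: C-10293
Requested action: replace lost credit card
Shipping address: 42 Example Street, Springfield
"""

def extract_record(text: str) -> dict:
    """Map labeled lines of an email into a record for a system of record."""
    patterns = {
        "customer_id": r"Customer ID:\s*(\S+)",
        "action": r"Requested action:\s*(.+)",
        "address": r"Shipping address:\s*(.+)",
    }
    record = {}
    for field, pattern in patterns.items():
        match = re.search(pattern, text)
        record[field] = match.group(1).strip() if match else None
    return record

print(extract_record(email_body))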
Cognitive Insight
This is the next most common use of AI systems – detecting patterns in vast volumes of data and interpreting what they mean. You can see this as "analytics on steroids," because machine learning applications in this category can (a minimal sketch of this category follows at the end of this section):
Identify insurance claims fraud and detect credit card fraud in real time.
Predict what a specific customer is likely to purchase.
Automate personalized targeting of digital ads.
Identify quality or safety issues in automobiles and other manufactured products by analyzing warranty data.
Provide more detailed and accurate actuarial modeling for insurers.
Generally, machine learning-powered cognitive insights differ from traditional analytics in three ways:
The models are trained on specific aspects of the data set.
They usually get better – they continue to improve their ability to categorize things or make predictions as they ingest new data.
They are often more data-intensive and detailed than traditional analytics.
Cognitive Engagement
The third and least common type of AI solution involves projects that engage customers and employees using machine learning, intelligent agents and natural language processing chatbots. Applications in this category include:
Product and service recommendation systems designed for retailers.
Applications that help enhance personalization, engagement and sales – including through images and rich language.
Intelligent agents offering round-the-clock customer service that handles issues ranging from technical support questions to password requests – all in the customer's natural language.
Health treatment recommendation solutions that assist care providers in creating customized care plans, taking into account each person's health status and previous treatments.
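As promised, here is a minimal sketch of the cognitive-insight category: unsupervised anomaly scoring of the kind used to flag suspicious transactions, using scikit-learn. The features and data are synthetic assumptions, and a production fraud system would be far more involved.

# A minimal cognitive-insight sketch: unsupervised anomaly scoring on
# synthetic transaction data with scikit-learn. Features are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: [transaction amount, hour of day] -- illustrative features only.
normal = rng.normal(loc=[50, 14], scale=[20, 3], size=(500, 2))
suspicious = np.array([[5000, 3], [4200, 2]])  # large amounts at odd hours
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 marks likely anomalies, 1 marks normal points
print(f"Flagged {np.sum(flags == -1)} of {len(X)} transactions for review")

Note how the model learns what "normal" looks like from the data itself rather than from hand-written rules – this is the sense in which such systems improve as more data arrives.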
Steps for Integrating AI Solutions
So, based on these three most common areas where AI solutions are applied, here is a four-step framework you can adopt when integrating AI technologies into your business – both for business-process enhancement and for increased profitability.
Figure 46: Steps for Integrating AI Solutions
Step I: Understand the Technologies
You need to understand the type of tasks each technology performs, along with its strengths and limitations. While robotic process automation is transparent, it is not capable of learning and improving. On the other hand, deep learning is an ideal option if you need an AI solution that can learn from large volumes of labeled data. But remember that the "black box" issue is often a major challenge, since you may want to know why certain decisions are made in specific ways.
Step II: Create a Portfolio of Projects
Recall the ten questions you asked yourself earlier; one of them concerned the current needs that an AI solution will address for you. So, once you have evaluated the needs and capabilities, go ahead and create a prioritized portfolio of projects. You can do this either through small consulting engagements or through workshops. Also, consider carrying out assessments in three major areas:
1. Identify the opportunities: Find out the areas that will benefit the most from cognitive applications:
a. Bottlenecks in information flow – situations where knowledge exists but is not properly distributed in the organization.
b. Scaling issues: Even when knowledge exists, it is often expensive to scale or use.
c. Insufficient resources: In some cases, organizations acquire more data than their existing infrastructure can adequately analyze and apply.
2. Determine the use cases: The second aspect of assessment examines the use cases where cognitive applications can provide reasonable value and boost the success of your business.
3. Select the technology: This is the third aspect you should look at. At this point, you need to examine whether the AI tools you are considering for each use case are capable of handling the task. For instance, intelligent agents and chatbots may frustrate your business if they fail to match human problem-solving capability.
Cognitive technologies will at some point revolutionize the way companies do business, but the smartest thing to do for now is to take incremental steps with the technology currently available while you plan for the transformational change we will certainly witness in the future.
Step III: Launch Proof-of-Concept Pilots
Since the gap between AI's current and desired capabilities is not always easy to notice, you should set up pilot projects for cognitive applications before you spread them across the entire business. This is especially suitable if you want to test several technologies simultaneously.
Business-Process Redesign
In the course of developing cognitive technology projects, you should also think through how you might redesign workflows, focusing on how tasks are allocated between AI and humans. In some cognitive projects, machines can make about 80 percent of the decisions while humans make 20 percent; in others, the opposite ratio applies (a minimal sketch of this kind of routing follows at the end of this chapter). To ensure that humans and machines compensate for each other's weaknesses and support each other's strengths, systematic redesign of workflows is essential.
Step IV: Scaling Up
Some companies have succeeded in launching cognitive pilots but have not been able to apply them across their organizations. You need to create a detailed plan for scaling up if you intend to achieve your goals. To make this possible, collaboration between the owners of the business process being automated and technology experts is essential. In most cases, scaling up also involves integration with existing systems and processes. You should commence the scaling-up process by determining how feasible the required integration will be, and whether the application will rely on a particular technology that is difficult to source. You must ensure that your business process owners communicate with the IT organization regarding scaling considerations before or during the pilot phase. It is crucial to bear in mind that while scaling up, most businesses may encounter significant change-management issues.
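As promised under business-process redesign, here is a minimal sketch of confidence-based task routing between machine and human. The threshold, case data and confidence scores are all hypothetical; in a real system the confidence would come from a trained model, and the threshold would set your own 80/20 (or 20/80) split.

# Minimal sketch of human/machine task routing. Cases with high model
# confidence are automated; the rest go to a person. All values invented.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    confidence: float  # the model's confidence in its own decision, in [0, 1]

def route(cases, threshold=0.8):
    """Send high-confidence cases to the machine, the rest to humans."""
    machine, human = [], []
    for case in cases:
        (machine if case.confidence >= threshold else human).append(case)
    return machine, human

cases = [Case("claim-001", 0.95), Case("claim-002", 0.55), Case("claim-003", 0.88)]
auto, review = route(cases)
print("Automated:", [c.case_id for c in auto])
print("Human review:", [c.case_id for c in review])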
Chapter 20: Maximizing Your AI Experience
Key Takeaway
Looking at the costs of adopting AI, and not only the benefits, is a balanced way to examine the prospects of using AI.
The future benefits of leveraging AI now are exciting, and this has the potential to give early-adopter companies an edge over those still contemplating whether to adopt AI.
A good number of businesses and organizations have not yet restructured their business strategies, organizational structures, risk mitigation, talent strategies and development methodologies to align with a world that is already moving at what experts describe as AI speed.
You can enjoy a quick return on your investment by improving your company's efficiency and productivity with AI's advanced automation.
If you must leverage AI to support critical business decisions based on sensitive data, then you must ensure that you are fully aware of exactly what the AI is doing and why it is doing it.
HOW TO HANDLE THE TOP FIVE AI TRENDS YOUR BUSINESS WILL FACE
You would agree with me that this has been quite a journey as we have examined several aspects of AI and how it is transforming different sectors. So, how exactly can you maximize your AI experience? Indeed, the year 2020 was a tough one for homes, governments and companies around the world. But despite the challenges caused by COVID-19, more companies are accelerating their plans to adopt AI. Records from the latest PwC AI survey reveal that in the United States, for instance, a quarter of the organizations participating reported widespread use of AI – an increase on the 2019 figure. Also, about 54 percent of those surveyed are already adopting AI and have progressed beyond the foundation phase. Interestingly, many of the participating companies disclosed that they are already enjoying the benefits of AI, partly because the technology has been an effective response to the issues caused by the global pandemic. The majority of organizations that adopted AI fully disclosed that they enjoyed remarkable benefits. Despite this positive news, one challenge still looms. A good number of AI investments have not been able to provide tangible results and have only managed to appear as "pretty shiny objects" without real benefits. A good number of businesses and organizations have not been able to restructure their business strategies, organizational structures, risk mitigation, talent strategies and development methodologies to align with a world that is already moving at what experts describe as AI speed. Check out the details showing how far companies have progressed with the adoption of AI. The figures below are based on respondents' answers when asked: "To what extent is your company looking to integrate AI technologies into its operations?"
Figure 47: Source: PwC 2021 AI Predictions
Based on this result, it is obvious that a lot of work needs to be done, but the good news is that this work will undoubtedly yield remarkable benefits today and serve as a strong foundation for success in the future.
STRATEGIES FOR MAXIMIZING AI
To ensure that you maximize your AI experience, the following strategies will be highly useful.
Understand that there is no Uncertainty
The starting point is to recognize that the AI trend is clear – more companies in the United States and around the world are increasing their AI investments. The PwC survey focused on the US, and while I will be using it as a case study, you should bear in mind that the results are not very different in other countries. Back to the results of the PwC survey: about 52 percent of respondents disclosed that they have already accelerated their AI adoption plans, especially given the COVID-19 challenges. Of course, the results of this increased adoption will manifest in the years to come. Here is a breakdown of how these companies "accelerated" their adoption:
New AI use cases (40 percent)
Increased AI investments (40 percent)
Also, 86 percent of all those who participated in the survey stated that in 2021, AI will be a "mainstream technology" for their organizations.
Some of the concrete benefits that companies already enjoy courtesy of AI include improved decision-making, revenue growth and enhanced customer experience. Results from the PwC survey also confirmed that the organizations that have already launched AI across their operations are positive about growth amid the COVID-19 pandemic: about 25 percent of these respondents expect revenue growth, against the 18 percent figure for all companies. The truth is that the future benefits of leveraging AI are exciting, and this has the potential to give early-adopter companies an edge over those still contemplating whether to adopt AI. In fact, the edge is so great that their competitors may never be able to catch up or overtake them in the future. Companies that are AI leaders are already establishing a virtuous cycle sometimes referred to as a "flywheel." The adoption of AI results in better products, improved productivity and excellent customer experiences. Superior customer experiences in turn attract more customers, who end up sharing more of their data.
Figure 48: AI flywheel
An increased volume of data helps build smarter AI algorithms, which in turn help create better products and experiences. With better products, companies attract still more customers, who continue to share their data, leading to the creation of even smarter AI systems. The fact that many firms that have already adopted AI are enjoying more rewards than those still exploring ways to get involved is proof that this "flywheel" truly exists. However, I must point out that attaining this exciting cycle is not an easy task. Looking at the costs of adopting AI, and not only the benefits, is a balanced way to examine the prospects of using AI. According to the PwC survey, 76 percent of companies surveyed are struggling to break even on their AI investments. But you would also agree with me that breaking even is still an excellent result when you consider that such an investment could be the company's future foundation.
However, your focus should be on smarter ways to invest, to ensure that you get better returns both now and in the future. Here are ways to maximize your AI investments:
Create virtuous cycles for data and talent: Once you have successfully trained your chosen AI solution on standardized and cleansed data (a brief sketch of such data preparation follows this list), rest assured that the AI will start to extract and standardize more data on its own – extracting all the information it requires from both physical and digital sources. What if you already have AI talent with experience in developing high-performing algorithms? That is also an excellent position to be in: you can increase your profits and lead in innovation, further attracting more highly skilled individuals.
Take note of the cost: You need to bear in mind that AI costs go beyond having the right talent. They involve investing in gathering, cleansing and labeling data. Also, AI demands high computing power, which means you will be investing in technology too. Assessing the costs accurately will ensure that you channel your AI investments mainly to the applications that have genuine business value.
Select the appropriate operating model: You also need to select an AI operating model that will guarantee a consistent approach to governance, data and model use in your organization. An excellent option would be a centralized hub, but note that there are other good options too. Organizations can enjoy significant rewards just by embedding AI capabilities in their business units, as long as they have AI-savvy managers and well-structured governance in place.
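As referenced in the first point above, here is a minimal sketch of what "standardized and cleansed data" can mean in practice, using pandas. The column names, values and cleaning rules are invented for illustration; your own data will dictate the actual steps.

# A minimal data-cleansing sketch with pandas; all columns and rules
# are hypothetical examples of standardization, typing and deduplication.
import pandas as pd

raw = pd.DataFrame({
    "customer": [" Acme Corp", "acme corp", "Beta GmbH", None],
    "amount":   ["1,200.50", "980", None, "3,400.00"],
})

clean = raw.dropna(subset=["customer"]).copy()                 # drop unusable rows
clean["customer"] = clean["customer"].str.strip().str.lower()  # standardize names
clean["amount"] = (clean["amount"]
                   .str.replace(",", "", regex=False)          # strip separators
                   .astype(float))                             # enforce numeric type
clean = clean.drop_duplicates(subset=["customer"])             # deduplicate entities
print(clean)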
MAKE AI YOUR STRATEGIC ALLY
You can maximize your AI experience and enjoy a quick return on your investment by improving your company's efficiency and productivity with AI's advanced automation. Of course, no one dislikes a fast ROI, and interestingly, it is among the top goals of AI strategies. However, other goals are becoming increasingly important thanks to increased innovation as well as revenue growth. This makes it all the more crucial to treat AI as an ally when making strategic decisions. Records from the PwC survey show that 58 percent of respondents disclosed that they have increased their investments in AI specifically for workforce planning.
Figure 49: How Organizations are handling the AI Talent Issue, PwC Survey
Also, 48 percent of them revealed that they are ramping up investments in simulation modeling as well as in supply chain resilience. About 43 percent disclosed that they are increasing their investments in AI mainly for scenario planning, while another 42 percent added that they are upping investments for demand projection. All these investments are capable of making AI a strategic ally, as it can help bridge the gap between idea and execution, resulting in faster and better decisions. The COVID-19 pandemic is one of the factors that propelled many companies to accelerate the advanced use of AI; it benefited businesses during the pandemic, and this payoff will certainly continue for years to come. With the right data and models, AI can detect changes that may take place in the market as well as potential risks to the supply chain. In fact, AI can think through several options for your investments, workforce and market strategies. It can assist you in making decisions and acting appropriately while also continually monitoring and improving its own performance. AI's dynamic sense-think-act approach is what puts this kind of strategy within reach on a daily basis. So, how exactly do you ensure that AI is your ally?
1. Start by making strategy a game: With AI, you can gain access to a variety of scenarios for your organization, because it can model future conditions and the kind of impact they will have on your business. AI can also help assess multiple responses (in supply chains, go-to-market or workforce) that may work for you. Even in highly uncertain situations, the results will be strategic, data-driven decisions.
2. Keep strategizing: It is crucial that you become a strategic executor, since AI will be constantly ingesting data and providing strategic forecasts and models. Continue to rethink and make changes to your strategy, and avoid restricting this exercise to just once each year.
3. Make your operations bulletproof: AI has the potential to boost resilience. This is because it continuously detects new threats as well as opportunities, quickly thinks through the kind of impact they will have on your business, and acts as quickly as possible. This is how AI helps mitigate disruptions and allows you to take advantage of new opportunities as fast as possible.
MOVE FROM BEING AWARE OF RISKS TO TAKING ACTION AGAINST RISKS
It is good to know that most companies are aware of the risks associated with the use of AI. The bad news, however, is that most organizations are not doing enough to mitigate those risks. For instance, half of the respondents to the PwC survey listed responsible AI among their top three priorities for AI applications in 2021, citing goals such as improving explainability, privacy, governance and bias detection. However, the picture was quite different when it came to action. Just a third of them disclosed that they have plans to make AI more explainable, lower the level of bias, enhance its governance, monitor AI's performance, develop and report on AI controls, ensure that it fully complies with privacy regulations, and enhance its defenses against the threat of cyberattack.
The only way to Mitigate AI risks is to Embrace Responsible AI
If you must leverage AI to support critical business decisions based on sensitive data, then you must ensure that you are fully aware of exactly what the AI is doing and why it is doing it. Is the AI in any way violating someone's privacy? Is it really making valid and bias-aware decisions? Is it possible for you to govern and monitor what this very powerful technology is doing? The truth is that AI's technology, data and talent tend to be spread across multiple functions and third parties. Since AI will continue to learn and change itself, it is extremely crucial that you also continually monitor the AI (as well as its data), from the beginning of model design through development, deployment and continuous modification. Another challenge you should consider is the fact that AI is a complex technology that a good number of company executives – including IT experts and risk officers – are yet to properly understand; remember the "black box" I defined earlier. So, if you are thinking of using AI, or if you have already started using it, then you need to make it responsible immediately.
Making your AI responsible
So, how exactly do you make your AI responsible? Here are some helpful suggestions to start with – and always seek out more information to ensure you mitigate AI risks.
1. Assess your risks and have a plan to test and monitor: How does AI affect your reputational, financial and operational risk in every location where you (as well as your partners) are making use of it? You need to closely examine the effect of AI on these aspects, then update the way you use it accordingly, ensuring that the controls cover all stages of the AI life cycle. This is an excellent way to increase trust in your AI program.
2. Your governance should be dynamic: Remember, AI does not stop learning; to function properly, it is always learning and transforming itself. So, your governance must move at the same speed as your AI. This implies that your responsible AI toolkit has to remain active – effectively monitoring model performance, new sources of risk and the potential for bias – and continue to adapt (a minimal drift-check sketch follows this list).
3. Operationalize your ethics: You have to establish the right frameworks and toolkits to continuously assess current and planned AI models and ensure that they are robust, explainable, ethical and fair. If you fail to do this, then the AI that is supposed to represent your values may end up betraying them.
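To make the dynamic-governance point concrete, here is a minimal sketch of an automated performance check that flags drift for human review. The metric and tolerance are illustrative assumptions; real monitoring would track several metrics, including bias indicators.

# Minimal sketch of ongoing model monitoring: compare live accuracy
# against a baseline and flag drift. Thresholds are illustrative only.
def check_drift(baseline_accuracy: float, recent_accuracy: float,
                tolerance: float = 0.05) -> bool:
    """Return True if performance has degraded beyond the tolerance."""
    return (baseline_accuracy - recent_accuracy) > tolerance

if check_drift(baseline_accuracy=0.92, recent_accuracy=0.84):
    print("Model drift detected: trigger review and possible retraining")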
You should go Beyond Upskilling
Upskilling is essential for every organization that desires not only to stay in business but to remain highly competitive. However, in terms of matching the demands of an AI-centered working environment, it is just not enough. One of the predicted long-term impacts of AI is net job growth; however, as I mentioned earlier, the kinds of jobs being created differ from the ones we have always known. It is crucial for business leaders to start reevaluating precisely the kind of workforce that will be required in the future. On the part of employees, this is the right time to consider the kinds of jobs that will be in high demand in future AI-centered workplaces. You can check out the jobs that may likely be taken over by AI, as well as the ones that AI will not be handling any time soon, in chapter 18. New jobs will definitely affect a company's tech teams, and this means that members of the tech team must adapt by learning new ways of thinking and working. AI model development differs significantly from traditional software development. Traditional software is rules-based: it adheres to unchanging rules in transforming data (like invoices) into output (payments). AI models, on the other hand, are always changing, and they work not with certainties but with probabilities. An AI model may examine both data and output in a bid to continuously adapt to new invoice formats and vendors, and even adjust its rules to make predictions about the probable size of future invoices. AI's ever-changing, ever-learning nature implies that a rigid, linear development approach with fixed handoffs will not really work.
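To make the contrast concrete, here is a hedged, hypothetical sketch. The rule and the probability threshold are invented; the point is only that the first behaves identically forever, while the second shifts whenever the model behind it is retrained.

# A hypothetical contrast between a fixed rule and a probabilistic decision.
# Numbers and field names are illustrative, not from any real system.

def rule_based_approval(invoice_amount: float) -> bool:
    """Traditional software: a fixed rule that never changes on its own."""
    return invoice_amount <= 10_000.0

def model_based_approval(fraud_probability: float, threshold: float = 0.05) -> bool:
    """AI-style decision: acts on a probability that shifts as the model retrains."""
    return fraud_probability < threshold

print(rule_based_approval(9_500.0))   # True, and True forever
print(model_based_approval(0.03))     # True today; may differ after retraining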
What AI teams need to do is constantly test, experiment and learn, just the way scientists do. Over time, this strategy will guide not only your AI and tech teams but your entire workforce. It is possible for your business to get there; however, now is the right time to act.
BUILDING AN AI-READY WORKFORCE
As we gradually wrap things up, here are some tips to help you establish an AI-ready workforce.
1. Focus on hiring for the hottest jobs of the year: If your organization is building its own AI, it will need both machine learning engineers and model ops engineers, with skills that cut across data science and software engineering. The job of machine learning engineers is to assist in integrating, scaling and deploying models. Model ops engineers, for their part, monitor and enhance post-deployment model performance and stability.
2. Democratize carefully: It is important to democratize AI, since it will help reduce repetitive tasks at various levels of your organization while increasing innovation courtesy of plug-and-play AI solutions. However, if AI democratization is to work for you, you must offer the right training and governance. Also, you may need to leave data scientists and engineers in charge of the riskier and more sophisticated models and use cases.
3. Cultivate an AI culture: It is also crucial to establish and cultivate an AI culture, since your staff will be working more with data. They will be cultivating an experimental mindset that questions data and models and always wants to improve them.
Keep Reorganizing for AI
AI reorganization does not just involve breaking down silos; a cultural shift is required to ensure that the decisions made by all stakeholders are based more on data, as well as on the simulations and forecasts produced from that data courtesy of AI. AI reorganization also requires the integration of intelligent machines into your organization. You can constantly improve the quality of your decisions when you have AI models that are always improving themselves. But it is important that your business is prepared to pivot fast – not just on an annual planning cycle – so consider organizational structures that can keep up with such speed. Of course, you may assume that this kind of organizational transformation is nearly impossible, but it is something that needs to happen. Various AI and analytics applications exist; the easiest is automating routine tasks, but even though it is the easiest, it is no longer at the top of the list of priorities for companies.
A PwC survey revealed that in 2020, 35 percent of businesses said that automating routine tasks was a top priority. This number dropped to 25 percent in 2021. Why the significant drop? Is it because automating routine tasks is no longer a profitable use of AI? Of course not. The truth is that many organizations are moving beyond just automating routine tasks and now have new priorities, including more strategic uses of AI. This also explains why reorganization is required. Take a look at the top-rated AI applications as of 2021.
Figure 50: Source: PwC Survey
Reorganizing for AI
To effectively reorganize for AI, you need to consciously treat AI, analytics and automation as part of a unified strategy. You can do this via centralized governance or a hub, to help boost your ability to monetize data, establish a data-driven business culture and lower your business risks. You need to provide AI with the appropriate technical foundation it requires to excel. This includes a platform architecture designed for your business processes, unique data sources and use cases. While some companies prefer to build this platform in-house, others may choose third-party providers as a more cost-effective option. Regardless of your choice, just ensure that you pick the one that is suitable for your business. If you are to discover and take advantage of the new business opportunities offered by AI simulations and forecasts, you need continuous collaboration between data scientists, engineers, staff and line-of-business managers. Also, endeavor to set up clear lines of communication to prevent possible friction.
Conclusion
Indeed, this has been an interesting journey, and I hope you had a good time reading and were able to gain tangible insights that will help you move forward as a business owner, an employee or even an investor. We started by looking at the history of AI, how it has evolved over the years and the difference between AI and machine learning. We also looked at the meaning of machine learning and the various ways data scientists use it to train AI algorithms. Other concepts we covered include the combination of AI with other emerging technologies like blockchain, the Internet of Things, big data and robotics. AI is already transforming different industries – healthcare, agriculture, transportation, advertising, etc. Are you the owner of a small company or a major corporation, or an employee working in an industry not related to IT? Regardless of the size of your business, the kind of work you do, or the services or products you provide, the AI toolkit has something that will certainly help move your life forward positively. I have examined some of the ways AI can help different industries, but the truth is that I have not been able to cover every sector, due to the scope of this book. AI and machine learning will undoubtedly place you ahead of your competitors, especially those who are not fast enough to embrace them. As an employee, being aware of how AI will change the labor market will help you strategically position yourself to get a better job, even if you are displaced by automation. You will benefit immensely from having an AI mindset, which is helpful if you want to apply machine learning to your business data. Having an AI mindset will help you create a roadmap for the implementation of an AI solution in your organization. It will also assist you when you want to perform analysis and get unique AI-generated insights. Now that you have finished reading this book, congratulations. But I have one question for you – what next? What are you going to do with the information you just got from this book? Are you going to simply enjoy the information concerning the current and future exploits of AI, or are you going to ask, where do I start? I have already created a roadmap to make things easier for you. So, I encourage you to take action immediately. It is believed that how fast you act on a new idea determines whether you will act at all. The likelihood of procrastinating increases the longer you spend deliberating whether to act on what you just read. AI is data-hungry, and the more data you have, the more accurate your AI solution will be at making predictions. Go ahead and make that move – contact an AI specialist and discuss your business needs. In the next decade, most of what we currently discuss as possible achievements in the AI space will likely be realized or even exceeded. Individuals and organizations that take action now will likely move ahead of those that fail to take AI seriously. Where do you want to belong?!