Diana Bengel
Organizational Acceptance of Artificial Intelligence Identification of AI Acceptance Factors Tailored to the German Financial Services Sector
Diana Bengel
Herrenberg, Germany

This thesis was written as part of the author's master's studies at Leeds University Business School and was awarded the Society of Advancement of Management Studies Prize 2018.
ISBN 978-3-658-30793-6    ISBN 978-3-658-30794-3 (eBook)
https://doi.org/10.1007/978-3-658-30794-3

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2020

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Responsible Editor: Carina Reibold

This Springer Gabler imprint is published by the registered company Springer Fachmedien Wiesbaden GmbH, part of Springer Nature. The registered company address is: Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany.
Acknowledgements
Firstly, I would like to give special thanks to my supervisor Dr. Alistair Norman from Leeds University Business School for his continuous support, his constructive criticism and advice, and his patience. His guidance helped me to proceed smoothly with the dissertation and to stay on track. Furthermore, I would like to thank all interview partners for their willingness to participate and for sharing personal views, practical insights and impulses. I enjoyed conducting the interviews and exploring new ways of thinking. I would like to express my gratitude to Kristin Häußermann, Marion Ultsch, Helen Wrona, Laura Trickl and Wolfgang Bengel for taking the time to proofread my work. I am also grateful to David Lorenz, Christopher Duprée, Marcello Davi and Oliver Keinath, who leveraged their personal networks to find interview partners. A special thanks goes to my manager Barbara Koch, who showed a great deal of understanding and gave me some time off from work to finish the dissertation. Most importantly, I would like to thank my family and my boyfriend for providing unfailing support and continuous encouragement throughout my years of study and through the process of writing this dissertation.
Abstract
Organisations are relying on technological advances like Artificial Intelligence (AI) to ensure their market success and existence. The successful introduction of new technologies into organisations highly depends on their acceptance. Existing concepts like the diffusion of innovations or the technology acceptance model neither address the organisational context sufficiently nor focus exclusively on artificial intelligence. Hence, this dissertation aims to provide an overview of the determinants influencing the acceptance of artificial intelligence in an organisational context in the German financial services industry. To gain practical insights, 17 qualitative expert interviews were conducted to capture perceptions of potential influencing variables, confirming existing factors and exploring new ones. The research found that the acceptance of artificial intelligence is influenced by multiple, interrelated variables. The resulting model presents 29 acceptance factors that are divided into five perspectives: organisational (e.g. corporate culture and communication), individual (e.g. fear of job loss and mindset), financial (e.g. cost savings and productivity gains), technological (e.g. training efforts and functionality) and societal (e.g. data privacy implications and public perception).
Contents
1 Introduction
  1.1 Relevance of the Research Topic
  1.2 Research Question and Objectives
  1.3 Dissertation Outline
2 Literature Review
  2.1 Disruptive Technology Innovations
  2.2 Artificial Intelligence
    2.2.1 Definition and Demarcation of AI
    2.2.2 Application Areas of AI in the FSS
    2.2.3 Perceived Benefits and Challenges of AI
  2.3 Technology Acceptance
    2.3.1 Technology Acceptance Model
    2.3.2 Other AI-Related Factors
    2.3.3 Artificial Intelligence Acceptance Model
3 Methodology of Research
  3.1 Research Philosophy
  3.2 Research Design
    3.2.1 Method
    3.2.2 Interview Protocol
    3.2.3 Sample Selection
    3.2.4 Data Analysis
  3.3 Research Ethics
4 Research Findings
  4.1 Analysis and Discussion of Interview Findings
    4.1.1 Understanding of AI and Application Areas
    4.1.2 Perceived Benefits and Drawbacks
    4.1.3 Organisational Factors
    4.1.4 Individual Factors
    4.1.5 Financial Factors
    4.1.6 Technological Factors
    4.1.7 Societal Factors
  4.2 Adaption of the Artificial Intelligence Acceptance Model
5 Conclusion
  5.1 Reflection and Outlook
  5.2 Limitations and Opportunities for Future Research
List of References
List of Abbreviations
AGI    Artificial General Intelligence
AI     Artificial Intelligence
AIAM   Artificial Intelligence Acceptance Model
AML    Anti-Money Laundering
ANI    Artificial Narrow Intelligence
ASI    Artificial Super Intelligence
DL     Deep Learning
FSS    Financial Services Sector
GDPR   General Data Protection Regulation
IT     Information Technology
KPI    Key Performance Indicator
ML     Machine Learning
TAM    Technology Acceptance Model
TPB    Theory of Planned Behavior
TRA    Theory of Reasoned Action
UTAUT  Unified Theory of Acceptance and Use of Technology
List of Figures
Figure 2.1  Rogers' Innovation Adoption Curve
Figure 2.2  Distinction between ANI, AGI and ASI
Figure 2.3  Differentiation between AI, ML and DL
Figure 2.4  Technology Acceptance Model
Figure 2.5  Initial Artificial Intelligence Acceptance Model
Figure 3.1  Interview Protocol
Figure 4.1  Final Artificial Intelligence Acceptance Model
1 Introduction
1.1 Relevance of the Research Topic

Hardly any other research field has lately attracted as much attention as Artificial Intelligence (AI) (Fraunhofer, 2018). It has often been the subject of discussion over the past years, with many opinions about what it is, what it is supposed to do and where it is going (Innovation Center Denmark, 2018). The term AI describes the ability of machines to think and imitate intelligent human behavior (Merriam-Webster, 2018). AI has been identified by Gartner (2017a) as the biggest strategic technology trend for 2018 and promises to be the most disruptive technology of the next decade. It is expected to transform the world and change the lives of millions of people by addressing the world's most intractable challenges, from climate change to poverty to disease (European Political Strategy Centre, 2018). At an organisational level, AI is expected to revolutionise the way businesses of all industries operate (Innovation Center Denmark, 2018). According to a recent forecast, the global business value of AI is estimated to reach $1.2 trillion in 2018, representing an increase of 70% compared to 2017 (Gartner, 2018). In an advancing digital world, the progress and enthusiasm for AI is driven by three factors that build upon each other: 1) the increasing volume, velocity and variety of structured and unstructured data from various sources, 2) more sophisticated AI algorithms and 3) faster, cheaper and stronger computing power (National Science and Technology Council, 2016; Innovation Center Denmark, 2018). Anyone who uses Google search, asks Apple's Siri about the weather or requests that Amazon's virtual assistant Alexa play music exploits AI capabilities (Sharma, 2017).
However, the immense potential of AI can only be harnessed if the importance of human beings is not neglected in the change process (Kessler & Martin,
2017). Besides technical difficulties, errors in project management and missing training possibilities, failures in adopting new technologies also include low acceptance by employees and the resulting lack of usage (Thim, 2017). Various researchers have developed acceptance models that address the adoption of new technologies (Meuter et al, 2000; Chin, 2015). The most widely used is the Technology Acceptance Model, which serves as the basis for this research. It usually serves as a quantitative tool to prove correlations, whereas in this investigation it is used as a qualitative instrument to reveal possible acceptance factors. AI adoption depends greatly on the industry with regard to its market size, challenges, willingness to pay and digitalisation level (Batra et al, 2018). The author focuses on the Financial Services Sector (FSS) because of existing contacts from her work environment, in which she experiences a rising demand for and openness towards AI among banks and insurance companies. Financial services customers are increasingly empowered and cost-conscious while demanding better quality, new digital experiences and personalised services (Storholm, 2018; Boonsiritomachai & Pitchayadejanant, 2017). The FSS faces diminishing margins and constant pressure to reduce operational costs by streamlining processes while digitalising business models (PwC, 2017). In addition, competition continuously increases with start-ups in the financial technology space, so-called fintechs, which create new digital offerings across various channels. At the same time, financial institutions have to manage rapidly growing volumes of financial transaction data (Fraunhofer, 2018). Apart from that, the regulatory environment grows in size and complexity: since 2017 alone, 500 new legal norms have been added to the German market, an increase of 20% compared to the previous year (Zissner, 2018). Besides, fraud and cyberattacks on data security are continually increasing.
Within this ever-changing landscape, banks and insurers must transform themselves radically and address these challenges with innovative and disruptive approaches. Therefore, they are continually exploring ways to leverage technologies like AI to redefine their products, processes and strategies as well as to reduce costs, drive automation, enable data-driven decisions and improve customer experience (Bashforth, 2018).
1.2 Research Question and Objectives

The acceptance of technology and its influencing factors have already been subject of research from a business-to-consumer perspective, whereas almost no research has been conducted on the role of organisations and their employees in the adoption process from a business-to-business perspective (Thim, 2017). As the author
is working in IBM's German sales department for its AI solution Watson in the FSS and experiences difficulties in positioning it in this industry, it is of high interest to analyse the organisational perception and acceptance of AI in the FSS in order to drive higher adoption. As a result, this dissertation addresses the following research question:

What factors influence the organisational acceptance of artificial intelligence in the German financial services sector?

The main objectives of the dissertation are to:

i. understand the nature of artificial intelligence and its manifestation in financial services;
ii. identify major factors influencing the organisational acceptance of artificial intelligence in the German financial services sector;
iii. create an artificial intelligence acceptance model based on the technology acceptance model and literature findings;
iv. verify and adapt the developed framework by conducting expert interviews.
1.3 Dissertation Outline

The dissertation is structured into five major sections. The first chapter lays the foundation by introducing the research topic and outlining the research objectives. Chapter 2 reviews disruptive technology innovations to put the research topic into an overall context. Subsequently, AI is defined, its main application areas in the FSS are presented and perceived benefits and challenges are analysed from an organisational point of view. Afterwards, the popular Technology Acceptance Model (TAM) is reviewed and AI-specific acceptance factors are considered in order to develop an Artificial Intelligence Acceptance Model (AIAM). After the literature review, Chapter 3 deals with the chosen research methodology, comprising research philosophy, research design and research ethics. Chapter 4 then outlines and critically discusses the findings of the qualitative analysis in the form of expert interviews in order to verify and adapt the previously developed AIAM. The final Chapter 5 reflects on the results and discusses limitations and opportunities for further research.
2 Literature Review
This chapter presents literature findings, which build the foundation for the empirical investigation. Section 2.1 highlights aspects of disruptive technology innovations like AI. Section 2.2 gives an overview of what AI is, how it is already used in the FSS and what amplifies or hinders its usage from an organisational point of view. Finally, Section 2.3 discusses the TAM and identifies further factors relevant for AI acceptance in order to develop an Artificial Intelligence Acceptance Model.
2.1 Disruptive Technology Innovations

According to Porter (2008), the dynamics of markets have completely changed due to globalisation, digitalisation and technology innovations. Organisational success largely depends on the capability to transform through radical or incremental improvements. However, the launch of new technical innovations often fails due to deeply rooted structures, cultures and routines of organisations (Washington & Hacker, 2005). Those companies that adapt quickest to the changing business environment create a competitive advantage, while those that refuse to change get left behind (Dahlman, 2007).
Electronic Supplementary Material The online version of this chapter (https://doi.org/10.1007/978-3-658-30794-3_2) contains supplementary material, which is available to authorized users.
For decades, financial services firms relied on incremental improvements of their offerings and were not required to create innovations that shape markets and business models (Berry et al, 2006). Enabled by technologies, a growing number of new market players, such as fintechs, offer services that continuously challenge established financial services companies to come up with new ideas (Das et al, 2018).

The theory of 'diffusion of innovations' builds the foundation for innovation acceptance among individuals and organisations. According to Rogers (1995), an innovation can be understood as an idea, habit or item that is perceived as new. Diffusion, in turn, is defined as the process by which an innovation is communicated through certain channels over time among the members of a social system (Rogers, 1995). The diffusion of innovation model covers three major areas, which are presented below (Taherdoost, 2018).

I. Innovation decision process
Adoption happens after going through an innovation decision process: 1) from first knowledge of the innovation, 2) to persuasion, 3) to a decision to adopt or reject, 4) to the implementation of the new idea and 5) to the confirmation of this decision.

II. Adopter characteristics
Every new technology goes through an adoption life cycle, which Rogers describes with an S-shaped adoption curve. As illustrated in Figure 2.1, Rogers (1995) identified five categories of innovation adopters: Innovators, Early Adopters, Early Majority, Late Majority and Laggards. The theory states that 2.5% of the population are innovators, the next 13.5% early adopters, then 34% early majority, followed by 34% late majority and 16% laggards. Ram & Sheth (1989) explain that every adopter group differs in the level and type of resistance, which in turn affects the adoption time. Organisations should be at least among the early majority in using new technologies to avoid falling behind in today's dynamic business environment (Dahlman, 2007).
Figure 2.1 Rogers' Innovation Adoption Curve (S-shaped market share over the five adopter categories). Source: Own depiction based on Rogers (1995)
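As a brief aside (not part of the original text): Rogers' five shares are not arbitrary, they follow from partitioning a normal distribution of adoption times at two and one standard deviations below the mean and one standard deviation above it. This can be checked with a few lines of Python using only the standard library:

```python
from math import erf, sqrt

def norm_cdf(z: float) -> float:
    """Cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Category boundaries, expressed in standard deviations from the mean adoption time.
categories = [
    ("Innovators",     None, -2.0),   # earlier than mean - 2 sd
    ("Early Adopters", -2.0, -1.0),
    ("Early Majority", -1.0,  0.0),
    ("Late Majority",   0.0,  1.0),
    ("Laggards",        1.0, None),   # later than mean + 1 sd
]

for name, lo, hi in categories:
    p_lo = norm_cdf(lo) if lo is not None else 0.0
    p_hi = norm_cdf(hi) if hi is not None else 1.0
    print(f"{name:15s} {100 * (p_hi - p_lo):5.1f} %")
```

This prints approximately 2.3, 13.6, 34.1, 34.1 and 15.9 percent, which Rogers rounds to the familiar 2.5%, 13.5%, 34%, 34% and 16%.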
III. Innovation characteristics
The third classification of Rogers (1995) involves five characteristics of an innovation that determine its acceptance. 'Relative Advantage' describes the degree to which the innovation offers improvements over existing systems. Secondly, 'Compatibility' expresses its consistency with the existing values and norms of users. Thirdly, 'Complexity' reflects how difficult the innovation is perceived to be to understand and use. 'Trialability' is the opportunity to try and test the innovation before committing to its use. Lastly, 'Observability' exists when the innovation has visible results.
2.2 Artificial Intelligence

2.2.1 Definition and Demarcation of AI

AI has been recognised as a field of study since the 1950s, when it was believed that machines would one day reason like humans (Harris, 2010). However, when talking about AI nowadays, each person has their own preconception of what it is and which terminologies to use (Whitby, 2009). The basic understanding is similar in emphasising human intelligence, whereas the detailed comprehension shifts depending on who provides the definition (Marr, 2018a). According to the Oxford Dictionary (2018), AI means that computer systems are capable of carrying out tasks that typically require human intelligence. Bench-Capon & Dunne (2007) see the value
of AI in supporting humans in repetitive tasks and, consequently, in a harmonious coexistence of people and machines. AI systems are able to understand and process natural language or text, recognise images, translate languages, convert speech to text or text to speech and hold conversations (Hof, 2013). This leads to their core capability of solving problems by identifying patterns, drawing conclusions and making decisions (Shabbir & Anwer, 2015). The distinctive feature of AI is its self-learning nature, as it improves with each interaction and enhances its capabilities and knowledge (Adams, 2017). According to Scardovi (2017), AI should 1) have the capability to process natural language, 2) have machine learning capabilities in order to learn from every interaction without explicit prior programming and 3) be able to generate hypotheses by leveraging predictive capabilities which are verified by evidence. As presented in Figure 2.2, three forms of AI can be distinguished: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI) (Innovation Center Denmark, 2018). ANI, often referred to as weak AI, is programmed to execute specific tasks without the ability to self-expand its functionality. AGI, also known as broad or strong AI, equals human intelligence by being able to perform multiple tasks while improving its capabilities. ASI is the strongest form, as its intelligence goes beyond human capabilities (Medium, 2017).
Figure 2.2 Distinction between ANI, AGI and ASI. Source: Own depiction
– Artificial Narrow Intelligence: human-level intelligence in one specific task; specialist in one area
– Artificial General Intelligence: human-level intelligence; machine intelligence equals human intelligence in any area
– Artificial Super Intelligence: exceeds human-level intelligence; machine intelligence exceeds human intelligence in every task
However, all currently existing AI systems are regarded as weak AI and are at the level of digital assistants (Mannino et al, 2015; Medium, 2017). Their narrow intelligence becomes apparent when they give wrong answers to questions they are not trained for (Greenwald, 2011). Experts think it is likely that AGI will develop within the first half of this century, whereas ASI might be available by the end of this century (Mannino et al, 2015). When speaking about AI, the terms Machine Learning (ML) and Deep Learning (DL) are often used interchangeably, even though they do not have the same meaning (McClelland, 2017; Marr, 2016). Both include the term learning, which is the ability to improve behaviour based on experience (Poole & Mackworth, 2017). As Figure 2.3 shows, ML and DL are the underlying technologies that enable AI, which is the ability to imitate human intelligence. ML is the underlying ability of a system to learn on its own without being explicitly programmed (McClelland, 2017; Innovation Center Denmark, 2018). DL is a subset of ML and the technique which enables learning by using multiple layers of neural networks and vast amounts of learning data (Hulick, 2016; Innovation Center Denmark, 2018).
Figure 2.3 Differentiation between AI, ML and DL. Source: Own depiction based on Ehrmantraut (2018)
– Artificial Intelligence: human intelligence exhibited by machines; ability to reason like humans
– Machine Learning: approach to achieve AI; ability to learn without explicit programming
– Deep Learning: technique for implementing ML; ability to learn by recognising patterns in data based on neural networks
2.2.2 Application Areas of AI in the FSS

AI already affects all sectors and will continue to do so as digitalisation progresses (Fraunhofer, 2018). The FSS is one of the leading industries regarding AI adoption, where AI is used in various areas, which are presented below.

i. Chatbots are used to interact with customers and solve service problems before involving human staff (Keller, 2018). They can respond to common customer inquiries, for example about account management or payments (Noonan, 2018). Symons (2017) explains that, combined with Robotic Process Automation, repetitive tasks and processes can be automated and executed faster, more accurately and more reliably. This results in shorter waiting times, reduced costs and greater customer satisfaction.

ii. Expert Assist provides question-answer based support in the back end by simplifying and accelerating employees' access to information. This can drive revenue, satisfy customers and reduce costs (IBM, 2018).

iii. Customer Insights discovers trends in call transcripts, product reviews, surveys, social media and other customer interaction data (IBM, 2018). AI analyses vast amounts of unstructured data and identifies behaviour patterns to improve customer profiling and targeting with tailored recommendations (Noonan, 2018).

iv. Compliance Assist supports legal and procurement professionals in analysing contracts to quickly identify specific elements like contract terms (IBM, 2018).

v. Robo-Advisors are AI-powered financial consultants that monitor and evaluate portfolios and propose investment decisions based on customer data, market and stock developments. This happens with almost no human intervention except for verification by experienced portfolio managers (Sovereign Wealth Fund Institute, 2018; Schneider, 2017).

vi. With regard to Anti-Money Laundering (AML) and fraud, AI systems identify anomalies and patterns in transactions and detect suspicious activities before they cause harm (Noonan, 2018).
Traditional rule-based anti-fraud measures lead to over 90% false alarms (Chinner, 2018; Huber et al, 2018). Rather than overwhelming analysts with alerts and forcing institutions to increase staff levels, AI can help to assess the probability of alerts and classify their risk (Mills, 2017; Culp, 2018).
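The pattern-based detection described above can be illustrated with a deliberately simplified sketch (not taken from the dissertation): learn the statistics of past transaction amounts, then flag new amounts that deviate strongly from that learned pattern. The function name, amounts and threshold are illustrative; production AML systems learn far richer patterns (counterparties, timing, networks) with ML models.

```python
from statistics import mean, stdev

def make_scorer(history, threshold=3.0):
    """Return a function that flags a new transaction as anomalous when its
    amount lies more than `threshold` standard deviations from the mean of
    the historical amounts."""
    mu, sigma = mean(history), stdev(history)
    return lambda amount: abs(amount - mu) / sigma > threshold

# Learn from an invented customer history of ordinary transfer amounts.
is_suspicious = make_scorer([120, 95, 130, 110, 105, 99, 125, 140])

print(is_suspicious(118))   # False: an ordinary amount
print(is_suspicious(5000))  # True: far outside the learned pattern
```

Unlike a fixed rule ("flag every transfer over X euros"), the threshold here adapts to each customer's own history, which is one reason statistical and ML approaches produce fewer false alarms than static rule sets.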
2.2.3 Perceived Benefits and Challenges of AI

Companies, employees, customers, governments and societies can profit from using AI but must be cautious when it falls into the wrong hands or when insufficient care is taken over unforeseen side effects (Mannino et al, 2015; Bughin et al, 2017). As acceptance highly depends on the perceived benefits and disadvantages, these are examined in the following.
2.2.3.1 Perceived Benefits of AI

Customer Value
AI provides an unprecedented customer experience and increases customer satisfaction and retention (Gartner, 2018). According to a Genpact (2017) report, 88% of senior executives expect AI to deliver improved customer experiences by 2020.

Productivity Gains
With intelligent process automation, AI enhances labour productivity (Plastino & Purdy, 2018). AI takes over time-intensive, low-value tasks and executes them more quickly, more accurately and more cheaply than humans (Mannino et al, 2015). This frees up the time of employees, who can focus on high-value work (Plastino & Purdy, 2018; Forbes Technology Council, 2018). According to a study by Accenture (2018), AI adoption will increase labour productivity in Germany by 29% by 2035.

Cost Savings
Businesses benefit from reductions in operational costs due to automated processes (Forbes Technology Council, 2018). A recent BCG analysis found that AI reduces costs by 70% (Kuepper et al, 2018).

Revenue Increase
AI helps to generate new revenue (Plastino & Purdy, 2018). According to a study by Accenture (2017), using AI in the FSS will increase the share of profit by 31% by 2035.

Digitalisation and Innovation
AI drives innovation by accelerating sales of existing products or services as well as promoting the development of innovative products, business models and offerings (Gartner, 2018). According to Mendoza (2018) and Bughin et al (2017), AI
usage is highest in industries that are already strong digital adopters, such as financial services.
2.2.3.2 Perceived Challenges of AI

Superintelligence
When machine intelligence exceeds human intelligence, AI is likely to become more powerful than humans. It could evolve to the point where humans can no longer control AI, as the systems might be able to reprogram themselves or prevent shutdowns. However, scientists are uncertain whether and when ASI will develop (Medium, 2017).

Fear of Job Loss
Economists assume that the increasing usage of AI solutions could lead to a massive rise in unemployment within the next 10–20 years (Mannino et al, 2015). Nevertheless, while existing jobs are destroyed, new jobs will arise (MIT Technology Review Insights, 2016). According to Gartner (2017b), AI will eliminate 1.8 million jobs while creating 2.3 million jobs by 2020. Nonetheless, AI will lead to a significant restructuring of the labour market (Mannino et al, 2015).

Lack of Trust and Transparency
People feel uncomfortable when they are unable to understand how decisions are made and find it difficult to trust the outcomes (Pieters, 2011; Marr, 2017; Nogrady, 2016). By embracing AI, people give away part of their autonomy to a machine, which can make them feel helpless and unskilled (Mendoza, 2018). Reeves (2018) poses the question of how AI can be trusted when it encounters situations that it has not specifically been tested for.

Explainability and Traceability
Mistrust is closely linked to the lack of explainability (Beckett, 2017). AI systems are based on large and complex models, making it difficult to explain to humans why and how a decision was made (Chui et al, 2018a). Therefore, they are often referred to as black boxes that lack explanations (Knight, 2017). This problem is exacerbated by regulatory requirements such as the General Data Protection Regulation (GDPR): companies must be able to explain the logic behind the decision-making process if personal data is used to make automated decisions about people (Meyer, 2018).
In contrast, Reeves (2018) emphasises that decisions made by humans are not always transparent either and are often based on gut feeling instead of facts.
Limited Functionality and Quality
At this point in time, machines are far from being able to develop the algorithms required to attain intelligence similar to human cognitive capabilities (Solon, 2017). Instead, today's applications are at the level of weak AI and are unreliable in situations outside their training conditions (Fraunhofer, 2018).

Data Security and Privacy
According to Cheng et al (2006) and Pavlou et al (2007), data security and privacy concerns can be a barrier to adopting technologies that manage sensitive and personal information. Especially considering the GDPR and the discussions about data usage that have evolved, users want to control what kind of data is processed, for which purposes and how long it is stored (Kobsa, 2002).

Investment Costs
According to Gadaleta (2017), AI requires considerable investments in leading-edge research, advanced algorithms, experiments and information technology (IT) infrastructure. The total project costs depend on many factors, such as project size, data volume, training efforts and skilled internal or external resources (Halsey, 2017).
2.3 Technology Acceptance

2.3.1 Technology Acceptance Model
The term technology acceptance can be defined as a group of users that is willing to use IT for the tasks this technology is designed to assist (Dillon). Acceptance can refer to the pre- or post-implementation stage (Wahdain & Ahmad, 2014). For the purpose of this dissertation, the entire acceptance process from initial awareness to the actual usage after implementation is considered. Various models and frameworks have been developed to explain user acceptance and its crucial factors, such as the Theory of Reasoned Action (TRA) (Fishbein & Ajzen, 1975), Theory of Planned Behavior (TPB) (Ajzen, 1991), Technology Acceptance Model (TAM) (Davis, 1989) or Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al, 2003). These models assume that multiple factors influence the usage behaviour of technology. The TRA and TPB explain behaviour in general and build the foundation for the TAM, which focuses on the acceptance of technology innovations. The TAM has been successfully applied in multiple empirical studies across a variety of application areas and can be extended to suit the research context (Abbasi et al, 2013; Boonsiritomachai & Pitchayadejanant, 2017). Therefore, this acceptance model builds the foundation for this research. The TAM aims to provide an explanation of the general factors influencing the acceptance of technologies (Pijpers et al, 2001). According to the initial model of 1989, user acceptance of any technology is determined by two factors: 'perceived ease of use' and 'perceived usefulness'. The latter refers to the degree to which an individual believes that using a system would enhance their job performance, while perceived ease of use relates to the degree to which an individual presumes that using a system is free of effort (Davis, 1989). Figure 2.4 outlines that both factors influence the behavioural intention towards using the technology, which in turn affects the actual usage. Despite its global use, the TAM has been criticised for its simplicity (Chang et al, 2011). It has been expanded to suit several purposes of research by merging factors from multiple theories and/or adding new factors (Wahdain & Ahmad, 2014; Godoe & Johansen, 2012).
[Figure: Perceived Usefulness and Perceived Ease of Use → Behavioural Intention to Use → Actual System Use]
Figure 2.4 Technology Acceptance Model. Source Own depiction based on Davis (1989)
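The causal chain in Figure 2.4 can be illustrated with a toy calculation. The weights, the Likert-scale inputs and the usage threshold below are purely illustrative assumptions, not estimates from Davis (1989) or from this study:

```python
# Toy sketch of the TAM causal chain (Davis, 1989): perceived usefulness (PU)
# and perceived ease of use (PEOU) jointly drive behavioural intention, which
# in turn drives actual use. All weights are illustrative assumptions only.
def behavioural_intention(pu, peou, w_pu=0.6, w_peou=0.4):
    """Combine PU and PEOU ratings (e.g. on 1-7 Likert scales) into an intention score."""
    return w_pu * pu + w_peou * peou

def likely_to_use(pu, peou, threshold=4.0):
    """Intention above a (hypothetical) threshold translates into actual use."""
    return behavioural_intention(pu, peou) >= threshold
```

In empirical TAM studies the weights are of course not fixed in advance but estimated, typically via regression or structural equation modelling on survey responses.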
2.3.2 Other AI-Related Factors
According to Zoellner et al (2008), specific factors dependent on the technology in question play a vital role for acceptance. Therefore, AI-related factors with regard to its acceptance are specified below.

Age
Kopaničová & Klepochová (2016) state that the innovativeness of consumers largely depends on their age. This is also the case for AI: the older the AI consumers, the less likely they are to be amongst the early adopters; instead, they tend to be found amongst the laggards (Kessler & Martin, 2017).

Company Size
Company size has an impact as well: 25% of larger companies with at least 100,000 employees use AI solutions, compared to only 15% of companies with fewer than 1,000 employees (Ransbotham et al, 2017). Kuepper et al (2018) explain that small businesses are less likely to be early adopters, because they tend to have smaller financial budgets and fewer IT resources. This is supported by Kroker (2018): 31% of companies with more than 500 IT employees already use AI, whereas this is only the case for 15% of companies with up to 99 IT professionals.

Skillset
Mendoza (2018) observes that businesses are cautious in embracing and adopting AI due to the required educational changes. In a BCG study, 93% of the surveyed corporations reported that they do not have sufficient competencies available (Kuepper et al, 2018). They seek strong skills in programming, data science and analytics, which are in low supply because universities react slowly to demand (Kuepper et al, 2018; Economist, 2016).

Corporate Culture
The efficient adoption of AI requires a change in the deeply rooted corporate culture, which is a system of values, beliefs, communication practices and attitudes that has evolved over time and is shared by employees (Ransbotham et al, 2017). It determines how things are done and how openly and agilely innovations and technologies are approached (Walker & Soule, 2017).

Public Perception
The public discussion about AI has increased sharply over the past years (Fast & Horvitz, 2017). There have been several successes, such as IBM's Deep Blue beating the world chess champion in 1997 (Harding & Barden, 2011), IBM's Watson triumphing over the world champions of the quiz show Jeopardy in 2011 (Markoff, 2011) or Google's AlphaGo winning against the game's world champion in 2016 (BBC, 2016).
Nevertheless, prominent influencers like Elon Musk or Stephen Hawking warn of AI's potential to destroy humanity (The Week, 2017). Meanwhile, the European Commission has started using public funds to boost research and innovation spending in order to stay competitive with the USA and Asia (Rankin, 2018). An AI partnership between companies, academics, researchers and civil society organisations was formed in 2016 to advance public understanding and to discuss ethics as well as the influence of AI on people and society (Partnership on AI, 2018).
Resource Availability
Infrastructure, software technology, budget and qualified human resources are required to successfully implement and use AI. AI-skilled employees are considered the most important resource but are difficult to find at a time of growing talent shortage, in which the supply of AI professionals does not match demand (Marr, 2018b).
2.3.3 Artificial Intelligence Acceptance Model
The AIAM is created by integrating the factors discussed above from three major areas:

I. Benefits (customer value, productivity gains, cost savings, revenue increase, digitalisation and innovation) and challenges (superintelligence, job security, trust and transparency, explainability and traceability, functionality and quality, data security and privacy, investment costs)
II. TAM factors (perceived usefulness, perceived ease of use)
III. AI-specific factors (age, company size, skillset, corporate culture, public perception, resource availability)

On the basis of these factors, the AIAM, visualised in Figure 2.5, has been developed. To make the model more applicable, the determinants are grouped into five thematic areas, representing five perspectives: organisational, individual, financial, technological and societal. The arrows show logical connections between the factors and should not be understood as proven correlations.
[Figure: the acceptance factors (Corporate Culture, Digitalisation & Innovation Level, Resource Availability, Customer Value, Company Size, Job Security, Skillset, Age, Investment Costs, Cost Savings, Revenue Increase, Productivity Gains, Explainability & Traceability, Functionality & Quality, Data Security & Privacy, Superintelligence, Trust & Transparency, Public Perception) are grouped into organisational, individual, financial, technological and societal perspectives and connected to the TAM core of Perceived Usefulness, Perceived Ease of Use, Intention to Use and Usage Behavior.]
Figure 2.5 Initial Artificial Intelligence Acceptance Model. Source Own depiction
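For illustration, the factor-to-perspective grouping of the AIAM can be written down as a small lookup structure. This is a sketch, not part of the study's tooling; the membership shown follows the grouping used in the findings (Sections 4.1.3 to 4.1.5), and the technological and societal perspectives are omitted here:

```python
# Three of the AIAM's five perspectives and their acceptance factors, as grouped
# in Sections 4.1.3-4.1.5 (an illustrative sketch; technological and societal
# perspectives omitted).
AIAM_FACTORS = {
    "organisational": ["corporate culture", "digitalisation & innovation level",
                       "resource availability", "customer value", "company size"],
    "individual": ["job security", "skillset", "age"],
    "financial": ["investment costs", "cost savings", "revenue increase"],
}

def perspective_of(factor):
    """Look up which perspective a factor belongs to; None if not listed."""
    for perspective, factors in AIAM_FACTORS.items():
        if factor in factors:
            return perspective
    return None
```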
The developed AIAM serves as the basis for the research by listing and clustering relevant AI acceptance factors, which are subsequently verified and enhanced through expert interviews. Before the empirical investigation starts, the research philosophy and design are elucidated.
3 Methodology of Research
The previous chapter provides an overview of AI and potential acceptance factors and results in the development of the AIAM. These literature findings will be enriched by qualitatively examining the perceptions of experts in order to verify and adapt the model. Before discussing those results, this chapter gives an overview of the research methodology by outlining its philosophy, the method, the interview questionnaire, the sample selection and data analysis as well as research ethics.
3.1 Research Philosophy
Business and management research is based on beliefs and assumptions which influence the methodological choice, research strategy, data collection technique and analysis procedure (Crotty, 1998). According to Saunders et al (2016), three types of research assumptions create a frame of reference and underpin the selection of the research philosophy:

1. Ontological assumptions describe individual perceptions of reality.
2. Epistemological assumptions deal with individual knowledge as well as its creation and dissemination based on justification and truth (Steup, 2005).
3. Axiological assumptions refer to the role of the values and ethics of the researcher and the interviewees.
Electronic Supplementary Material The online version of this chapter (https:// doi.org/10.1007/978-3-658-30794-3_3) contains supplementary material, which is available to authorized users. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2020 D. Bengel, Organizational Acceptance of Artificial Intelligence, https://doi.org/10.1007/978-3-658-30794-3_3
Niglas (2010) explains that the research assumptions and philosophy depend on the degree of objectivity and subjectivity. As this research's goal is to verify and adapt the AIAM based on the interpretations of interviewees, the evaluation of subjective opinions is important. Therefore, an interpretivist paradigm is adopted for conducting the interviews. Interpretivists believe that multiple meanings, interpretations and realities exist, as reality is a personal and social construct (Saunders et al, 2016; Gergen, 1994). The theory development incorporates inductive and deductive elements: the AI acceptance of financial services firms is inductively evaluated based on the views of experts, and it is deductively tested whether the identified AI acceptance factors apply. Hence, an abductive approach is pursued: known assumptions are used to generate verifiable findings, and generalisations are drawn from the interview interactions (Saunders et al, 2016) to explore the research area by modifying existing theory within a conceptual framework.
3.2 Research Design

3.2.1 Method
According to the interpretivist paradigm, the most suitable research method is of a qualitative nature (Lincoln et al, 2011). As interpretivists assume that reality is not embodied by one common truth but by a complex social construct of different attitudes and views, it is recommended to focus on individual perceptions of technology (Saunders et al, 2016). Over 90% of technology acceptance studies are based on quantitative methods, which aim to verify correlations between variables with statistical techniques (Lee & Baskerville, 2012). This approach is not suitable for this research, as the complex relationships between humans and technology cannot be represented by numbers alone (Vogelsang et al, 2013; Wu, 2012). This is why Balaji & Roy (2017) suggest using qualitative methods to verify known factors and explore further determinants. According to King & Horrocks (2010), interviews are one of the most commonly used qualitative techniques in organisational research. The aim is to explore interviewees' views, experiences and understanding of the research topic. According to Alshenqeeti (2014), the advantages of interviews lie in their flexibility and familiarity, leading to a high return rate. On the other hand, interviews are time-consuming, small in scale and prone to bias. Hence, the interviews are conducted starting at an early stage, a sufficient sample of 17 interviewees is used, and the risk of cognitive biases is mitigated by using transcriptions and coding.
Depending on the level of formality and standardisation, interviews can be categorised as structured, semi-structured or unstructured (Saunders et al, 2016). This research's interviews are of a semi-structured nature and are based on an interviewer-administered questionnaire (see Section 3.2.2). The usage of the questions may vary from interview to interview depending on the conversation flow, with the option to include additional questions in order to examine points more precisely. The interviews are undertaken with experts who have exclusive knowledge and privileged access to information due to their job responsibility (Meuser & Nagel, 1991; Pfadenhauer, 2009). The interviews last between 30 and 60 minutes and are conducted on a one-to-one basis via telephone. Saunders et al (2016) point out that, compared to face-to-face interviews, it is more difficult to establish the rapport and trust required for reliable responses. However, for the purpose of this dissertation, this approach is reasonable, as it saves time and travel expenses and enables access to geographically distant interview partners. The interviews are conducted in German, as people feel more familiar with their mother tongue and are able to express themselves better. Concerns about the responsible and ethical use of data are addressed with a consent form, which is signed by each participant prior to the interview (see appendix A). The interviews are audio recorded. Saunders et al (2016) emphasise that recordings may negatively influence the interviewees by causing distraction, which reduces reliability. On the other hand, recording allows the interviewer to concentrate on questioning and listening, serves as evidence and builds the foundation for data analysis (Saunders et al, 2016). An introductory question helps to make the interviewees feel more comfortable and reduces their concern about the recording by starting the conversation with an area they are knowledgeable about.
Based on the recordings, manual transcriptions of each interview are produced, which serve as the basis for data analysis. They are adjusted to ensure correct syntax while omitting filler words and parts that are not related to the research topic.
3.2.2 Interview Protocol
As outlined in Figure 3.1, the interview questionnaire consists of six main parts linked to the topics covered in the literature review, with the focus placed on the acceptance factors of the AIAM. It starts with a simple question about the interviewee's current job role and the impact of AI on it. Afterwards, the interviewees are asked about their understanding of AI to identify potential misconceptions. The third thematic block deals with the current adoption of AI in the organisation and explores
potential reasons for non-use. Subsequently, the benefits and challenges seen in AI are explored, as those are a major driver of acceptance. The fifth, main part aims to discuss the acceptance factors of the AIAM from an organisational, individual, financial, technological and societal perspective. Factors that have not been mentioned spontaneously by the interviewees are addressed specifically. The interview concludes by asking for additional points in order to make the interviewees reflect on the interview and reveal new acceptance factors that have not been covered yet.

[Figure: interview protocol with six parts and their guiding questions]
– Conversation starter: What is your current job role and in which way is it impacted by AI?
– Understanding of AI: What is your understanding of AI?
– Current usage of AI: Is AI already being used in your company? If so, how? If not, what could be the reasons?
– Benefits and challenges of AI: Which benefits do you see in AI? Which challenges do you see in AI?
– Organisational acceptance factors of AI: From an organisational perspective, which factors might influence acceptance of AI? From an individual perspective, which factors might influence organisational acceptance of AI? From a financial perspective, which factors might influence organisational acceptance of AI? From a technological perspective, which factors might influence organisational acceptance of AI? From a societal perspective, which factors might influence organisational acceptance of AI?
– Conversation end: Do you see other aspects regarding AI acceptance that we have not addressed yet?

Figure 3.1 Interview Protocol. Source Own depiction
This questionnaire is distributed via e-mail several days before the interview to give the interviewees the possibility to prepare. Appendix B shows a more detailed questionnaire, which contains the questions related to each acceptance factor as well as introductory remarks about data privacy and concluding words. Neither the developed AIAM nor the detailed questionnaire is sent, as seeing the acceptance factors in advance might limit the interviewees' thoughts, intimidate them due to the model's complexity and influence their response behaviour. Interview pretesting is regarded as an effective technique for improving validity in qualitative data collection and interpretation (Brown et al, 2008). Hence, the above outlined questionnaire has been pretested in a role play with an IBM colleague who is not part of the actual research but fits the defined sample (see Section 3.2.3), in order to identify potential misunderstandings or biases.
3.2.3 Sample Selection
Hennink et al (2011) outline that qualitative research is associated with smaller sample sizes than quantitative investigations. However, the more interviews are conducted, the more reliable the results. The first selection criterion is that the interviewees must either work in the German FSS or have knowledge about the industry. Secondly, they need to be experts in the field of innovation, digitalisation and AI by means of their job responsibility. In addition, research-practical aspects such as accessibility, time availability and willingness to participate influence the selection. Access to the interview partners is gained through the researcher's personal contacts, the network of colleagues and social media research via Xing and LinkedIn. As a result of these considerations, a comparatively high number of 17 experts is interviewed, of whom seven work in the banking industry, six in insurance companies, three in the IT sector and one is industry independent. The interviewees work at the following financial services companies:

– AXA-Konzern AG
– Deutsche Bank AG
– Gothaer Versicherungsverein auf Gegenseitigkeit
– Hallesche Krankenversicherung auf Gegenseitigkeit
– Hanseatic Bank GmbH & Co. KG
– IBM Deutschland GmbH
– Inter Versicherungsgruppe VVaG
– M.M.Warburg & CO KGaA
– Rothschild & Co
– R+V Versicherung AG
– Talanx AG
– Vereinigte Volksbank eG
– Versicherungskammer Bayern
– Volkswagen Financial Services AG
The job roles of the participants include:

– Chief Digital Officer
– Consultant Digital Transformation
– Data Scientist
– Head of Digital Investments
– Innovation Engineer
– Innovation Manager
– Vice President Digital Business
An anonymised overview of each interviewee is given in appendix C.
3.2.4 Data Analysis
Qualitative data is characterised by large volumes and a complex nature, which emphasises the importance of choosing the right analysis method (Saunders et al, 2016). More than 14 hours of interview recordings result in 199 pages, or more precisely 66,324 words, of interview transcriptions. These serve as the basis for data analysis and are coded to reduce cognitive biases and to stay objective. The chosen method for data analysis is template analysis, a form of thematic analysis. According to Saunders et al (2016), its purpose is to identify themes and patterns related to the research question by coding the interview transcriptions prior to interpretation. An initial coding template has been developed based on the literature research (see appendix D), which is tested and modified as new data reveals further codes. This leads to a final coding template once all interviews are conducted, which will be outlined in Chapter 4. A sample of one coded transcription is attached in appendix E. Saunders et al (2016) see disadvantages in the high expenditure of time and in the possible removal of text fragments from their context. A software-based approach is taken to accelerate the analysis process, structure the vast amount of information and evaluate it transparently and context-sensitively. All interviews referred to in Chapter 4 were conducted in 2018. The quotations are the author's translations from the original German transcripts into English. Unless otherwise specified, the statements are supported by several interviewees.
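The mechanics of template analysis can be illustrated with a minimal sketch: code labels assigned to transcript passages are tallied per code and per theme. The passages and labels below are invented for illustration; the study's actual coding templates are documented in appendices D and E.

```python
from collections import Counter

# Each coded passage: (interviewee, theme, code). Entries are invented
# examples, not data from the actual transcripts.
coded_passages = [
    ("interviewee_1", "organisational", "corporate culture"),
    ("interviewee_1", "financial", "cost savings"),
    ("interviewee_2", "organisational", "corporate culture"),
    ("interviewee_2", "individual", "job security"),
    ("interviewee_3", "organisational", "company size"),
]

# Tally how often each code and each theme occurs across all transcripts.
code_frequency = Counter(code for _, _, code in coded_passages)
theme_frequency = Counter(theme for _, theme, _ in coded_passages)

print(code_frequency.most_common(1))  # most frequently assigned code
```

In practice, template analysis is iterative: codes are added, merged or regrouped as transcripts are re-read, which is what the qualitative analysis software supports.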
3.3 Research Ethics
From a content point of view, AI is an ethically sensitive topic which requires caution, for example when talking about the potential replacement of human labour or data privacy concerns. From a research point of view, a great deal of trust is placed in the researcher's integrity, as the research deals with human participants and personal data. Saunders et al (2016) highlight that ethical issues should be considered throughout the entire research process of data collection, analysis and discussion. Alshenqeeti (2014) emphasises the importance of protecting participants' rights and avoiding harm. Furthermore, researchers must treat the collected data as confidential and anonymous, and store it only as long as necessary. In addition, interviewees must be informed about their right to withdraw at any stage (Alshenqeeti, 2014). These aspects are covered in the aforementioned consent form. To conclude, this chapter elucidates the research methodology, which is based on an interpretivist paradigm and an abductive approach. Semi-structured expert interviews are carried out, bearing in mind research ethics, to inductively evaluate the views of experts and deductively verify the identified AI acceptance factors. This provides the basis for the presentation of the interview findings in the next chapter.
4 Research Findings
On the basis of the chosen research methodology, this chapter presents and discusses the outcomes of the empirical investigation while comparing them with the literature findings of Chapter 2. The presentation of results follows the interview structure with the five developed themes. Within each theme, additional codes emerged from the interviews, which are elaborated at the end of each theme's section. Ultimately, 29 codes are allocated to the five themes.
Electronic Supplementary Material The online version of this chapter (https://doi.org/10.1007/978-3-658-30794-3_4) contains supplementary material, which is available to authorized users.

4.1 Analysis and Discussion of Interview Findings

4.1.1 Understanding of AI and Application Areas
The comprehension of AI corresponds to the points raised in Section 2.2.1. Difficulties are seen in understanding the difference between statistical data analytics and AI. According to Reavie (2018), AI is an extension of predictive analytics in that it is able to make assumptions, test and learn autonomously. Whereas some of the interviewed companies are still in the planning stage, the majority adopts AI in various use cases. The most prominent examples are chatbots, semantic text analysis and AML. In semantic text analysis and routing, AI assists with screening payment transactions, complaints, invoices or mails to identify concerns and emotions and to route the documents to the person in charge. In the field of damage processing and prediction, AI helps to estimate the amount of damage by analysing images and considering external data to improve risk assessments or to determine dynamic pricing. In marketing, AI automates trigger-based customer communication (interviewee 12). Interviewee 7 notes that AI is utilised in the field of aptitude diagnostics to identify development measures for employees.
4.1.2 Perceived Benefits and Drawbacks
According to the interviewees, AI accelerates, digitalises and automates processes and makes them more customer-friendly. Overall, speed can be increased, as the preliminary work of collecting and analysing data is done by AI and the employees can respond faster to customer inquiries. The core strength is seen in processing large amounts of data, which cannot be analysed by humans alone. In this way, new insights are revealed that humans would never have thought of. Apart from that, AI systems make data-driven and rational decisions and avoid human errors. Given the same data basis, AI always delivers the same results regardless of day or time, making it more reliable and accurate than humans. Furthermore, AI systems are available 24/7. Interviewee 6 mentions that AI-friendly companies tend to be more attractive to job applicants who enjoy trying out new things and keeping up with the times. Ideally, AI engagements lead to improved business models and a competitive advantage. Nonetheless, one criticism is that the technology is not yet established in the market and experience with it is scarce, leading to unexpected problems with rising implementation time and cost. Companies do not want to be the early adopters whose mistakes help laggards learn. They would rather wait until the technology and the body of experience are more mature. While AI systems promise to be unbiased due to their logical, data-driven approach free of irrational behaviour, they become biased nonetheless, as they are trained and programmed by humans who have their own values, attitudes and moral concepts. Moreover, to identify the right use case, a clear picture of what is to be improved and how AI can help is needed. Interviewee 14 raises the point that AI projects usually cannot be started on a greenfield basis as a stand-alone process but require a digital, service-enabled process landscape. Lastly, interviewees 2 and 10 perceive the acceptance of AI itself as a barrier.
Hence, the following sections elaborate which factors play a role for AI acceptance from an organisational, individual, financial, technological and societal perspective.
4.1.3 Organisational Factors

Corporate Culture and Communication
Literature and interviewees consider embracing new technologies as part of the company culture to be an essential factor driving organisational AI acceptance. Interviewee 11 notes that company culture is difficult and time-consuming to change. Interviewee 7 assumes that influencing culture and change in smaller organisations is less challenging due to fewer employees and a flatter structure. The majority of interviewees recommend transparent and early communication about AI intentions, together with their implications and consequences, to ensure employees' engagement and involvement.

Digitalisation and Innovation Level
All interviewees agree that the more digitalised a company is, the easier and quicker it is to integrate AI. The reason is that digitalised companies already have their data available in digital form, which is a precondition for using AI. Interviewees 15 and 17 explain that digitalisation is a step-by-step process, making it difficult to start with AI. According to interviewee 6, the insurance industry is more digitalised than the banking sector, as the latter still has a large proportion of contracts in paper form.

Resource Availability
All interview partners confirm that a sufficient resource base is key for engaging with AI. Qualified human resources in particular are difficult to find. According to interviewees 1 and 7, technological resources are of secondary importance. However, if AI is something the company wants to pursue, it must be anchored in its corporate strategy. Consequently, resources are a question of prioritisation. Several interviewees expect that larger corporations can allocate resources more easily.

Customer Value
Literature and all interviewees see customer value and experience as a driving force. Accordingly, customers profit from new interaction channels, 24/7 availability, higher convenience and better customer service.
However, all interviewees agree that AI will only be accepted if it is easy to use and delivers good outcomes. This is not always the case for chatbots, which results in anger and frustration.
Company Size
Opinions on whether and how company size plays a role for AI acceptance differ among the interviewees. The majority agrees with the literature that larger companies tend to have more financial strength, human resources, available data and potential for automation than smaller firms. At the same time, some argue that larger enterprises have complex hierarchical structures, longer decision-making processes and legacy IT infrastructure. The benefits of smaller companies are seen in a different company culture and mindset as well as shorter lines of communication, leading to higher agility, innovativeness and adventurousness. Nonetheless, three interviewees state that company size plays no role.

Strategy and Leadership
A new factor mentioned by many interviewees is the impact of the corporate strategy, as pursued by the leadership, on AI acceptance. Interviewee 9 suggests driving AI strategies top-down but also bottom-up in order to involve employees. Interviewee 3 states that an open-minded and IT-savvy leadership promotes the progression of AI technologies. Interviewees 5 and 6 establish a connection to the age of leaders, who are often not digital natives and are less open-minded towards new developments. Several interviewees see a linkage with company size, assuming a more conservative and traditional leadership in smaller companies, which makes AI introduction more difficult.

Experiment and Explore
Another new acceptance factor highlighted by almost all interviewees is the possibility to experiment with and explore the technology in so-called proofs of concept within a short time frame, with modest investments, risks and workload. On the premises of "fail fast and learn quickly" (interviewee 1) and "trial and error" (interviewee 14), companies should come to understand AI with its possibilities and constraints, gain experience, familiarise themselves with its usage, build trust and produce visible results.
Such projects can be undertaken either by the company itself or with external help. The latter often leads to cooperative partnerships, which have been positively highlighted by interviewees 2 and 12.
4.1.4 Individual Factors

Job Security
Similar to the literature, interviewees regard the fear of job loss as the most important challenge for individual acceptance. Interviewee 15 remarks that lots of simple tasks can be taken over by AI. On the other hand, it is emphasised several times that the objective is not to replace employees but to give them more intelligence and a second opinion. To eliminate existential fears, it is suggested to highlight the narrow intelligence of AI, which makes it unable to replace entire job roles. Interviewee 5 explains: "The challenge is to accomplish that AI and humans work hand in hand together. It is not about having one or the other." Apart from that, interviewee 15 mentions that younger employees have a broader range of skills, because they change jobs more frequently, in contrast to older employees who are experts in a specific job they have been doing for years, making the latter more vulnerable to being replaced by AI.

Skillset
In addition to the literature, interviewees distinguish between professionals who develop and train AI, and employees who are end users. Interviewee 9 notes that professionals can acquire knowledge about the freely available AI methods and use a standard notebook to process moderate data volumes. Interviewees explain that employees accept AI when it is neither too complex nor too technical; in short, easy to use and convenient.

Age
The interviewees confirm the tendency stated in the literature but do not consider it relevant. They rather emphasise exceptions in both directions and explain the growing familiarisation of older people as resulting from the increasing ease of use and from one's flexibility, agility and openness to try out new things.

Job Enrichment
A new aspect mentioned by several interviewees is that AI augments and supports employees. They are relieved of simple, monotonous, repetitive and tiresome routine tasks and instead empowered to do conceptual, value-creating and customer-focused work. As a result, humans are promoted to the role of an expert (interviewees 1 and 2).

Personality and Mindset
An overarching factor introduced by multiple interviewees is personality and mindset.
Interviewee 16 states that “there are people who see risks in change and there are people who see opportunities in change.” The interviewees describe employees either as agile, innovative, future-oriented and curious to try out technological advancements, or as striving for stability in their work practices and rejecting new technologies like AI. Consequently, the willingness to learn and
openness to change are crucial personality traits that favour AI usage (interviewees 7 and 12). Furthermore, various interviewees mention that points of contact in private life, for example using Alexa or Siri, help make people familiar with the technology, build trust and reduce anxiety.
4.1.5 Financial Factors
As with any other technology, interviewees point out that building a business case involves the expected benefits or returns and the costs or investments of a project. Interviewee 12 notes that the financial perspective is a hygiene factor and a side effect: if AI is introduced as a cost rather than as a business case with added value, it will not drive employee or customer acceptance. Interviewee 6 explains that companies often start with AI where the business case is so obvious and positive that a certain investment is no problem, which is why chatbots are so attractive. Interviewees 6 and 9 see AI investments as a chance to generate a quick return on investment, ideally in less than a year.
Investment Costs
Some interviewees see investment costs as a showstopper, whereas others clarify that the costs are not as high as commonly thought. Interviewees 5 and 14 explain that the better the quality of the AI system, the higher the costs. Since data and infrastructure are often available, skilled human resources and technology are perceived as the main cost drivers. It is noted that, depending on the provider, the technology costs only a fraction when used at small scale. Interviewees 8 and 17 state that larger corporations have more budget available than smaller ones.
Cost Savings
Literature and interviewees coincide that cost savings are a major factor. Interviewee 11 refers to various use cases in which costs could be reduced by double-digit percentages. A link to company size is made, as according to interviewee 3, larger corporations have higher cost-saving potential.
Revenue Increase
Interviewees largely agree with the literature findings, but interviewees 6 and 17 consider cost reductions more important than revenue increases. They note that, depending on the application area, such as chatbots or investment advisory, products can be marketed and sold via new channels, which enables cross- and up-selling potential.
Interviewee 10 elaborates that, due to increased productivity, sales representatives can use the freed-up time to generate additional revenue. Interviewee 11 refers to various companies that achieved revenue growth of almost 20%.
Productivity and Efficiency Gains
The interview partners also associate increased efficiency with productivity gains. Interviewee 3 elucidates that employees can do the same work in less time. They can either use the additional time to get more work done or need only a fraction of the resources for the same workload.
Specific Key Performance Indicators (KPIs)
Interviewees 9 and 12 do not recommend focusing exclusively on costs or revenue, because various KPIs can measure the success of AI. Besides increased customer satisfaction and loyalty, the increasing number of processed invoices or the decreasing number of complaints is named. According to interviewee 15, it is problematic that results are not always measurable, or not within a reasonable time.
4.1.6 Technological Factors
Explainability and Traceability
Interview partners fully confirm the literature findings: employees must understand the technology and the way the algorithms analyse the data and reach an outcome. Interviewee 9 summarises: "We know they work, but we do not know why they work. How can I trust AI processes, if I can no longer mathematically describe them?" On the other hand, interviewees 4 and 13 take the standpoint that decisions of AI systems are more transparent and fact-based than those of humans.
Functionality and Quality
Interviewees agree with the literature on the limited functionalities of AI and highlight that it is trusted and accepted when it works well and delivers high-quality output. Ideally, it is so good that employees or customers do not even notice when they are interacting with AI. The functionalities of chatbots in particular are considered rudimentary and script-based, resulting in questions not being understood and in transfers to human agents. Interviewee 15 reports that the limited capabilities can even lead employees to believe that the system does not perform better than they do. In addition, it is criticised that AI currently lacks emotions and humanity, as clients often prefer the opinion and feelings of their banking advisor about an investment over optimised data-driven system
responses. In contrast, from the provider perspective of interviewee 6, the technology is regarded as mature enough to build various use cases with little risk. However, to achieve satisfying results, interviewees 8 and 9 recommend setting a narrow focus and having realistic expectations of the system's abilities.
Data Availability and Quality
Additionally, numerous interviewees brought up that AI systems are only as good as the data they learn from, as there is no original intelligence. Interviewee 1 postulates that "data is the new oil". Apart from data quantity, data quality with regard to completeness, origin and impartiality is of high importance. Interviewees 10 and 11 mention the challenge of data being organised vertically in silos, requiring effort and time to consolidate and structure it to make it accessible for AI.
Training Effort
Several interviewees mention that training efforts are higher than expected in order to reach even the quality required for acceptance. As humans train the system, it comes into play that "AI is only as intelligent as the person who programs it. The person who programs it, is only as good as his understanding of the problem" (interviewee 9). An additional challenge perceived by interviewees 3 and 6 is that those employees whose jobs are at risk should train the system with their knowledge. On the other hand, interviewee 7 comments that employees perceive their work as valuable and build an emotional bond to the system when they are involved in the training and learning process. As learning from feedback is a key feature, interviewees 14 and 16 recommend involving the customer early and releasing prototypes fast, while ensuring quality high enough that the company's reputation is not damaged.
Language
Interviewee 1 noted that most providers do not offer AI services in German.
Traditional German small and medium-sized companies in particular use German dialects, which places further demands on the natural language processing capabilities of AI.
4.1.7 Societal Factors
Data Security and Privacy
Interviewees agree with the literature and view data security and privacy as a concern. Interviewee 11 states: "On the one hand we need liberality, on the other hand we
need necessary protection. But the stronger the protection, the less convenient for the customer." Many interviewees realise that AI services require the use of cloud services. Due to the GDPR, data may not leave the European Union, and consequently servers need to be physically located in Europe. Interviewee 15 shares the positive view that these regulations build the foundation for AI adoption, as they require having a good data basis and informing customers transparently about the use of their data. Besides, interviewees 4 and 7 state that regulations differ between countries, which influences AI adoption.
Superintelligence
Literature and interviewees agree that the development of a superintelligence is decades away and do not perceive it as a severe risk. However, several interviewees point to the risk of total surveillance as well as of the technology being used by state institutions or for military purposes. In the end, "technology is a tool and it is up to us to decide if we use it for good or bad" (interviewee 2). When it is used for the wrong purposes, the interviewees demand social condemnation. To mitigate the potential danger, interviewee 13 proposes to consider these scenarios when programming AI. Interviewee 4 suggests an off-button that cannot be outwitted by AI.
Trust and Transparency
Interviewees agree with the literature that trust plays a major role in AI acceptance and is regarded by interviewee 7 as a prerequisite for action. It is an overarching factor that touches several of the aforementioned determinants, as it "requires knowledge, understanding, transparency, credibility and quality" (interviewee 8). Numerous interviewees elucidate that trust is established by making transparent how the technology works, how decisions are made, how data is processed and how AI is used within the organisation.
According to interviewee 12, it is often not the technology that is doubted, but the intention of what will be done with it, which requires its responsible and ethical use. A question raised by interviewee 9 is whom to trust if the decisions of humans and AI differ. One must be aware that arguments are not accepted when it comes to trust (interviewee 15), which is why visible and measurable success cannot dispel underlying mistrust (interviewee 12).
Public Perception
The interviewees reinforce the literature findings and note that a lot of educational work must be done at the public level. According to interviewees 5 and 15, the perception depends on the application area: whereas AI is perceived rather negatively when it replaces human interactions, it is perceived positively when it supports internal processes. Interviewees 8 and 12 assume that many organisations make AI efforts without the knowledge of the public because of the unpredictable reaction of society. Furthermore, interviewee 14 criticises that the press and providers do not always outline realistically what AI systems are able to perform. While acceptance is strengthened when AI contributes to improving societal problems such as climate protection and energy saving, it is negatively reinforced by job cuts or military use (interviewees 8 and 15). Interviewee 4 comments: "The scepticism is not necessary but the euphoria is not worth it either." Interviewees 2 and 11 encourage everybody involved in AI to make their opinions heard and share their experiences.
Understanding and Education
All interviewees emphasise that employees often have a limited understanding of AI, which causes anxiety and resistance. They do not know what benefits, limitations and implications it has for them. Therefore, a credible picture needs to be communicated to establish trust and acceptance. Furthermore, interviewee 11 remarks that the education system must be reformed to correspond to labour developments and market demands. Interviewee 2 proposes introducing short courses of study in niche areas with micro-degrees.
Employment and Income Distribution
The already discussed fear of job loss does not only affect individuals but also society, which has the responsibility to provide jobs (interviewee 14). Interviewees 7 and 8 associate the risk with economic income distribution and unconditional basic income. Several interviewees refer to the industrial revolution, where it was managed to create new jobs and maintain the employment rate. Interviewee 12 raises the question of whether it is different this time.
Interviewee 14 underlines that this problem is not caused exclusively by AI but by digitalisation in general, through the replacement of manual processes.
Country and Culture
Interviewees 2, 3 and 17 see differences in the willingness to use AI across borders. Interviewee 5 states that companies in emerging countries like China tend to be enthusiastic about AI, whereas industrialised nations like Germany take a more traditional view. According to interviewee 2, the German mindset is characterised by being afraid, overcautious and risk-averse towards the new, making the country fall behind others. Interviewee 2 finds the reason in the affluent society with its high level of prosperity, leading to a nothing-to-lose mentality and less pressure to come up with innovative ideas.
Discrimination and Manipulation
A big risk is seen in the self-learning nature of AI, as input data might be manipulated or customer use could lead to wrong results. As interviewee 8 states: "How can we assure that learning is always going in the right direction?" This risk must be taken seriously so that the company is not publicly perceived as discriminating, which would harm its reputation. Interviewees 1 and 2 suggest that businesses create a code of ethics, while society introduces mandatory ethical regulations.
Expectation Setting
Interviewees additionally point out that high expectations are placed on AI. Interviewee 14 elucidates that banks often measure AI systems against the capabilities of human banking advisors. However, interviewee 6 perceives that expectations have already come closer to reality compared with two years ago. Interviewees 4 and 16 suggest open and transparent communication to clarify expectations. In the end, it all comes down to the benefits perceived by the employees, which in essence correspond to the two dimensions of Davis: perceived usefulness and perceived ease of use.
4.2 Adaption of the Artificial Intelligence Acceptance Model
The acceptance factors of the developed AIAM have been discussed and verified through expert interviews. In the process, existing factors are extended or excluded and new determinants are added. At the organisational perspective, 'Strategy & Leadership' and 'Experiment & Explore' are included and 'Corporate Culture' is enhanced by 'Communication'. 'Job Enrichment' and 'Personality & Mindset' are included at the individual perspective, while 'Age' is removed. Further, at the financial perspective, 'Efficiency Gains' are integrated into 'Productivity Gains' and the factor 'Specific KPIs' is added. 'Data Availability & Quality', 'Training Effort' and 'Language' are inserted at the technological perspective. Lastly, at the societal perspective, 'Understanding & Education', 'Employment & Income Distribution', 'Discrimination & Manipulation', 'Country & Culture' and 'Expectation Setting' are added, while 'Superintelligence' is removed. As a result, the final AIAM emerged; it is shown in Figure 4.1.
[Figure 4.1 places the TAM core of Perceived Usefulness and Perceived Ease of Use, leading to Intention to Use and Usage Behavior, at the centre, surrounded by the acceptance factors of five perspectives:
Organisational Perspective: Corporate Culture & Communication; Digitalisation & Innovation Level; Resource Availability; Customer Value; Company Size; Strategy & Leadership; Experiment & Explore
Individual Perspective: Job Security; Skillset; Job Enrichment; Personality & Mindset
Financial Perspective: Investment Costs; Cost Savings; Revenue Increase; Productivity & Efficiency Gains; Specific KPIs
Technological Perspective: Explainability & Traceability; Functionality & Quality; Data Availability & Quality; Training Effort; Language
Societal Perspective: Data Security & Privacy; Trust & Transparency; Public Perception; Understanding & Education; Employment & Income Distribution; Country & Culture; Discrimination & Manipulation; Expectation Setting]
Figure 4.1 Final Artificial Intelligence Acceptance Model. Source: own depiction
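For readers who prefer a compact view of the model, the factor groups of Figure 4.1 can be written down as a plain data structure. This is only an illustrative sketch, not part of the thesis; the variable names (`AIAM`, `TAM_CORE`) are chosen here for convenience, and the factor labels are taken verbatim from the figure.

```python
# Sketch of the final AIAM (Figure 4.1): the 29 acceptance factors grouped
# by the five perspectives, surrounding the TAM core of Davis (1989).
AIAM = {
    "Organisational": [
        "Corporate Culture & Communication", "Digitalisation & Innovation Level",
        "Resource Availability", "Customer Value", "Company Size",
        "Strategy & Leadership", "Experiment & Explore",
    ],
    "Individual": [
        "Job Security", "Skillset", "Job Enrichment", "Personality & Mindset",
    ],
    "Financial": [
        "Investment Costs", "Cost Savings", "Revenue Increase",
        "Productivity & Efficiency Gains", "Specific KPIs",
    ],
    "Technological": [
        "Explainability & Traceability", "Functionality & Quality",
        "Data Availability & Quality", "Training Effort", "Language",
    ],
    "Societal": [
        "Data Security & Privacy", "Trust & Transparency", "Public Perception",
        "Understanding & Education", "Employment & Income Distribution",
        "Country & Culture", "Discrimination & Manipulation", "Expectation Setting",
    ],
}

# The TAM core that mediates the factors' influence on usage.
TAM_CORE = ["Perceived Usefulness", "Perceived Ease of Use",
            "Intention to Use", "Usage Behavior"]

total_factors = sum(len(factors) for factors in AIAM.values())
print(total_factors)  # 29, matching the count reported in the text
```

Summing the factors per perspective (7 + 4 + 5 + 5 + 8) reproduces the 29 acceptance factors stated in the text.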
In the analysis of the interviews, a total of 76 codes is generated, of which 29 refer to the AIAM. They are supported by 884 excerpts and 1820 code applications. Appendix F shows the final coding template of the AIAM in more detail; the number of excerpts per interview partner is outlined in Appendix G. The high number of interviewee references serves as a guide and helps to eliminate cognitive bias. It is a further indication of the importance of each acceptance factor, but in the end the decision is not purely number-based. After discussing the interview findings and revising the developed AIAM, the final chapter draws conclusions about the research question and outlines limitations and opportunities for further research.
5 Conclusion
5.1 Reflection and Outlook
Similar to the Internet in the early 1990s, AI marks a paradigm shift for mankind which can no longer be ignored. AI is expected to create an estimated annual value of between $3.5 trillion and $5.8 trillion and is predicted to grow more than twice as fast as other strongly growing IT trends, being currently about a third the size of big data and a tenth of cloud computing (Bloomberg Intelligence, 2017; Chui et al., 2018b). To fulfil these great promises, relevant stakeholders must be prepared, individually, organisationally, financially, technologically and socially, to accept this technology (Schoeman, 2018). The primary objective of this dissertation is therefore to gain a deeper understanding of the determinants influencing acceptance of AI in the German FSS from an organisational point of view. On the basis of a literature review and the prominent TAM, a conceptual framework dedicated to AI acceptance is devised and then revised after analysing the insights of experts. The research question of this dissertation is: What factors influence the organisational acceptance of artificial intelligence in the German financial services sector? The investigation shows that the acceptance of AI is influenced by a large number of factors, which are interrelated and mutually influence each other. In essence, acceptance is driven by perceived ease of use and perceived usefulness, surrounded by a group of five broad perspectives. In sum, 29 acceptance factors are identified, whereby the number is not definitive and can be extended or aggregated depending on the level of detail and purpose. It was found that understanding employees' concerns and perceptions regarding AI is crucial for their acceptance. This depends highly on the personality and mindset of each employee, who should ideally embrace innovations, be willing
to change and be open towards new technologies. With transparent and trustworthy communication about the impact of AI, anxieties, for example about job loss, can be reduced, while trust and perceived benefits can be increased. In doing so, how well the system works and how decisions are made must be explained in order to manage expectations. Besides, employees should be given the opportunity to experiment and familiarise themselves with the technology. The company must provide the foundation with an aligned corporate and digital strategy, culture, leadership and the necessary resources and education. Financial considerations come into play in a second step, primarily in the form of cost savings and revenue increases. A profound understanding and knowledge must be imparted not only at an individual but also at a societal level to ensure unbiased media coverage.
Qualitative research approaches towards technology acceptance are underrepresented in the literature. Especially in the context of AI and the FSS, no academic research on acceptance factors could be found. Consequently, the presented acceptance factors enhance the extant body of research on technology acceptance models and their application to AI. Besides, the findings of the conceptual framework can help banks and insurance companies to gain an understanding of what influences employees' AI acceptance while revealing factors that impede it. This knowledge helps to measure organisational readiness to implement AI and to determine respective actions to achieve higher AI adoption.
5.2 Limitations and Opportunities for Future Research
There are several limitations, which shape future research. First, the dissertation is limited in terms of time and scope: it is a short piece of work that investigates a broad research question, which does not allow going into great detail on each acceptance factor. It is regarded as a first study that provides an overview of potential acceptance factors in the given context. Second, the research is undertaken in a specific organisational and country setting. Consequently, the results relate to the German FSS and should be generalised only carefully to other countries and industries with a different cultural, societal, ethical, organisational and legal environment. Third, a sample of 17 experts is interviewed to verify the developed AIAM. Keeping in mind that qualitative investigations use smaller samples than quantitative research, the data basis is regarded as sufficient to answer the research question. Fourth, the study offers a snapshot of a single moment in time. It gives a picture of the attitudes and opinions of the experts about AI acceptance
factors and does not show changes over time. Fifth, the developed AIAM does not consider rankings or priorities of the factors.
The above-mentioned limitations can be addressed in future research to work out whether the findings are generalisable and replicable in other settings. Since AI is a rapidly evolving field, it is difficult to draw conclusions that last over time. Therefore, the research needs to be repeated regularly in the same context within a longitudinal study. Besides increasing validity, this helps to detect developments and differences in order to study cause and effect. In addition, the qualitative model could be verified and tested in subsequent quantitative research, whereby the identified variables could be ranked and their correlations, currently illustrated in the form of arrows, proven. A third possibility is to initiate a broader piece of qualitative research across a range of sectors. Apart from that, practical and strategic recommendations for action can be developed to demonstrate how organisations can accelerate AI acceptance.
List of References
Abbasi, M.S. et al. 2013. Theories and Models of Technology Acceptance Behaviour: A Critical Review of Literature. Sindh University Research Journal. 45(1), pp. 163–170.
Accenture. 2017. Accenture Report: Artificial Intelligence Has Potential to Increase Corporate Profitability in 16 Industries by an Average of 38 Percent by 2035. [Press release]. [Accessed 4 June 2018]. Available from: https://newsroom.accenture.com/news/accenture-report-artificial-intelligence-has-potential-to-increase-corporate-profitability-in-16-industries-by-an-average-of-38-percent-by-2035.htm
Accenture. 2018. Artificial intelligence is the future of growth. [Online]. [Accessed 4 June 2018]. Available from: https://www.accenture.com/au-en/insight-artificial-intelligence-future-growth
Adams, R.L. 2017. 10 Powerful Examples Of Artificial Intelligence In Use Today. Forbes. [Online]. 10 January. [Accessed 12 August 2018]. Available from: https://www.forbes.com/sites/robertadams/2017/01/10/10-powerful-examples-of-artificial-intelligence-in-use-today/#da60a10420de
Ajzen, I. 1991. The Theory of Planned Behavior. Organizational Behavior and Human Decision Processes. 50(2), pp. 179–211.
Alshenqeeti, H. 2014. Interviewing as a Data Collection Method: A Critical Review. English Linguistics Research. 3(1), pp. 39–45.
Balaji, M.S. and Roy, S.K. 2017. Value co-creation with Internet of things technology in the retail industry. Journal of Marketing Management. 33(1–2), pp. 7–31.
Bashforth, G. 2018. Is artificial intelligence the future of financial services? [Online]. [Accessed 30 April 2018]. Available from: http://www.caymanfinancialreview.com/2018/01/29/is-artificial-intelligence-the-future-of-financial-services/
Batra, G. et al. 2018. Artificial intelligence: The time to act is now. [Online]. [Accessed 23 June 2018]. Available from: https://www.mckinsey.com/industries/advanced-electronics/our-insights/artificial-intelligence-the-time-to-act-is-now
BBC. 2016. Artificial intelligence: Go master Lee Se-dol wins against AlphaGo program. BBC. [Online]. 13 March. [Accessed 13 August 2018]. Available from: https://www.bbc.co.uk/news/technology-35797102
Beckett, V. 2017. How treasurers can use AI to simplify invoicing. [Online]. [Accessed 10 August 2018]. Available from: https://www.theglobaltreasurer.com/2017/09/12/howtreasurers-can-use-ai-to-simplify-invoicing/
Bench-Capon, T.J.M. and Dunne, P.E. 2007. Argumentation in artificial intelligence. Artificial Intelligence. 171(10–15), pp. 619–641.
Berry, L.L. et al. 2006. Creating New Markets Through Service Innovation. MIT Sloan Management Review. 47(2), pp. 56–63.
Bloomberg Intelligence. 2017. A new era: Artificial intelligence is now the biggest tech disrupter. Bloomberg. [Online]. 6 October. [Accessed 26 August 2018]. Available from: https://www.bloomberg.com/professional/blog/new-era-artificial-intelligence-now-biggest-tech-disrupter/
Boonsiritomachai, W. and Pitchayadejanant, K. 2017. Determinants affecting mobile banking adoption by generation Y based on the Unified Theory of Acceptance and Use of Technology Model modified by the Technology Acceptance Model concept. Kasetsart Journal of Social Sciences. 38(1), pp. 1–10.
Brown, K.M. et al. 2008. Using Pretesting to Ensure Your Messages And Materials Are on Strategy. Health Promotion Practice. 9(2), pp. 116–122.
Bughin, J. et al. 2017. Artificial intelligence – The next digital frontier? New York: McKinsey Global Institute.
Chang, J.-L. et al. 2011. Factors influencing technology acceptance decision. African Journal of Business Management. 5(7), pp. 2901–2909.
Cheng, T.C.E. et al. 2006. Adoption of internet banking: an empirical study in Hong Kong. Decision Support Systems. 42(3), pp. 1558–1572.
Chin, L.P. 2015. Consumers Intention to Use a Single Platform E-Payment System: A Study Among Malaysian Internet and Mobile Banking Users. Journal of Internet Banking and Commerce. 20(1), pp. 1–13.
Chinner, V. 2018. Artificial Intelligence And The Future Of Financial Fraud Detection. Forbes. [Online]. 4 June. [Accessed 10 August 2018]. Available from: https://www.forbes.com/sites/theyec/2018/06/04/artificial-intelligence-and-the-future-of-financial-fraud-detection/#2da1d7b5127a
Chui, M. et al. 2018a. What AI can and can't do (yet) for your business. [Online]. [Accessed 20 June 2018]. Available from: https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/what-ai-can-and-cant-do-yet-for-your-business
Chui, M. et al. 2018b. Notes from the AI Frontier: Insights from hundreds of use cases. New York: McKinsey Global Institute.
Crotty, M. 1998. The Foundations of Social Research: Meaning and Perspective in the Research Process. London: Sage.
Culp, S. 2018. Banks Need New Approaches In Complying With Financial Crimes Regulations. Forbes. [Online]. 5 March. [Accessed 13 August 2018]. Available from: https://www.forbes.com/sites/steveculp/2018/03/05/banks-need-new-approaches-in-complying-with-financial-crimes-regulations/#25d4bc594147
Dahlman, C. 2007. Technology, globalization, and international competitiveness: Challenges for developing countries. In: United Nations. ed. Industrial Development for the 21st Century: sustainable development perspectives. New York: United Nations Publications, pp. 29–83.
Das, P. et al. 2018. Barriers to innovation within large financial services firms: An in-depth study into disruptive and radical innovation projects at a bank. European Journal of Innovation Management. 21(1), pp. 96–112.
Davis, F.D. 1989. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly. 13(3), pp. 319–340.
Dillon, A. 2001. User acceptance of information technology. In: Karwowski, W. ed. Encyclopedia of Human Factors and Ergonomics. London: Taylor & Francis.
Economist. 2016. Million-dollar babies. The Economist. [Online]. 2 April. [Accessed 19 June 2018]. Available from: https://www.economist.com/business/2016/04/02/million-dollar-babies
Ehrmantraut, M. 2018. AI ist der Key Differentiator für Geschäfts-Innovationen. Unpublished.
European Political Strategy Centre. 2018. The Age of Artificial Intelligence: Towards a European Strategy for Human-Centric Machines. Luxemburg: Publications Office of the European Union.
Fast, E. and Horvitz, E. 2017. Long-Term Trends in the Public Perception of Artificial Intelligence. In: Singh, S. and Markovitch, S. eds. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 4–9 February 2017, San Francisco. Palo Alto: AAAI Press, pp. 963–969.
Fishbein, M. and Ajzen, I. 1975. Belief, attitude, intention, and behavior: An introduction to theory and research. Reading: Addison-Wesley.
Forbes Technology Council. 2018. 14 Ways AI Will Benefit Or Harm Society. Forbes. [Online]. 1 March. [Accessed 13 May 2018]. Available from: https://www.forbes.com/sites/forbestechcouncil/2018/03/01/14-ways-ai-will-benefit-or-harm-society/#53e6664ef09b
Fraunhofer. 2018. Maschinelles Lernen: Eine Analyse zu Kompetenzen, Forschung und Anwendung. Munich: Fraunhofer-Verlag.
Gadaleta, F. 2017. How to Fail with Artificial Intelligence. 6 April. Medium. [Online]. [Accessed 13 August 2018]. Available from: https://medium.com/money-talks-the-official-abe-blog/how-to-fail-with-artificial-intelligence-b3c4b1966bb3
Gartner. 2017a. Gartner Identifies the Top 10 Strategic Technology Trends for 2018. [Press release]. [Accessed 13 May 2018]. Available from: https://www.gartner.com/newsroom/id/3812063
Gartner. 2017b. Gartner Says By 2020, Artificial Intelligence Will Create More Jobs Than It Eliminates. [Press release]. [Accessed 29 August 2018]. Available from: https://www.gartner.com/newsroom/id/3837763
Gartner. 2018. Gartner Says Global Artificial Intelligence Business Value to Reach $1.2 Trillion in 2018. [Press release]. [Accessed 3 June 2018]. Available from: https://www.gartner.com/newsroom/id/3872933
Genpact. 2017. The consumer: Sees AI benefits but still prefers the human touch. New York: Genpact Research Institute.
Gergen, K.J. 1994. Realities and Relationships: Soundings in Social Construction. Cambridge: Harvard University Press.
Godoe, P. and Johansen, T.S. 2012. Understanding adoption of new technologies: Technology readiness and technology acceptance as an integrated concept. Journal of European Psychology Students. 3(1), pp. 38–52.
Greenwald, T. 2011. How Smart Machines Like iPhone 4S Are Quietly Changing Your Industry. Forbes. [Online]. 13 October. [Accessed 3 June 2018]. Available from: https://www.forbes.com/sites/tedgreenwald/2011/10/13/how-smart-machines-like-iphone-4sare-quietly-changing-your-industry/#2ec071fe598f
Halsey, E.D. 2017. What Does AI Actually Cost? 30 May. Medium. [Online]. [Accessed 13 August 2018]. Available from: https://medium.com/source-institute/what-does-ai-actually-cost-af6a3e5a1795
Harding, L. and Barden, L. 2011. Deep Blue win a giant step for computerkind. The Guardian. [Online]. 12 May. [Accessed 13 August 2018]. Available from: https://www.theguardian.com/theguardian/2011/may/12/deep-blue-beats-kasparov-1997
Harris, M.C. 2010. Artificial Intelligence. New York: Marshall Cavendish.
Hennink, M. et al. 2011. Qualitative Research Methods. London: Sage.
Hof, R.D. 2013. Deep Learning: With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart. MIT Technology Review. [Online]. 23 April. [Accessed 12 August 2018]. Available from: https://www.technologyreview.com/s/513696/deep-learning/
Huber, J.-A. et al. 2018. Cutting Through Complexity In Financial Crimes Compliance. Forbes. [Online]. 14 February. [Accessed 8 June 2018]. Available from: https://www.forbes.com/sites/baininsights/2018/02/14/cutting-through-complexity-in-financial-crimes-compliance/#3800b232588d
Hulick, K. 2016. Artificial Intelligence: Cutting-edge science and technology. Minneapolis: Abdo.
IBM. 2018. Cognitive Solutions Pattern Overview. [Online]. [Accessed 2 June 2018]. Available from: https://cloudpatterns.w3bmix.ibm.com/#862581D00083654D/862581D00083655E
Innovation Center Denmark. 2018. Defining Artificial Intelligence. [Online]. [Accessed 3 June 2018]. Available from: http://www.icdk.us/aai
Keller, J. 2018. From digitization to algorithmisation: How a Chatbot can combine RPA, AI and ERP. [Online]. [Accessed 9 June 2018]. Available from: https://www.capgemini.com/consulting-de/2018/01/how-a-chatbot-can-combine-rpa-ai-and-erp/
Kessler, S.K. and Martin, M. 2017. How do potential users perceive the adoption of new technologies within the field of Artificial Intelligence and Internet-of-Things? MSc thesis, Lund University.
King, N. and Horrocks, C. 2010. Interviews in Qualitative Research. London: Sage.
Knight, W. 2017. The Dark Secret at the Heart of AI. MIT Technology Review. [Online]. 11 April. [Accessed 2 September 2018]. Available from: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
Kobsa, A. 2002. Personalized Hypermedia and International Privacy. Communications of the ACM. 45(5), pp. 64–67.
Kopaničová, J. and Klepochová, D. 2016. Consumers in New Millennium: Attitudes towards Adoption of New Technologies in Purchasing Process. Studia commercialia Bratislavensia. 9(33), pp. 65–74.
Kroker, M. 2018. Künstliche Intelligenz auf Rang 4 der Prioritätenliste von Firmen – vor Digitalisierung. 26 April. Wirtschaftswoche. [Online]. [Accessed 11 June 2018]. Available from: http://blog.wiwo.de/look-at-it/2018/04/26/kuenstliche-intelligenz-auf-rang-4-der-prioritaetenliste-von-firmen-vor-digitalisierung/
Kuepper, D. et al. 2018. AI in the Factory of the Future. [Online]. [Accessed 10 June 2018]. Available from: https://www.bcg.com/publications/2018/artificial-intelligence-factory-future.aspx
Lee, A.S. and Baskerville, R.L. 2012. Conceptualizing Generalizability: New Contributions and A Reply. MIS Quarterly. 36(3), pp. A1–A7.
Lincoln, Y.S. et al. 2011. Paradigmatic Controversies, Contradictions, and Emerging Confluences. In: Denzin, N.K. and Lincoln, Y.S. eds. The Sage Handbook of Qualitative Research. 4th ed. Thousand Oaks: Sage, pp. 97–128.
Mannino, A. et al. 2015. Artificial Intelligence: Opportunities and Risks. Berlin: Foundational Research Institute.
Markoff, J. 2011. Computer Wins on ‘Jeopardy!’: Trivial, It’s Not. New York Times. [Online]. 16 February. [Accessed 13 August 2018]. Available from: https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html
Marr, B. 2016. What Is The Difference Between Artificial Intelligence And Machine Learning? Forbes. [Online]. 6 December. [Accessed 3 June 2018]. Available from: https://www.forbes.com/sites/bernardmarr/2016/12/06/what-is-the-difference-between-artificial-intelligence-and-machine-learning/#b0a37a2742b4
Marr, B. 2017. The Biggest Challenges Facing Artificial Intelligence (AI) In Business And Society. Huffington Post. [Online]. 24 September. [Accessed 10 June 2018]. Available from: https://www.huffingtonpost.com/entry/the-biggest-challenges-facing-artificial-intelligence_us_59afd047e4b0d0c16bb528d3
Marr, B. 2018a. The Key Definitions Of Artificial Intelligence (AI) That Explain Its Importance. Forbes. [Online]. 14 February. [Accessed 13 June 2018]. Available from: https://www.forbes.com/sites/bernardmarr/2018/02/14/the-key-definitions-of-artificial-intelligence-ai-that-explain-its-importance/#39e9e9094f5d
Marr, B. 2018b. The AI Skills Crisis And How To Close The Gap. Forbes. [Online]. 25 June. [Accessed 17 August 2018]. Available from: https://www.forbes.com/sites/bernardmarr/2018/06/25/the-ai-skills-crisis-and-how-to-close-the-gap/#7f02aacd31f3
McClelland, C. 2017. The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning. Medium. [Online]. 4 December. [Accessed 3 June 2018]. Available from: https://medium.com/iotforall/the-difference-between-artificial-intelligence-machine-learning-and-deep-learning-3aa67bff5991
Medium. 2017. Risks of Artificial Intelligence. Medium. [Online]. 31 July. [Accessed 10 June 2018]. Available from: https://medium.com/@thinkingwires/risks-of-artificial-intelligence-bb948fe9aa57
Mendoza, J. 2018. Hybrid Intelligence: the quickest path to A.I. adoption. Medium. [Online]. 10 January. [Accessed 9 June 2018]. Available from: https://medium.com/swlh/hybrid-intelligence-path-to-ai-adoption-453379f9d9a5
Merriam-Webster. 2018. artificial intelligence. [Online]. [Accessed 2 June 2018]. Available from: https://www.merriam-webster.com/dictionary/artificial%20intelligence
Meuser, M. and Nagel, U. 1991. ExpertInneninterviews – vielfach erprobt, wenig bedacht: Ein Beitrag zur qualitativen Methodendiskussion. In: Garz, D. and Kraimer, K. eds. Qualitativ-empirische Sozialforschung. Konzepte, Methoden, Analysen. Opladen: Westdeutscher Verlag, pp. 441–471.
Meuter, M.L. et al. 2000. Self-Service Technologies: Understanding Customer Satisfaction with Technology-Based Service Encounters. Journal of Marketing. 64(3), pp. 50–64.
Meyer, D. 2018. AI Has a Big Privacy Problem and Europe’s New Data Protection Law Is About to Expose It. Fortune. [Online]. 25 May. [Accessed 11 June 2018]. Available from: http://fortune.com/2018/05/25/ai-machine-learning-privacy-gdpr/
Mills, C. 2017. Predictive analytics in fraud and AML. Journal of Financial Compliance. 1(1), pp. 17–26.
MIT Technology Review Insights. 2016. AI Gets More Real, Thanks to Contextual Deep Learning. MIT Technology Review. [Online]. 4 May. [Accessed 6 June 2018]. Available from: https://www.technologyreview.com/s/601396/ai-gets-more-real-thanks-to-contextual-deep-learning/
National Science and Technology Council. 2016. Preparing for the future of artificial intelligence. Washington: Executive Office of the President.
Niglas, K. 2010. The Multidimensional Model of Research Methodology: An Integrated Set of Continua. In: Tashakkori, A. and Teddlie, C. eds. The Sage Handbook of Mixed Methods in Social & Behavioral Research. 2nd ed. Thousand Oaks: Sage, pp. 215–236.
Nogrady, B. 2016. The real risks of artificial intelligence. BBC. [Online]. 10 November. [Accessed 10 June 2018]. Available from: http://www.bbc.com/future/story/20161110-the-real-risks-of-artificial-intelligence
Noonan, L. 2018. AI in banking: the reality behind the hype. Financial Times. [Online]. 12 April. [Accessed 8 June 2018]. Available from: https://www.ft.com/content/b497a134-2d21-11e8-a34a-7e7563b0b0f4
Oxford Dictionary. 2018. artificial intelligence. [Online]. [Accessed 2 June 2018]. Available from: https://en.oxforddictionaries.com/definition/artificial_intelligence
Partnership on AI. 2018. Frequently Asked Questions. [Online]. [Accessed 18 August 2018]. Available from: https://www.partnershiponai.org/faq/
Pavlou, P.A. et al. 2007. Understanding and Mitigating Uncertainty In Online Exchange Relationships: A Principal-Agent Perspective. MIS Quarterly. 31(1), pp. 105–136.
Pfadenhauer, M. 2009. At Eye Level: The Expert Interview – a Talk between Expert and Quasi-expert. In: Bogner, A. et al. eds. Interviewing Experts. London: Palgrave Macmillan, pp. 81–97.
Pieters, W. 2011. Explanation and trust: what to tell the user in security and AI? Ethics and Information Technology. 13(1), pp. 53–64.
Pijpers, G.G.M. et al. 2001. Senior executives’ use of information technology. Information and Software Technology. 43(15), pp. 959–971.
Plastino, E. and Purdy, M. 2018. Game changing value from Artificial Intelligence: eight strategies. Strategy & Leadership. 46(1), pp. 16–22.
Poole, D.L. and Mackworth, A.K. 2017. Artificial Intelligence: Foundations of Computational Agents. 2nd ed. Cambridge: Cambridge University Press.
Porter, M.E. 2008. The Five Competitive Forces that Shape Strategy. Harvard Business Review. 88(1), pp. 78–93.
PwC. 2017. Top financial services issues of 2018. US: Financial Services Institute.
Ram, S. and Sheth, J.N. 1989. Consumer Resistance to Innovations: The Marketing Problem and its Solutions. Journal of Consumer Marketing. 6(2), pp. 5–14.
Rankin, J. 2018. Artificial intelligence: €20bn investment call from EU commission. The Guardian. [Online]. 25 April. [Accessed 13 August 2018]. Available from: https://www.theguardian.com/technology/2018/apr/25/european-commission-ai-artificial-intelligence
Ransbotham, S. et al. 2017. Reshaping Business With Artificial Intelligence. MIT Sloan Management Review. [Online]. 6 September. [Accessed 31 August 2018]. Available from: https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/
Reavie, V. 2018. Do You Know The Difference Between Data Analytics And AI Machine Learning? Forbes. [Online]. 1 August. [Accessed 10 August 2018]. Available from: https://www.forbes.com/sites/forbesagencycouncil/2018/08/01/do-you-know-the-difference-between-data-analytics-and-ai-machine-learning/#23bfb3f35878
Reeves, M. 2018. How AI Will Reshape Companies, Industries, and Nations. [Online]. [Accessed 10 June 2018]. Available from: https://www.bcg.com/de-de/publications/2018/artificial-intelligence-will-reshape-companies-industries-nations-interview-kai-fu-lee.aspx
Rogers, E.M. 1995. Diffusion of Innovations. 4th ed. New York: Free Press.
Saunders, M. et al. 2016. Research Methods for Business Students. 7th ed. Harlow: Pearson.
Scardovi, C. 2017. Digital Transformation in Financial Services. Berlin: Springer.
Schneider, S. 2017. Deutsche Bank debuts robo-advisor for retail market. Handelsblatt. [Online]. 12 December. [Accessed 9 June 2018]. Available from: https://chinacircle.handelsblatt.com/deutsche-bank-debuts-robo-advisor-for-retail-market/
Schoeman, W. 2018. Why artificial intelligence is the future of growth. [Online]. [Accessed 26 August 2018]. Available from: https://www.accenture.com/za-en/company-news-release-why-artificial-intelligence-future-growth
Shabbir, J. and Anwer, T. 2015. Artificial Intelligence and its Role in Near Future. Journal of Latex Class Files. 14(8), pp. 1–11.
Sharma, K. 2017. Everyone is freaking out about artificial intelligence stealing jobs and leading to war — and totally missing the point. [Online]. [Accessed 5 June 2018]. Available from: https://www.businessinsider.de/artificial-intelligence-is-a-useful-tool-for-business-and-consumers-2017-11?r=US&IR=T
Solon, O. 2017. Killer Robots? Musk and Zuckerberg Escalate Row over Dangers of AI. The Guardian. [Online]. 25 July. [Accessed 20 June 2018]. Available from: https://www.theguardian.com/technology/2017/jul/25/elon-musk-mark-zuckerberg-artificial-intelligence-facebook-tesla
Sovereign Wealth Fund Institute. 2018. SWFI Robo-Advisor League Table. [Online]. [Accessed 9 June 2018]. Available from: https://www.swfinstitute.org/fund-rankings/swfi-robo-advisor-league-table/
Steup, M. 2005. Epistemology. [Online]. [Accessed 25 June 2018]. Available from: https://plato.stanford.edu/entries/epistemology/
Storholm, K. 2018. Financial Services. [Online]. [Accessed 6 June 2018]. Available from: https://www.rolandberger.com/en/Expertise/Industries/Financial-Services/
Symons, S. 2017. When Chatbots meet RPA bots. Medium. [Online]. 19 October. [Accessed 9 June 2018]. Available from: https://medium.com/roborana/when-chatbots-meet-rpa-bots-c8b22579f49a
Taherdoost, H. 2018. A review of technology acceptance and adoption models and theories. Procedia Manufacturing. 22, pp. 960–967.
The Week. 2017. Stephen Hawking: humanity could be destroyed by AI. The Week. [Online]. 7 November. [Accessed 13 August 2018]. Available from: http://www.theweek.co.uk/artificial-intelligence/86843/stephen-hawking-humanity-could-be-destroyed-by-ai
Thim, C. 2017. Technologieakzeptanz in Organisationen: Ein Simulationsansatz. PhD thesis. University of Potsdam.
Venkatesh, V. et al. 2003. User Acceptance of Information Technology: Toward A Unified View. MIS Quarterly. 27(3), pp. 425–478.
Vogelsang, K. et al. 2013. Theorieentwicklung in der Akzeptanzforschung: Entwicklung eines Modells auf Basis einer qualitativen Studie. In: Alt, R. and Franczyk, B. eds. Proceedings of the 11th International Conference on Wirtschaftsinformatik, 27 February – 1 March 2013, Leipzig. Leipzig: University of Leipzig, pp. 1425–1439.
Wahdain, E.A. and Ahmad, M.N. 2014. User Acceptance of Information Technology: Factors, Theories and Applications. Journal of Information Systems Research and Innovation. 6(1), pp. 17–25.
Walker, B. and Soule, S.A. 2017. Changing Company Culture Requires a Movement, Not a Mandate. Harvard Business Review. [Online]. 20 June. [Accessed 11 August 2018]. Available from: https://hbr.org/2017/06/changing-company-culture-requires-a-movement-not-a-mandate
Washington, M. and Hacker, M. 2005. Why change fails: knowledge counts. Leadership & Organization Development Journal. 26(5), pp. 400–411.
Whitby, B. 2009. Artificial Intelligence. New York: The Rosen Publishing.
Wu, P.F. 2012. A Mixed Methods Approach to Technology Acceptance Research. Journal of the Association for Information Systems. 13(3), pp. 172–187.
Zissner, O. 2018. Ein Wahnsinn: 500 neue Rechtsnormen – nur in 2017! Regulatorik in Zahlen und regulatorisches Management. [Online]. [Accessed 24 August 2018]. Available from: https://www.it-finanzmagazin.de/wahnsinn-500-neue-rechtsnormen-nur-in-2017-regulatorik-71749/
Zoellner, J. et al. 2008. Public Acceptance of renewable energies: Results from case studies in Germany. Energy Policy. 36(11), pp. 4136–4141.